Key Takeaways
- Researchers at the University of California, Berkeley, and the University of California, San Francisco developed an AI-powered brain-computer interface (BCI) to restore speech in paralyzed patients.
- Anne, a stroke survivor, participated in a trial in which electrodes placed on her brain recorded her neural activity, helping her regain her voice.
- The BCI converted Anne’s brain signals into speech in less than a second using custom machine-learning algorithms.
After two decades without speech, a woman paralyzed by a stroke has regained her voice with the help of an experimental brain-computer interface developed by researchers at the University of California, Berkeley, and the University of California, San Francisco.
The innovation addresses a long-standing challenge in speech neuroprostheses and was detailed in a study published in Nature Neuroscience. The research leveraged artificial intelligence to translate the participant Anne’s intended words into natural speech in real time, marking a major leap forward in real-time communication for people who have lost the ability to speak.
AI Restores Speech After 18 Years
Anne Johnson became paralyzed and lost her ability to speak after a stroke in 2005, when she was 30 years old. Eighteen years later, she consented to being surgically fitted with an experimental implant that links to the brain-computer interface (BCI). Researchers placed the implant over her motor cortex, the part of the brain that controls physical movement, and tracked the signals her brain emitted as she thought the words she wanted to speak.
The research team overcame the issue of latency, the delay between a person’s intent to speak and the generation of sound, by drawing on advances in artificial intelligence. They developed a streaming system that decodes neural signals into audible speech in real time.
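The article does not include the team’s code, but the core idea of streaming decoding can be sketched in a few lines: instead of buffering an entire sentence of neural data, the system decodes short windows as they arrive and voices them immediately. The sketch below is purely illustrative; the electrode count, window size, phoneme set, and the stand-in linear decoder are all assumptions, not details from the study.

```python
import numpy as np

# Illustrative sketch of streaming decoding, not the study's code.
# The electrode count, window size, phoneme set, and the stand-in
# linear "decoder" are all assumptions for demonstration.

rng = np.random.default_rng(0)

N_ELECTRODES = 253      # assumed electrode-grid size
CHUNK_SAMPLES = 16      # roughly 80 ms at an assumed 200 Hz feature rate
PHONEMES = ["AH", "B", "K", "S", "T", "_"]  # "_" stands for silence

W = rng.normal(size=(N_ELECTRODES, len(PHONEMES)))  # stand-in decoder weights

def decode_chunk(chunk):
    """Map one short window of neural features to the likeliest speech unit."""
    logits = chunk.mean(axis=0) @ W
    return PHONEMES[int(np.argmax(logits))]

def neural_stream(n_chunks):
    """Simulated electrode recording, delivered one chunk at a time."""
    for _ in range(n_chunks):
        yield rng.normal(size=(CHUNK_SAMPLES, N_ELECTRODES))

# Streaming loop: each window is decoded and "voiced" as soon as it arrives,
# so latency is bounded by the window length, not the sentence length.
for chunk in neural_stream(10):
    unit = decode_chunk(chunk)
    if unit != "_":
        print(unit, end=" ", flush=True)  # a real system would synthesize audio here
print()
```

The key design point is in the loop: because each window is voiced as soon as it is decoded, output latency scales with the window length rather than with the length of the utterance.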
“Our streaming system brings the same rapid speech-decoding capacity of Alexa and Siri to neuroprostheses,” explained Gopala Anumanchipalli, co-principal investigator and assistant professor at the University of California, Berkeley. “Unlike vision, motion, or hunger, which we have in common with other species, speech truly sets us apart. It is yet to be discovered how intelligent behavior emerges from neurons and cortical tissue.”
The study used a brain-computer interface to create a direct link between the electrical signals in Anne’s brain and a computer. The interface decodes those signals through a grid of electrodes placed over the brain’s speech center.
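The article does not specify which features the electrode grid feeds to the decoder, but speech BCIs of this kind commonly extract high-gamma band power from each electrode. The sketch below illustrates that common pipeline under assumed parameters (sampling rate, band edges, electrode count); it is not taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# A common ECoG feature pipeline, shown as an assumption about how a system
# like this might work; the article does not specify the features used.
# High-gamma band power per electrode is a standard choice in speech BCIs.

FS = 1000.0  # assumed sampling rate in Hz

def high_gamma_power(signals, low=70.0, high=150.0):
    """signals: (samples, electrodes) raw voltages -> mean power per electrode."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, signals, axis=0)  # zero-phase band-pass filter
    return (filtered ** 2).mean(axis=0)

raw = np.random.default_rng(1).normal(size=(2000, 253))  # 2 s of synthetic data
features = high_gamma_power(raw)
print(features.shape)  # (253,): one feature per electrode in the grid
```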
The research team had previously worked with Anne to produce speech through an automated voice and a digital avatar. That earlier system took eight seconds to decode her brain patterns and spoke only complete sentences at a time. The improved experimental device can identify words from brain activity and translate them into speech within a second, according to Nature News.
Anumanchipalli noted that while major progress has been made in creating artificial limbs, restoring speech is still a complex task.
Machine Learning And Artificial Intelligence
Highlighting the need for instant responses in conversation, Anumanchipalli explained that the brain-computer interface converted Anne’s brain signals into speech in less than a second using machine learning and custom AI algorithms. While the technology enabled Anne to speak, Anumanchipalli credited her with doing the most difficult part of the process.
“Anne is the actual driver here. Her brain does all the heavy lifting, and we are just trying to comprehend what it is trying to do. Even though AI (artificial intelligence) fills in some of the gaps, Anne is the main character. The brain was built for fluid communication and evolved over millions of years to do so.”
To train the artificial intelligence, researchers had Anne silently mouth phrases displayed on a screen, drawn from a 1,024-word vocabulary, which the system learned to interpret.
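In outline, this is a supervised-learning setup: each training example pairs the neural activity recorded while Anne mouthed a prompted phrase with the words of that phrase. The toy sketch below illustrates the idea with a stand-in linear classifier over a 1,024-word vocabulary and synthetic data; the study’s actual architecture and features are not described in the article, and everything here is an assumption.

```python
import numpy as np

# Toy supervised-training setup, not the study's model. Each example pairs
# neural features recorded while a prompted phrase was mouthed with the
# identity of the prompted word, drawn from a fixed 1,024-word vocabulary.
# The features, labels, and linear softmax classifier are all stand-ins.

rng = np.random.default_rng(2)
VOCAB_SIZE = 1024
N_FEATURES = 253  # assumed: one feature per electrode

X = rng.normal(size=(2000, N_FEATURES))           # synthetic neural features
y = rng.integers(0, VOCAB_SIZE, size=2000)        # synthetic word labels

W = np.zeros((N_FEATURES, VOCAB_SIZE))
for _ in range(20):  # plain gradient descent on softmax cross-entropy
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0            # gradient of the loss w.r.t. logits
    W -= 0.01 * (X.T @ probs) / len(y)

# At inference time, the decoder picks the most probable vocabulary entry.
print(int(np.argmax(X[:1] @ W, axis=1)[0]))
```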
Rather than relying on publicly available artificial intelligence models, the team developed a system from scratch, customized to Anne. “We have not used anything readily available. Everything we used is specially made just for Anne. We are not acquiring AI from any other company,” Anumanchipalli said.
Anne’s breakthrough is just one part of a broader movement in brain-computer interface research, which has caught the attention of major players in neuroscience and tech, including Elon Musk’s Neuralink.
Privacy, A Crucial Aspect
Anumanchipalli said developing a proprietary AI was driven not only by customization but also by the need to protect user privacy.
“The aim is to protect privacy. We are not sharing her signals with a firm in Silicon Valley. We are building software that stays with her,” he said. “Over time, this will become a standalone device where no one else can control what she is trying to speak.”
Anumanchipalli stressed the significance of public funding for advances in brain-computer interface research. Looking ahead, he hopes researchers will continue to focus on restoring speech through technology.