
In a California hospital, a woman who hadn’t spoken in nearly two decades silently mouthed the words, “Why did he tell you?” Moments later, a synthetic voice — trained on a single clip recorded before a stroke robbed her of speech — spoke them aloud.
The words weren’t typed or selected from a menu. They came directly from her brain.
Researchers at the University of California, San Francisco, have unveiled a brain implant that translates thoughts into speech at near-conversational speed. The development marks a turning point for brain–computer interfaces, or BCIs — technologies that decode neural signals to help people communicate.
“This is where we are right now,” Edward Chang, a neurosurgeon and co-author of the study, told Nature. “But you can imagine, with more sensors, with more precision and with enhanced signal processing, those things are only going to change and get better.”
A Break in Silence
The patient, a woman named Ann, lost her ability to speak after a brainstem stroke in 2005. In the new study, she underwent surgery to have a paper-thin implant placed on her brain’s surface, packed with 253 electrodes. The array sat on her cerebral cortex, where speech-related neural activity originates. Every 80 milliseconds, it recorded the firework-like bursts of activity as she mouthed words silently.
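As a back-of-the-envelope illustration (not the authors' code), chopping a multichannel recording into 80-millisecond decoding windows might look like the sketch below. The 253-channel count comes from the study; the sampling rate is an assumption.

```python
import numpy as np

# Hypothetical parameters: the study reports 253 electrodes and 80 ms
# decoding steps; the 1,000 Hz sampling rate here is an assumption.
N_CHANNELS = 253
SAMPLE_RATE_HZ = 1_000
WINDOW_MS = 80
WINDOW_SAMPLES = SAMPLE_RATE_HZ * WINDOW_MS // 1000  # 80 samples

# Fake data standing in for cortical activity: (channels, samples).
recording = np.random.randn(N_CHANNELS, 10 * SAMPLE_RATE_HZ)

# Slice the stream into non-overlapping 80 ms windows, one per decoding step.
n_windows = recording.shape[1] // WINDOW_SAMPLES
windows = recording[:, : n_windows * WINDOW_SAMPLES].reshape(
    N_CHANNELS, n_windows, WINDOW_SAMPLES
).transpose(1, 0, 2)  # (windows, channels, samples)

print(windows.shape)  # (125, 253, 80)
```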
To make sense of the recorded neural patterns, the team turned to artificial intelligence. They trained algorithms to recognize patterns in Ann’s brain signals and link them with specific sounds, words, and phrases.
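To make the idea concrete, here is a minimal sketch of that kind of sequence model, assuming PyTorch, a toy recurrent architecture, and an invented phoneme inventory; the study's actual model and training setup are more sophisticated.

```python
import torch
import torch.nn as nn

N_CHANNELS = 253   # electrodes, per the study
N_PHONEMES = 41    # hypothetical phoneme inventory size

class NeuralSpeechDecoder(nn.Module):
    """Toy recurrent decoder: neural features in, phoneme logits out."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, channels), one step per 80 ms window
        out, _ = self.rnn(x)
        return self.head(out)  # (batch, time_steps, phoneme logits)

decoder = NeuralSpeechDecoder()
features = torch.randn(1, 50, N_CHANNELS)  # 50 windows = 4 seconds
print(decoder(features).shape)  # torch.Size([1, 50, 41])
```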
Previous neuroprosthetics often relied on predicting entire sentences before generating any output, introducing long delays. In contrast, the new system processes brain signals in about as much time as it takes to blink.

The result is speech that streams in near real-time, at rates up to 90 words per minute for certain phrase sets. That’s more than triple the speed of her previous assistive device, which required nearly 23 seconds per sentence. The system now converts internal speech into audible language in just under three seconds.
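Schematically, the shift is from buffer-then-decode to a streaming loop that emits a fragment per chunk. The sketch below uses hypothetical stand-in functions purely to show the control flow:

```python
from typing import Iterable, Iterator, List

def decode(neural_chunk: List[float]) -> str:
    """Stand-in for the trained decoder; returns a text fragment."""
    return "<word>"

def sentence_level_decode(chunks: Iterable[List[float]]) -> str:
    # Old approach: buffer the entire utterance, then decode once.
    # Latency grows with sentence length (nearly 23 s previously).
    buffered = [sample for chunk in chunks for sample in chunk]
    return decode(buffered)

def streaming_decode(chunks: Iterable[List[float]]) -> Iterator[str]:
    # New approach: decode every 80 ms chunk as it arrives, so audible
    # output starts within seconds instead of after the full sentence.
    for chunk in chunks:
        yield decode(chunk)

stream = ([0.0] * 80 for _ in range(5))  # five fake 80 ms chunks
for fragment in streaming_decode(stream):
    print(fragment, end=" ")
```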
Even more striking, the researchers restored her own voice.
Regaining a Lost Voice
Using audio from her wedding video, the researchers crafted a synthetic voice modeled on how she used to sound. When the computer spoke, it was as if she had spoken herself.
“This is a big leap forward,” said Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands who was not involved in the work. “Older systems are like a WhatsApp conversation: I write a sentence, you write a sentence and you need some time to write a sentence again… It just doesn’t flow like a normal conversation.”
One of the system’s key achievements was operating without needing any sound from the user during training. Traditional models rely on audible speech to align brain signals with words. But that’s a nonstarter for those who can’t speak.
Instead, the team used a self-supervised speech model called HuBERT, which can learn phonetic patterns from audio without needing transcripts. They fed the system synthetic speech as a reference — like giving it a map with imagined roads — and let it figure out the terrain from neural signals alone.
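For readers who want the flavor of that recipe: the widely used public pipeline for HuBERT-style discrete units runs audio through a pretrained model and clusters the frame features with k-means. The sketch below follows that public recipe using torchaudio's pretrained HuBERT; the cluster count is arbitrary here, and this is not necessarily the authors' exact setup.

```python
import torch
import torchaudio
from sklearn.cluster import KMeans

# Pretrained self-supervised speech model (torchaudio's HuBERT release).
bundle = torchaudio.pipelines.HUBERT_BASE
model = bundle.get_model().eval()

# One second of noise standing in for the synthetic reference audio.
waveform = torch.randn(1, bundle.sample_rate) * 0.01

with torch.no_grad():
    # extract_features returns one feature tensor per transformer layer.
    features, _ = model.extract_features(waveform)
latent = features[-1].squeeze(0)  # (frames, feature_dim)

# Cluster frames into discrete "units" that serve as phonetic training
# targets, sidestepping the need for the user's own audible speech.
units = KMeans(n_clusters=8, n_init=10).fit_predict(latent.numpy())
print(units[:20])
```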
This breakthrough means the system could work even for people who’ve never been able to speak or those who lose speech early in life.
And unlike prior methods, which worked only in short bursts, the new system could decode free-form speech continuously for several minutes.
The researchers also tested how the system handled new words not seen during training — like “Zulu” and “Quebec” — and found it could generate intelligible speech over 46% of the time, far better than random.
What Comes Next?
So far, the streaming decoder has only been tested in one participant. The technology is still a prototype. While some generated sentences were flawless, others were garbled. In one case, the participant tried to say, “I just got here.” The decoder produced, “I’ve said to stash it.”
The current system works best with a limited vocabulary — 1,024 words and 50 preset phrases. And although it reacts faster than before, a noticeable delay still exists.
“When the delay is larger than 50 milliseconds, it starts to really confuse you,” Herff explained.
Still, the promise is clear. If refined, this could lead to clinical-grade neuroprosthetics that allow people with severe paralysis to communicate naturally again — not through robotic voices or alphabet boards, but in their own words and with their own voices.
The researchers are now working to test the system in more participants and improve its accuracy. They hope to shrink the hardware and make it more wearable. Eventually, such a device could operate like a smartphone app, offering real-time translation from thought to speech.
The findings appeared in the journal Nature Neuroscience.