A brainstem stroke following a horrible car crash left a 20-year-old man paralyzed, robbing him of speech. Eighteen years after his dreadful accident, the man is now able to communicate with the outside world thanks to a medical implant that converts brain waves into sentences on a computer. Although this is just a proof-of-concept, the research is extremely promising, suggesting it may one day be possible to restore sophisticated communication abilities to people who became speech-impaired because of an injury.
“Most of us take for granted how easily we communicate through speech,” Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, told the Associated Press. “It’s exciting to think we’re at the very beginning of a new chapter, a new field.”
People who are paralyzed and have a speech disability have very limited options for communication. The patient in this new study, for instance, communicated by pecking at a touchscreen with a pointer attached to a baseball cap to type out words or letters. Other patients who cannot even move their necks rely on devices that track eye movements and translate them into cursor movements to select words or letters on a computer screen.
While these options allow paralyzed patients a semblance of connection with the outside world, they’re painfully slow. This is where brain-computer interfaces come in. Their jaw-dropping ability to transform neural activity into actionable output has been impressive, to say the least.
These include implants that translate a patient’s imagined handwriting, as if they were writing a sentence by hand with a pen, into actual text on a computer screen. Brain-computer interfaces can also be used by paralyzed patients to control mechanical arms, exoskeletons, and even drones. Such interfaces can also facilitate a telepathic-like exchange of information between two people.
Rather than making a mind-controlled prosthetic, Chang and colleagues’ work centers on a neuroprosthetic for speech. The device decodes the brainwaves that normally control the subtle movements of the lips, jaw, tongue, and larynx to form sounds, and turns them into words or entire sentences on a computer screen.
After electrodes were implanted on the surface of the patient’s brain area responsible for controlling speech, a computer algorithm was trained on his neural patterns as he attempted to say common words such as “water” or “good”. The training took place over the course of 50 sessions spread across almost two years.
The algorithm was thus taught to associate specific brain wave patterns with 50 words that could be combined into more than 1,000 sentences. Chang’s lab had previously spent years mapping the brain areas responsible for speech, so the team had plenty of experience to draw on.
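To make the decoding idea concrete, here is a deliberately simplified sketch of how windows of neural activity could be mapped onto a small word vocabulary and strung into a sentence. It is purely illustrative: the vocabulary subset, feature size, nearest-centroid classifier, and synthetic data are all invented for this example, and the actual system described in the paper relies on far more sophisticated neural-network decoders combined with language modeling.

```python
# Illustrative sketch only: decoding attempted-speech windows into words.
# Vocabulary subset, feature size, and the classifier are assumptions made
# for this example, not the method used in the published study.
import numpy as np

VOCAB = ["water", "good", "thirsty", "am", "very", "no", "i", "not"]  # subset of a 50-word set
N_FEATURES = 128  # e.g. band-power features across electrode channels (assumed)

rng = np.random.default_rng(0)

# Stand-in "training": one average neural-feature vector (centroid) per word,
# as if learned from repeated attempts to say that word during training sessions.
centroids = {word: rng.normal(size=N_FEATURES) for word in VOCAB}

def decode_word(neural_window: np.ndarray) -> str:
    """Map one window of neural features to the closest known word (nearest centroid)."""
    distances = {w: np.linalg.norm(neural_window - c) for w, c in centroids.items()}
    return min(distances, key=distances.get)

def decode_sentence(windows: list) -> str:
    """Decode a sequence of attempted-speech windows into a sentence."""
    return " ".join(decode_word(w) for w in windows)

# Simulate the patient attempting "i am very good": noisy copies of the word centroids.
attempt = [centroids[w] + 0.1 * rng.normal(size=N_FEATURES) for w in ["i", "am", "very", "good"]]
print(decode_sentence(attempt))  # -> "i am very good"
```

The real decoder also has to segment continuous brain activity into word attempts and weigh likely word sequences against each other, which is where the large training set of sentences comes in; none of that complexity is captured in this toy example.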
For instance, when prompted with questions like ‘How are you today?’ or ‘Are you thirsty?’, the man answered ‘Am very good’ or ‘No, I am not thirsty’ using the text-based communication enabled by the device that read his thoughts.
It takes three to four seconds for the words imagined by the patient to appear on the computer screen. That’s not nearly as fast as speaking but still much faster than tapping out a response, the researchers explained in a paper published in the New England Journal of Medicine.
The prototype could be refined and turned into a device that helps people with injuries, strokes, or illnesses like Lou Gehrig’s disease that interfere with the delivery of messages from the brain to the vocal tract.
The researchers plan on improving the speed, accuracy, and vocabulary size of their algorithm. The goal is to have a device that generates voice rather than text on a screen.