
Invasive brain chips aren’t the only way to help patients with brain damage regain their ability to speak and communicate. A team of scientists at Meta has created an AI model that can decode a person’s brain activity and convert the sentences they intend to type into text.
The AI also sheds light on how the human brain turns thoughts into language. The researchers suggest their model represents a first and crucial step toward developing noninvasive brain-computer interfaces (BCIs).
“Modern neuroprostheses can now restore communication in patients who have lost the ability to speak or move. However, these invasive devices entail risks inherent to neurosurgery. Here, we introduce a non-invasive method to decode the production of sentences from brain activity,” the researchers note.
To demonstrate the capabilities of their AI system, the Meta team conducted two separate studies. Here’s how their system performed.
Turning brain signals into words

The first study involved 35 participants who watched letters appear on a screen and, after a cue, typed the sentence those letters formed from memory. The researchers used magnetoencephalography (MEG) to record the magnetic signals generated by the participants’ brains as they turned the memorized sentences into keystrokes.
Next, they trained an AI model, called Brain2Qwerty, on the MEG data. In a follow-up test, Brain2Qwerty had to predict the sentences participants were preparing to type as they read letters on a screen. Finally, the researchers compared the model’s output with the sentences the participants actually typed.
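The article doesn’t spell out Brain2Qwerty’s architecture, but the general recipe, mapping short windows of MEG sensor readings to predicted keystrokes with a neural network, can be sketched. The following is a minimal, hypothetical example in PyTorch; the class name, channel count, window length, and alphabet size are assumptions for illustration, not details from the study.

```python
# A minimal, illustrative sketch (not Meta's actual Brain2Qwerty code) of how a
# decoder might map windows of MEG sensor data to keystroke predictions.
# Assumed values (N_SENSORS, WINDOW_LEN, the ConvCharDecoder class) are
# hypothetical placeholders, not details from the study.
import torch
import torch.nn as nn

N_SENSORS = 208      # assumed number of MEG channels
WINDOW_LEN = 500     # assumed samples per window (~0.5 s at 1 kHz)
N_CLASSES = 29       # assumed alphabet: 26 letters + space + apostrophe + blank

class ConvCharDecoder(nn.Module):
    """Convolutional encoder over MEG windows followed by a character classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(N_SENSORS, 128, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),   # pool features over time
        )
        self.classifier = nn.Linear(128, N_CLASSES)

    def forward(self, meg):            # meg: (batch, N_SENSORS, WINDOW_LEN)
        features = self.encoder(meg).squeeze(-1)
        return self.classifier(features)   # per-character logits

# One training step on a (synthetic) batch of labelled MEG windows.
model = ConvCharDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

meg_batch = torch.randn(32, N_SENSORS, WINDOW_LEN)   # stand-in for real recordings
char_labels = torch.randint(0, N_CLASSES, (32,))     # stand-in for typed characters

logits = model(meg_batch)
loss = loss_fn(logits, char_labels)
loss.backward()
optimizer.step()
```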
Brain2Qwerty predicted the letters participants typed with 68% accuracy. It struggled most with sentences involving letters such as K and Z. When it made errors, however, it tended to guess letters that sit near the correct key on a QWERTY keyboard. This suggests the model was also picking up motor signals in the brain and using them to predict what a participant typed.
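To make the QWERTY-proximity observation concrete, here is a small sketch of the kind of error analysis it implies: measuring how far a mispredicted letter sits from the true key on a simplified keyboard grid. The coordinate scheme below is an assumption for illustration, not the metric the researchers used.

```python
# Illustrative error analysis: is a mispredicted letter physically close to the
# true key on a QWERTY layout? The staggered-grid coordinates are a
# simplification assumed here, not taken from the study.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

KEY_POS = {
    key: (row_idx, col_idx + 0.5 * row_idx)   # stagger each row slightly
    for row_idx, row in enumerate(QWERTY_ROWS)
    for col_idx, key in enumerate(row)
}

def key_distance(a: str, b: str) -> float:
    """Euclidean distance between two keys on the simplified QWERTY grid."""
    (r1, c1), (r2, c2) = KEY_POS[a.lower()], KEY_POS[b.lower()]
    return ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5

# A decoder that confuses 'r' with its neighbour 't' is plausibly reading out
# motor plans for the hand, whereas confusing 'r' with 'p' is a larger miss.
print(key_distance("r", "t"))   # small distance, neighbouring keys
print(key_distance("r", "p"))   # large distance, far apart on the keyboard
```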
In the second study, researchers examined how the brain forms language while typing. They collected 1,000 brain activity snapshots per second. Next, they used these snapshots to map how the brain built a sentence. They found that the brain keeps words and letters separate using a dynamic neural code that shifts how and where information is stored.
This code prevents overlap and helps maintain sentence structure while linking letters, syllables, and words smoothly. Think of it like moving information around in the brain so that each letter or word has its own space, even if they are processed at the same time.
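A standard way neuroscientists test whether a code is dynamic in this sense is temporal generalization: train a decoder on brain snapshots from one moment and test it at every other moment; a code that shifts over time generalizes poorly away from the moment it was trained on. The sketch below illustrates the idea on synthetic data and is not the study’s analysis code.

```python
# Hypothetical illustration of temporal generalization on synthetic "MEG" data.
# The informative pattern drifts to a different sensor at each time step,
# mimicking a dynamic code, so decoders trained at one time point should
# transfer poorly to other time points.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 50, 20
labels = rng.integers(0, 2, n_trials)                 # e.g. two different letters

data = rng.normal(size=(n_trials, n_sensors, n_times))
for t in range(n_times):
    data[labels == 1, t, t] += 2.0                    # signal moves to a new sensor each step

# Train a classifier at each time point, test it at every time point.
generalization = np.zeros((n_times, n_times))
for train_t in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(data[:, :, train_t], labels)
    for test_t in range(n_times):
        generalization[train_t, test_t] = clf.score(data[:, :, test_t], labels)

# High accuracy on the diagonal but near-chance accuracy off it indicates
# a dynamic code: the information is there throughout, but keeps moving.
print(np.round(np.diag(generalization).mean(), 2))    # decoding at the trained time
print(np.round(generalization[0, 1:].mean(), 2))      # generalization away from t=0
```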
“This approach confirms the hierarchical predictions of linguistic theories: the neural activity preceding the production of each word is marked by the sequential rise and fall of context-, word-, syllable-, and letter-level representations,” the study authors note.
This way, the brain can keep track of each letter without mixing them up, ensuring smooth and accurate typing or speech. The researchers compare this to a technique in artificial intelligence called positional embedding, which helps AI models understand the order of words.
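For readers unfamiliar with the comparison, positional embedding assigns each position in a sequence its own vector, so identical letters or words occurring at different positions stay distinguishable. Below is a short sketch of the standard sinusoidal scheme from the Transformer architecture; the sequence length and dimensionality are illustrative choices.

```python
# Sinusoidal positional embedding, the standard Transformer scheme: each
# position in a sequence gets a distinct vector, so repeated symbols at
# different positions are kept apart - loosely analogous to the shifting
# neural code described above.
import numpy as np

def sinusoidal_positional_embedding(seq_len: int, dim: int) -> np.ndarray:
    """Return a (seq_len, dim) matrix of sinusoidal position vectors."""
    positions = np.arange(seq_len)[:, None]                         # (seq_len, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = positions * freqs                                      # (seq_len, dim/2)
    emb = np.zeros((seq_len, dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

# The letter 't' appears twice in "letter"; adding a position vector to each
# occurrence gives the model two distinct representations of the same symbol.
emb = sinusoidal_positional_embedding(seq_len=6, dim=8)
print(emb[2])  # vector for the first 't'
print(emb[3])  # vector for the second 't'
```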
“Overall, these findings provide a precise computational breakdown of the neural dynamics that coordinate the production of language in the human brain,” they added.
Brain2Qwerty has some limitations
While Meta’s AI model can decode typed sentences from brain activity with impressive accuracy, a lot of work remains before it becomes practical. For now, the model only works in a controlled lab environment and requires a cumbersome MEG setup.
Turning it into a practical noninvasive BCI that could be used for healthcare and other purposes seems quite challenging at this stage. Moreover, the current studies involved only 35 subjects.
It would be interesting to see if the Meta team could overcome these challenges before its rivals come up with a better thought-to-text AI system.
Note: Both studies are yet to be peer-reviewed.