In a breakthrough for brain-computer interface technology, researchers at Stanford University and Emory BrainGate have decoded a person’s “inner speech,” offering new hope for restoring communication to individuals with severe paralysis.
The achievement, detailed in a recent study, involved decoding the silent monologue in a person's mind with up to 74 percent accuracy. A team jointly based at Emory BrainGate and Stanford University led the research, which opens new possibilities for individuals who cannot speak because of severe paralysis or other neurological conditions.
“This is the first time we’ve managed to understand what brain activity looks like when you just think about speaking,” said lead author Erin Kunz of Stanford University. “For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally.”
Companies such as Neuralink, Synchron, INBRAIN Neuroelectronics, and Cognixion are among those driving innovation in the BCI field, applying the technology to video games, robotic limb control, music composition, and communication without speaking.
Thanks to these advances, BCIs can now also detect the brain signals produced when a person attempts to speak, even if no intelligible words come out, by reading the neural patterns that control the speech muscles.
For people with limited muscle control due to various disabilities, this research could provide a way to bypass the physical act of speaking altogether. The team hypothesized that directly decoding inner speech would not only be possible but would likely be more efficient than having users attempt to speak aloud.
“If you just have to think about speech instead of actually trying to speak, it’s potentially easier and faster for people,” explained co-first author Benyamin Meschede-Krasa of Stanford University.
The study examined four participants with severe paralysis caused by either amyotrophic lateral sclerosis (ALS) or a brainstem stroke. Microelectrodes were implanted in the motor cortex, the brain region that controls the movements of speech. Participants were then asked either to attempt to speak or to imagine speaking a series of words.
The results showed that attempted and imagined speech activated overlapping brain regions and produced similar neural patterns, though inner speech generated weaker signals. With the help of trained AI models, the system decoded imagined sentences with notable success, reaching up to 74 percent accuracy on sentences drawn from a 125,000-word vocabulary.
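The study's own code is not described here, but the general shape of such a decoding pipeline can be sketched: neural features recorded from the motor cortex are mapped to phoneme probabilities, and a vocabulary constrains which words those phonemes can form. A minimal illustration in Python, where the phoneme set, the two-word vocabulary, and the random weights are all invented stand-ins rather than anything from the study:

```python
# Toy sketch of an inner-speech decoding pipeline: neural features ->
# phoneme probabilities -> vocabulary-constrained word choice.
# All data and weights are random stand-ins, not the study's model.
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["HH", "EH", "L", "OW", "W", "ER", "D", "_"]  # "_" = silence
VOCAB = {"hello": ["HH", "EH", "L", "OW"],               # tiny stand-in for
         "world": ["W", "ER", "L", "D"]}                 # a 125,000-word vocabulary

def phoneme_probs(features, weights):
    """Map one time step of neural features to phoneme probabilities."""
    logits = features @ weights          # linear decoder standing in for a trained network
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # softmax

def decode_word(feature_frames, weights):
    """Pick the vocabulary word whose phoneme sequence best fits the frames."""
    frame_probs = [phoneme_probs(f, weights) for f in feature_frames]
    best_word, best_score = None, -np.inf
    for word, phones in VOCAB.items():
        # Score = summed log-probability of the word's phonemes, aligned
        # one-to-one with frames for simplicity.
        score = sum(np.log(frame_probs[t][PHONEMES.index(p)] + 1e-9)
                    for t, p in enumerate(phones) if t < len(feature_frames))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

weights = rng.normal(size=(128, len(PHONEMES)))  # untrained stand-in weights
frames = rng.normal(size=(4, 128))               # 4 time steps of 128-channel features
print(decode_word(frames, weights))
```

A real system would use a trained neural network and proper sequence alignment against the full 125,000-word vocabulary, typically with a language model, rather than the one-to-one alignment shown here.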
One surprising result was that the BCIs sometimes picked up inner speech that participants had not been instructed to produce, such as silently counting objects on a screen, demonstrating the system's sensitivity. The researchers also found that the system could distinguish words a participant was trying to vocalize from words merely being thought, allowing unintended inner speech to be filtered out.
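The finding that imagined speech produces weaker signals of a similar shape to attempted speech hints at one plausible way such a filter might work. A purely hypothetical sketch, with the threshold and simulated data invented for illustration:

```python
# Hypothetical attempted-vs-imagined filter exploiting the observation
# that inner speech produces weaker signals of a similar shape.
import numpy as np

rng = np.random.default_rng(1)

def is_attempted(features, magnitude_threshold=10.0):
    """Classify a window of neural features as attempted (True) or merely
    imagined (False) speech from overall signal strength. The threshold is
    a made-up value; a real system would learn this boundary from data."""
    return np.linalg.norm(features, axis=-1).mean() > magnitude_threshold

attempted = rng.normal(scale=2.0, size=(50, 128))  # stronger simulated signal
imagined = rng.normal(scale=0.5, size=(50, 128))   # weaker simulated signal
print(is_attempted(attempted), is_attempted(imagined))  # True False
```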
Privacy and control were built into the study design through a password mechanism: decoding began only when participants silently thought of the phrase “Chitty Chitty Bang Bang,” which the system recognized with over 98 percent accuracy.
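In software terms, such a mechanism is a gate in front of the decoder: no text is emitted until a dedicated detector recognizes the unlock phrase with sufficient confidence. A minimal sketch of that control flow, with the detector and decoder stubbed out as placeholders rather than models from the study:

```python
# Sketch of a password-gated decoding loop: the decoder stays off until
# the imagined unlock phrase is detected with high confidence.
UNLOCK_PHRASE = "chitty chitty bang bang"

def detect_phrase(window):
    """Stand-in for a trained keyword detector; returns (phrase, confidence)."""
    return window.get("phrase", ""), window.get("confidence", 0.0)

def run_session(feature_windows, decode, confidence_floor=0.98):
    unlocked = False
    for window in feature_windows:
        if not unlocked:
            phrase, conf = detect_phrase(window)
            # Decoding starts only once the password is recognized confidently,
            # mirroring the study's reported >98 percent recognition rate.
            unlocked = (phrase == UNLOCK_PHRASE and conf >= confidence_floor)
            continue
        print(decode(window))

# Toy stream: idle thought, then the password, then speech to decode.
stream = [{"phrase": "", "confidence": 0.0},
          {"phrase": UNLOCK_PHRASE, "confidence": 0.99},
          {"decoded": "hello"}, {"decoded": "world"}]
run_session(stream, decode=lambda w: w["decoded"])
```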
“The future of BCIs is bright,” said senior author Frank Willett. “This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech.”
The research was published in the journal Cell, a Cell Press title, on August 14, 2025.
Chrissy Newton is a PR professional and founder of VOCAB Communications. She currently appears on The Discovery Channel and Max and hosts the Rebelliously Curious podcast, which can be found on YouTube and on all audio podcast streaming platforms. Follow her on X: @ChrissyNewton, Instagram: @BeingChrissyNewton, and chrissynewton.com.
