Thinking Freely with Nita Farahany

Stanford Scientists Decode Inner Speech with Word Error Rates of 26-54%

A Breakthrough for Paralysis and a Challenge for Mental Privacy

Nita Farahany

Aug 14, 2025

You’re paralyzed and unable to speak, but you need to tell your caregiver that you’re in pain. With current brain-computer interfaces (BCIs), you have to attempt to speak, forcing out words you will never actually vocalize. In the process, you become exhausted by the physical effort of commanding muscles that no longer respond. It’s kind of like trying to sprint on broken legs to get where you’re going.

But what if you could just think the words, and they appear on a screen, instead?

That’s what researchers from Stanford and the BrainGate consortium just demonstrated in a study published in Cell. Working with four participants with paralysis, the team showed that “inner speech” can be decoded directly from brain signals, with word error rates between 26% and 54% on large-vocabulary tasks.

While the error rate was higher than with attempted speech, the participants overwhelmingly preferred thinking the words because it took less physical effort, caused less fatigue, and likely felt more natural.

The Stanford team, led by Erin Kunz and colleagues, worked with four BrainGate participants with severe speech impairments from ALS or stroke, each with electrode arrays implanted in their motor cortex. When the participants deliberately thought specific words, the AI system could decode individual words with about 50% accuracy, improving to 74-86% for simple word sets.

This is NOT mind reading in the science fiction sense. The system could only detect deliberately formed words in the “inner voice” most people experience. It could not detect thought in the broader sense, like the emotions, abstractions, or subconscious processes we experience. Think of the difference between forming a sentence in your mind when someone asks you to do so, versus letting your mind wander through fragments of thoughts, images, and feelings. It’s the former that the scientists called “inner speech.”

In some of the cognitive tasks in the study, the system detected participants’ mental strategies even when they weren’t instructed to form inner speech. When the participants counted colored shapes on a screen, the decoder picked up increasing number sequences, suggesting they were silently counting. And when participant “T12” memorized arrow patterns, the decoder detected the sequence positions during the delay period, likely because she was mentally rehearsing “up, right, up” as a memory aid. The researchers called this “uninstructed inner speech”: verbal thought that emerged naturally, without anyone asking for it.

The Upside

For people with severe communication disorders, and particularly people with complete locked-in syndrome, this breakthrough points to a future of restored self-determination through more natural communication strategies. Attempted speech is exhausting: trying to activate muscles and control breathing can slow communication to a crawl. Inner speech isn’t limited by those physical constraints.

When combined with advances in language models (similar to autocorrect), the system achieved functional communication rates. Participant T15 had a 26% word error rate with a 125,000-word vocabulary. To put that in perspective, roughly one in four words comes out wrong, which might be like the frustrating experience of texting with aggressive autocorrect, except that you can’t override it. In other words, it’s imperfect, and requires patience from both the user and listener, but it’s usable for daily communication when the alternative is no communication at all.
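If you’re wondering where that “one in four” comes from: word error rate is a standard speech-recognition metric that counts the substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the number of words in the intended sentence. Here’s a minimal sketch of the arithmetic; the example sentences are hypothetical, not taken from the study.

```python
# Minimal word error rate (WER) sketch: edit distance counted over words,
# divided by the length of the intended (reference) sentence.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein dynamic program: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution (or match)
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical decoding with 2 wrong words out of 8 -> WER = 0.25,
# roughly the "one in four words comes out wrong" experience described above.
print(word_error_rate(
    "tell the nurse my left arm is hurting",
    "tell the nurse my best are is hurting"))
```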

Taking Mental Privacy Seriously

I was particularly impressed that the Stanford team engaged so deeply with the mental privacy concerns raised by their research. (Shout out to the Jerry Tang et al. paper out of Alex Huth’s lab in 2023, which was the first I’m aware of to do so.)

The researchers found that inner speech and attempted speech produced nearly identical brain patterns, differing in the intensity of the signal rather than in the pattern itself. That means the boundary between “inner speech” and “attempted speech” may be a difference in signal degree, not kind.
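To make “degree, not kind” concrete, here’s a toy illustration of my own, not the team’s actual decoder, with made-up numbers: if each word evokes a characteristic pattern of neural activity, inner speech would look like a scaled-down copy of the attempted-speech pattern. A decoder trained on the pattern’s shape responds to both; only the overall amplitude separates them.

```python
# Toy illustration (not the study's decoder): inner speech modeled as the
# same neural activity pattern as attempted speech, just at lower amplitude.
import numpy as np

rng = np.random.default_rng(0)
attempted = rng.normal(size=128)                            # hypothetical activity pattern for one word
inner = 0.4 * attempted + rng.normal(scale=0.1, size=128)   # same shape, weaker signal

# Cosine similarity comes out near 1: the two patterns have nearly the same shape,
# so a pattern-matching decoder can't easily tell them apart by shape alone.
cosine = attempted @ inner / (np.linalg.norm(attempted) * np.linalg.norm(inner))
ratio = np.linalg.norm(inner) / np.linalg.norm(attempted)
print(f"pattern similarity: {cosine:.2f}, amplitude ratio: {ratio:.2f}")
```

That overlap is exactly what raises the privacy question: a system built to decode what you try to say may also pick up what you merely think.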
