Neuroengineers from the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University in the US have developed a brain-computer interface that directly translates thoughts into intelligible, recognisable speech.

The new system offers hope for people with limited or no ability to speak, including patients with amyotrophic lateral sclerosis (ALS) or those recovering from a stroke. Around one in three people who have had a stroke experience some form of speech impairment.


Combining speech synthesis and artificial intelligence (AI), the interface monitors brain activity and reconstructs, as clear speech, the words a person hears.


The team believes the new system could pave the way for computers that communicate directly with the brain.

Zuckerman Mind Brain Behavior Institute principal investigator Dr Nima Mesgarani said: “Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating.

“With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”


To create the system, the team used a computer algorithm called a vocoder, which synthesises speech after being trained on recordings of people talking.

The same technology underpins voice assistants such as Amazon Echo and Apple’s Siri.
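The study’s vocoder is a learned speech synthesiser, but the underlying idea of rebuilding audible speech from a compact spectral representation can be sketched with the classic Griffin-Lim method. The snippet below is a hypothetical Python illustration using the librosa library, not the team’s code; the file names and parameter values are assumptions for demonstration only.

```python
# Illustrative sketch only: rebuild a waveform from a magnitude spectrogram,
# the same analysis/synthesis direction a vocoder works in.
import numpy as np
import librosa
import soundfile as sf

# Load a short speech recording (hypothetical file name).
audio, sr = librosa.load("speech_sample.wav", sr=16000)

# Analysis step: reduce the waveform to a magnitude spectrogram,
# a compact spectral representation of the speech.
spectrogram = np.abs(librosa.stft(audio, n_fft=512, hop_length=128))

# Synthesis step: estimate the missing phase and reconstruct an audible
# waveform from the spectrogram alone.
reconstructed = librosa.griffinlim(spectrogram, n_iter=60,
                                   hop_length=128, n_fft=512)

sf.write("reconstructed.wav", reconstructed, sr)
```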

The researchers partnered with Northwell Health Physician Partners Neuroscience Institute neurosurgeon Dr Ashesh Dinesh Mehta to train the vocoder in interpreting brain activity.

Epilepsy patients treated by Mehta were asked to listen to spoken sentences and sequences of digits while their brain signals were recorded; those signals were then run through the vocoder.

The sound generated by the vocoder in response to these signals was analysed and cleaned up by AI-based neural networks, which mimic the structure of biological neurons.
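The study’s decoding pipeline is far more sophisticated, but the basic idea of a neural network learning to map recorded brain activity to speech features can be sketched as follows. This is a hypothetical PyTorch illustration, not the researchers’ architecture; the electrode count, window length, spectrogram size and training data are all invented stand-ins.

```python
# Minimal sketch: a small feed-forward network that maps a window of
# recorded neural activity to one spectrogram frame, which a vocoder
# could then render as sound. All shapes and data are synthetic.
import torch
import torch.nn as nn

N_ELECTRODES, WINDOW, N_SPECTRAL_BINS = 64, 10, 128  # assumed sizes

class BrainToSpectrogram(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ELECTRODES * WINDOW, 512),
            nn.ReLU(),
            nn.Linear(512, N_SPECTRAL_BINS),
        )

    def forward(self, x):
        # x: (batch, electrodes, time window) of neural activity
        return self.net(x.flatten(start_dim=1))

model = BrainToSpectrogram()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for paired training data: recorded brain signals
# and the spectrogram of the speech the patients heard.
brain = torch.randn(256, N_ELECTRODES, WINDOW)
target = torch.randn(256, N_SPECTRAL_BINS)

for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(brain), target)
    loss.backward()
    optimiser.step()
```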

The result was a robotic-sounding voice. Listeners were able to understand and repeat the sounds in about 75% of cases, reportedly a significant improvement on previous attempts.

While the system requires further training and testing, the researchers hope it could eventually be used in an implant that translates the user’s thoughts directly into words.

Mesgarani added: “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

Additional reporting by Charlotte Edwards. 
