Neuroscientists at the University of California, San Francisco (UCSF) have developed a brain-machine interface to produce understandable synthetic speech by analysing brain signals.

Funded by the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, the research is believed to have the potential to help certain patients who have lost the ability to speak because of neurological damage.


The brain-machine interface is powered by a computer that is programmed using brain signals recorded from epilepsy patients who have normal speaking abilities.

Data captured from the brain recordings of these patients was used to synthesise speech involving entire sentences, rather than the letter-by-letter approach of existing assistive devices.

UCSF speech scientist Gopala Anumanchipalli said: “Current technology limits users to, at best, ten words per minute, while natural human speech occurs at roughly 150 words/minute.

“This discrepancy is what motivated us to test whether we could record speech directly from the human brain.”


During the study, patients who had temporary implants in their brains to map their disease activity were asked to read several sentences aloud. The team recorded their brain activity during the speech process.

The researchers then used the recordings and linguistic principles to reverse engineer the vocal tract movements that are needed to produce those sounds.

Based on the mapping of sound to anatomy, the researchers devised a virtual vocal tract, an anatomically detailed computer simulation that includes the lips, jaw, tongue and larynx. This vocal tract is controlled by each volunteer’s brain activity via two ‘neural network’ machine learning algorithms: one that translates brain signals into vocal tract movements, and a second that turns those movements into synthetic speech.
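The sketch below illustrates what such a two-stage decoding pipeline could look like; it is not the UCSF team’s actual model. The module names, the use of LSTMs, and the 256-channel, 33-feature and 80-band dimensions are all illustrative assumptions.

```python
# Hypothetical two-stage decoder: brain activity -> vocal tract kinematics -> acoustic features.
# Architecture, layer sizes and dimensions are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    """Stage 1: maps recorded neural activity to vocal tract movement features."""
    def __init__(self, n_channels=256, n_articulatory=33, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, neural):                 # neural: (batch, time, n_channels)
        h, _ = self.rnn(neural)
        return self.out(h)                     # (batch, time, n_articulatory)

class AcousticDecoder(nn.Module):
    """Stage 2: maps vocal tract movements to acoustic features (e.g. spectrogram frames)."""
    def __init__(self, n_articulatory=33, n_acoustic=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulatory, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, articulatory):           # articulatory: (batch, time, n_articulatory)
        h, _ = self.rnn(articulatory)
        return self.out(h)                     # (batch, time, n_acoustic)

# Usage: chain the two stages; a vocoder would then convert the acoustic frames into audio.
neural_recording = torch.randn(1, 500, 256)   # 500 time steps of 256-channel brain data
kinematics = ArticulatoryDecoder()(neural_recording)
acoustics = AcousticDecoder()(kinematics)
print(acoustics.shape)                         # torch.Size([1, 500, 80])
```

Splitting the problem into an articulatory stage and an acoustic stage mirrors the article’s description of first recovering vocal tract movements and only then generating sound.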

Volunteers who listened to the computer-generated sentences were able to correctly identify them in the majority of cases. Moreover, the system was able to generate understandable synthetic versions of sentences that the participants merely mimed.

In the future, the programme is expected to help restore communication to people with severe speech disabilities.

The researchers intend to conduct a clinical trial in paralysed, speech-impaired patients to determine the best approach for collecting their brain signal data, which could then be applied to the previously trained computer algorithm.

In January 2019, scientists from the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University also announced a brain-computer interface that directly translates thoughts into intelligible speech.
