Headset device transcribes user’s internal thoughts

Charlotte Edwards 9 April 2018 (Last Updated April 9th, 2018 17:14)

A wearable device and associated computing system created at the Massachusetts Institute of Technology (MIT) can transcribe words that the user thinks but does not say out loud.

The wearable technology has potential medical applications. Credit: Lorrie Lejeune/MIT.


Researchers believe the device could assist people with verbal communication disabilities. It also has many other potential applications, such as communication during silent military operations or in high-noise environments like power stations.

Thad Starner, a professor in Georgia Tech’s College of Computing, said: “Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesiser that would speak the words?”

The computer interface device consists of electrodes that are placed on the face and jaw to pick up otherwise undetectable neuromuscular signals that are triggered by internal verbalisations. The signals are then sent to a machine-learning system that has been trained to correlate specific signals with specific words.
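The article does not detail the machine-learning model itself, but the pipeline it describes can be sketched in a few lines: collect labelled signal samples for each word, learn a per-word signature, then assign new signals to the nearest match. The words, feature count, and nearest-centroid rule below are all invented for illustration, and the real system is far more sophisticated.

```python
import math
import random

# Hypothetical sketch, not MIT's actual model: each sub-vocalised word
# yields a feature vector read from the facial electrodes, and a
# classifier trained on labelled examples maps new signals back to words.

random.seed(0)
WORDS = ["yes", "no", "stop"]
N_FEATURES = 7  # pretend one feature per electrode channel

# Synthetic per-word signal patterns plus noisy training samples.
patterns = {w: [random.gauss(0, 1) for _ in range(N_FEATURES)] for w in WORDS}

def noisy(vec, scale=0.1):
    """Simulate a fresh recording of the same internal verbalisation."""
    return [v + random.gauss(0, scale) for v in vec]

train = [(noisy(patterns[w]), w) for w in WORDS for _ in range(20)]

# "Training": average the feature vectors recorded for each word.
def centroid(word):
    samples = [x for x, label in train if label == word]
    return [sum(col) / len(samples) for col in zip(*samples)]

centroids = {w: centroid(w) for w in WORDS}

def transcribe(signal):
    """Return the word whose learned centroid lies closest to the signal."""
    return min(centroids, key=lambda w: math.dist(signal, centroids[w]))

print(transcribe(noisy(patterns["no"])))  # recognises the silent word
```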

The device also includes a pair of bone-conduction headphones, which transmit vibrations through bones of the face to the inner ear. The headphones do not obstruct the ear canal so the system can still convey information to the user without interrupting conversation or interfering with auditory experiences.

Arnav Kapur, a graduate student at the MIT Media Lab, led the development of the new system. He said: “The motivation for this was to build an IA device — an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

The concept of internal verbalisations having physical signals has been explored since the 19th century. However, sub-vocalisation as a computer interface is a largely unexplored area. The MIT researchers first had to determine which facial areas produce the most reliable neuromuscular signals. This involved experiments in which subjects sub-vocalised the same series of words four times, each time with 16 electrodes placed at different facial locations.
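The selection step described above can be thought of as ranking candidate electrode sites by how consistently they read out the same word across repetitions. The tiny sketch below uses invented electrode names and readings, and variance across repetitions as the reliability score, which is an assumption rather than the researchers’ actual method.

```python
from statistics import pvariance

# Hypothetical data: each entry holds one electrode site's readings
# across four repetitions of the same sub-vocalised word. Steady
# readings suggest a reliable site; erratic ones suggest noise.
recordings = {
    "cheek_1":  [0.81, 0.79, 0.80, 0.82],   # steady -> reliable
    "jaw_left": [0.40, 0.41, 0.39, 0.40],   # steady -> reliable
    "temple":   [0.10, 0.95, -0.30, 0.55],  # erratic -> unreliable
    "chin":     [0.02, 0.88, 0.47, -0.60],  # erratic -> unreliable
}

# Rank sites by variance across repetitions (lower = more consistent).
ranked = sorted(recordings, key=lambda ch: pvariance(recordings[ch]))
print(ranked[:2])  # → ['jaw_left', 'cheek_1']
```

In the actual study 16 candidate sites were tested; only four are shown here to keep the example short.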

Once the electrode locations had been selected, data was collected on a few computational tasks with limited vocabularies. One such task was arithmetic, in which the user would silently pose multiplication problems and have the device answer them.

In another experiment using the prototype, 10 subjects each spent 15 minutes customising the mathematics application to their own neurophysiology, then a further 90 minutes using it to complete maths problems. The study achieved 92% transcription accuracy. Kapur believes the system’s performance will improve as more training data is collected.
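The 92% figure is a simple ratio: transcriptions that matched the sub-vocalised prompt, divided by the total number of prompts. A toy illustration with invented data:

```python
# Invented example data, not from the study: four silently posed
# multiplication prompts and what the system transcribed for each.
prompts        = ["3 x 4", "7 x 8", "2 x 9", "5 x 5"]
transcriptions = ["3 x 4", "7 x 8", "2 x 8", "5 x 5"]

# Accuracy = exact matches / total prompts.
correct = sum(p == t for p, t in zip(prompts, transcriptions))
accuracy = correct / len(prompts)
print(f"{accuracy:.0%}")  # → 75% on this toy set; the study reported 92%
```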

A YouTube video showcasing the everyday applications of the device can be found here: https://www.youtube.com/watch?v=RuUSc53Xpeg#action=share