The revolutionary device could be used for a wide range of applications, but the scientists’ main goal is to implant it in stroke victims and in those who have suffered severe brain trauma and lost the ability to speak. The system, which is still in development, could help restore not only speech but also the user’s tone of voice, which helps to convey emotion. Dr Edward Chang, a professor of neurological surgery and member of the University of California San Francisco (UCSF) Weill Institute for Neuroscience, said: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity.
“This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”
As part of the research, UCSF scientists recruited five volunteers whose speech was unimpaired and placed electrodes in their brains.
The participants were then asked to read thousands of sentences aloud while the electrodes recorded activity in the brain regions known to be involved in language production.
The scientists then reverse-engineered the physical movements that produce those sounds, such as pursing the lips, tightening the vocal cords and moving the tongue.
This detailed mapping of ‘sound to anatomy’ allowed the scientists to create a virtual vocal tract that could be controlled by the participants’ brain activity.
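The pipeline described above is, in essence, a two-stage decoder: one stage estimates vocal-tract movements from recorded brain activity, and a second stage turns those movements into sound. The following is a minimal, hypothetical sketch of that idea only — the names, dimensions and simple linear maps are illustrative stand-ins, not the recurrent neural networks or parameters used in the actual study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the study).
N_NEURAL = 16    # recorded electrode channels
N_ARTIC = 6      # articulator parameters, e.g. lips, tongue, larynx
N_ACOUSTIC = 8   # acoustic features driving a speech synthesiser

# Stand-in "trained" weights for the two decoding stages.
W_neural_to_artic = rng.normal(size=(N_ARTIC, N_NEURAL))
W_artic_to_sound = rng.normal(size=(N_ACOUSTIC, N_ARTIC))

def decode_articulation(neural_frame):
    """Stage 1: estimate vocal-tract movements from one frame of brain activity."""
    return W_neural_to_artic @ neural_frame

def synthesise_acoustics(articulation):
    """Stage 2: map the estimated movements to acoustic features."""
    return W_artic_to_sound @ articulation

def decode_frame(neural_frame):
    """Full pipeline: brain activity -> articulation -> sound features."""
    return synthesise_acoustics(decode_articulation(neural_frame))

neural_frame = rng.normal(size=N_NEURAL)
acoustic = decode_frame(neural_frame)
print(acoustic.shape)  # → (8,)
```

The key design point the intermediate articulatory stage illustrates is that the system does not map brain signals to sound directly; it first recovers the movements of a virtual vocal tract, which is what lets it generalise across sounds the way the article describes.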
Researchers found the interface was able to recreate 69 percent of the words originally read by the participants, and that 43 percent of sentences were recreated “perfectly”, according to the study published in the journal Nature.
Josh Chartier, a bioengineering graduate student in the Chang lab, said: “We still have a ways to go to perfectly mimic spoken language.
“We’re quite good at synthesising slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy.
“Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.
“People who can’t move their arms and legs have learned to control robotic limbs with their brains.
We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”
Gopala Anumanchipalli, a speech scientist, added: “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”