Facebook-Funded Study Translates Brain Activity Into Text

by martyb from SoylentNews on (#4MAYQ)

takyon writes:

Team IDs Spoken Words and Phrases in Real Time from Brain's Speech Signals

UC San Francisco scientists recently showed that brain activity recorded as research participants spoke could be used to create remarkably realistic synthetic versions of that speech, raising hope that one day such brain recordings could be used to restore voices to people who have lost the ability to speak. However, it took the researchers weeks or months to translate brain activity into speech, a far cry from the instant results such a technology would need to be clinically useful. Now, in a complementary new study, again working with volunteer subjects, the scientists have for the first time decoded spoken words and phrases in real time from the brain signals that control speech, aided by a novel approach that identifies the context in which participants were speaking.

[...] In the new study, published July 30 in Nature Communications [DOI: 10.1038/s41467-019-10994-4], researchers from the Chang lab, led by postdoctoral researcher David Moses, PhD, worked with three research volunteers (epilepsy patients with electrodes temporarily implanted for seizure monitoring) to develop a way to instantly identify the volunteers' spoken responses to a set of standard questions based solely on their brain activity, a first for the field.

To achieve this result, Moses and colleagues developed a set of machine learning algorithms equipped with refined phonological speech models, which were capable of learning to decode specific speech sounds from participants' brain activity. Brain data was recorded while volunteers listened to a set of nine simple questions (e.g. "How is your room currently?", "From 0 to 10, how comfortable are you?", or "When do you want me to check back on you?") and responded out loud with one of 24 answer choices. After some training, the machine learning algorithms learned to detect when participants were hearing a new question or beginning to respond, and to identify which of the two dozen standard responses the participant was giving with up to 61 percent accuracy as soon as they had finished speaking.
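
To make the role of context concrete, here is a minimal, hypothetical Python sketch of that two-stage idea: a question classifier infers which question was just heard, its output becomes a prior over the 24 answer choices, and that prior reweights the answer likelihoods decoded from brain activity. Every name and number below is invented for illustration; the study itself decoded phone-level speech features from ECoG recordings rather than the random stand-in "features" used here.

import numpy as np

# Hypothetical sketch of context-integrated answer decoding. The actual
# study used phone-level Viterbi decoding over ECoG signals; here we fake
# the classifiers to show only how a context prior sharpens the result.

rng = np.random.default_rng(0)

N_QUESTIONS = 9   # the nine standard questions
N_ANSWERS = 24    # the two dozen fixed answer choices

# P(answer | question): which answers plausibly follow each question.
# In the study this came from the task design; here it is random but sparse.
context_prior = rng.dirichlet(np.full(N_ANSWERS, 0.1), size=N_QUESTIONS)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def question_posterior(features_q):
    """Stand-in for the question classifier: P(question | brain activity)."""
    return softmax(features_q)

def answer_likelihood(features_a):
    """Stand-in for the speech-sound decoder: P(brain activity | answer)."""
    return softmax(features_a)

def decode_answer(features_q, features_a):
    # Marginalize the context prior over the inferred question, then
    # combine it with the answer likelihood (Bayes' rule, unnormalized).
    p_question = question_posterior(features_q)          # shape (9,)
    p_answer_in_context = p_question @ context_prior     # shape (24,)
    posterior = answer_likelihood(features_a) * p_answer_in_context
    return int(np.argmax(posterior))

# One simulated trial: random "features" for the heard question and spoken answer.
predicted = decode_answer(rng.normal(size=N_QUESTIONS), rng.normal(size=N_ANSWERS))
print(f"Decoded answer choice: {predicted}")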

[...] Moses's new study was funded through a multi-institution sponsored academic research agreement with Facebook Reality Labs (FRL), a research division within Facebook focused on developing augmented- and virtual-reality technologies. As FRL has described it, the goal of its collaboration with the Chang lab, called Project Steno, is to assess the feasibility of developing a non-invasive, wearable BCI device that could allow people to type by imagining themselves talking.

See also: Facebook gets closer to letting you type with your mind
Brain-computer interfaces are developing faster than the policy debate around them

Previously: Brain Implant Translates Thoughts Into Synthesized Speech

Original Submission

