Here's How Facebook's Brain-Computer Interface Development Is Progressing
In 2017, Facebook announced that it had assigned at least 60 engineers to an effort to build a brain-computer interface (BCI). The goal: allow mobile device and computer users to communicate at a speed of at least 100 words per minute, far faster than anyone can type on a phone.
Last July, Facebook-supported researchers at the University of California, San Francisco (UCSF) published the results of a study demonstrating that Facebook's prototype brain-computer interface could be used to decode speech in real time, at least speech in the form of a limited range of answers to questions.
That same month, Facebook published a blog post explaining a bit about the technology developed so far. The post described a device that shines near-infrared light into the skull and uses changes in the way brain tissue absorbs that light to measure the blood oxygenation of groups of brain cells.
Said the blog post:
Think of a pulse oximeter, the clip-like sensor with a glowing red light you've probably had attached to your index finger at the doctor's office. Just as it's able to measure the oxygen saturation level of your blood through your finger, we can also use near-infrared light to measure blood oxygenation in the brain from outside of the body in a safe, non-invasive way. And while measuring oxygenation may never allow us to decode imagined sentences, being able to recognize even a handful of imagined commands, like "home," "select," and "delete," would provide entirely new ways of interacting with today's VR systems and tomorrow's AR glasses.
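The technique the post describes is, in essence, functional near-infrared spectroscopy (fNIRS). As a rough sketch of how such measurements work, the standard modified Beer-Lambert law converts dimming of detected light at two wavelengths into changes in oxy- and deoxyhemoglobin concentration. The coefficients and geometry below are approximate textbook values, not the parameters of Facebook's device:

```python
import numpy as np

# Illustrative sketch of the modified Beer-Lambert law used in fNIRS.
# Extinction coefficients and geometry are approximate placeholder
# values, not the parameters of Facebook's hardware.

# Molar extinction coefficients [1/(mM*cm)] for oxy- (HbO) and
# deoxyhemoglobin (HbR) at two near-infrared wavelengths (approximate).
E = np.array([
    [0.69, 1.55],   # 760 nm: [HbO, HbR]
    [1.10, 0.78],   # 850 nm: [HbO, HbR]
])

path_length_cm = 3.0   # source-detector separation on the scalp
dpf = 6.0              # differential pathlength factor (scattering)

def hemoglobin_changes(intensity_baseline, intensity_now):
    """Convert detected intensities at the two wavelengths into
    changes in HbO/HbR concentration (mM)."""
    delta_od = -np.log(np.asarray(intensity_now) /
                       np.asarray(intensity_baseline))
    # Solve delta_od = E @ delta_c * (path_length * dpf) for delta_c.
    return np.linalg.solve(E * path_length_cm * dpf, delta_od)

# Example: light at 850 nm (HbO-dominated) dims more than at 760 nm
# as oxygenated blood flows into an active region.
d_hbo, d_hbr = hemoglobin_changes([1.00, 1.00], [0.995, 0.98])
print(f"dHbO = {d_hbo:+.5f} mM, dHbR = {d_hbr:+.5f} mM")
```

Run on the toy numbers above, the solver reports a rise in oxyhemoglobin and a small drop in deoxyhemoglobin, the classic signature of neural activation.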
The company hadn't said much about the project since, until this month, when Mark Chevillet, research director for Facebook Reality Labs and the BCI project leader, gave an update at ApplySci's Wearable Tech, Digital Health, and Neurotech Silicon Valley conference.
For starters, the team has been completing its move to a new hardware design. It's not, by any means, the final version, but the team says it is vastly more usable than the initial prototype.
The hardware used for UCSF's research was big, expensive, and not all that wearable, Chevillet admitted. But the team has developed a cheaper and more wearable version, using lower cost components and some custom electronics. This so-called research kit, shown in the July blog post [photo below], is currently being tested to confirm that it is just as sensitive as the larger device, he says.
Photo: Facebook. An early research kit of a wearable brain-computer interface device, built by Facebook Reality Labs.

Meanwhile, the researchers are focusing their efforts on speed and noise reduction.
"We are measuring the hemodynamic response," Chevillet says, "which peaks about five seconds after the brain signal." The current system detects the response at the peak, which may be too slow for a truly useful brain-computer interface. "We could detect it earlier, before the peak, if we can drive up our signal and drive down the noise," says Chevillet.
The new headsets will help this effort, Chevillet indicated, because the biggest source of noise is movement. The smaller headset sits tightly on the head, resulting in fewer shifts in position than is the case with the larger research device.
The team is also looking into increasing the size of the optical fibers that collect the signal in order to detect more photons, he says.
And it has built and is testing a system that uses time-domain measurement to eliminate noise, Chevillet reports. By sending in pulses of light instead of continuous light, he says, the team hopes to distinguish the photons that travel only through the scalp and skull before being reflected back (the noise) from those that actually make it into brain tissue. "We hope to have the results to report out later this year," he says.
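The idea behind such time gating, sketched below under assumed (made-up) arrival-time distributions rather than Facebook's measured ones, is that photons grazing only the scalp and skull travel shorter paths and return earlier than photons that scatter through brain tissue, so a late time gate preferentially keeps the brain signal:

```python
import numpy as np

# Toy sketch of time-domain gating. The gamma-distributed arrival
# times are invented for illustration, not measured device data.

rng = np.random.default_rng(0)

# Simulated photon arrival times (picoseconds after the laser pulse).
shallow = rng.gamma(shape=2.0, scale=150.0, size=50_000)  # scalp/skull
deep = rng.gamma(shape=4.0, scale=300.0, size=5_000)      # brain tissue

arrivals = np.concatenate([shallow, deep])
is_deep = np.concatenate([np.zeros(len(shallow), bool),
                          np.ones(len(deep), bool)])

gate_ps = 1200.0                 # accept only photons arriving after this
kept = arrivals > gate_ps

print(f"brain photons, ungated: {is_deep.mean():.1%} of detections")
print(f"brain photons, gated:   {is_deep[kept].mean():.1%} of detections")
```

In this toy run the gate raises the brain-photon fraction from under 10 percent to over 90 percent, at the cost of discarding most detected photons, which is exactly why photon count matters elsewhere in the design.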
Another way to improve the signal-to-noise ratio of the device, he suggests, is increasing the contrast. You can't necessarily increase the brightness of the light, he says; it has to stay below a safe level for brain tissue. But the team can increase the number of pixels in the photodetector array. "We are trying a 32-by-32-pixel single-photon detector array to see if we can improve the signal-to-noise ratio, and will report that out later this year," Chevillet says.
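The statistics behind adding pixels are straightforward: photon counting obeys Poisson statistics, so summing N independent pixels grows the signal N times faster than the shot noise, improving SNR by a factor of sqrt(N). A quick simulation with made-up photon counts (not the device's actual budget) shows the scaling:

```python
import numpy as np

# Shot-noise scaling: summing N Poisson-distributed pixels yields
# SNR = sqrt(N * mean_count). Counts here are illustrative only.

rng = np.random.default_rng(1)
mean_photons_per_pixel = 100.0

for n_pixels in (1, 4 * 4, 32 * 32):
    trials = rng.poisson(mean_photons_per_pixel, size=(10_000, n_pixels))
    summed = trials.sum(axis=1)
    snr = summed.mean() / summed.std()
    print(f"{n_pixels:4d} pixels -> SNR ~ {snr:6.1f} "
          f"(theory: {np.sqrt(n_pixels * mean_photons_per_pixel):.1f})")
```

Going from a single pixel to a 32-by-32 array buys a 32-fold SNR improvement in this idealized model, assuming the pixels' noise is independent.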
But, he admits, "even with what we are doing to get a better signal, it will be noisy."
That's why, Chevillet explained, the company is focusing on detecting the mental efforts that produce speech; it doesn't actually read random thoughts. "We can use noisy signals with speech algorithms," he says, "because we have speech algorithms that have been trained on huge amounts of audio, and we can transfer that training over."
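One common way to realize that kind of transfer, sketched here as an interpretation rather than Facebook's published architecture, is to freeze a decoder pretrained on audio and train only a small adapter that maps noisy brain-derived features into the decoder's input space. All module shapes and the stand-in "pretrained" network below are illustrative placeholders:

```python
import torch
import torch.nn as nn

# Conceptual sketch of audio-to-brain transfer (an assumption, not
# Facebook's published method): freeze a speech decoder "pretrained"
# on audio, train only a small adapter on noisy brain features.

N_BRAIN_FEATURES = 64      # e.g., optical channels x time features
N_ACOUSTIC_FEATURES = 80   # e.g., mel bins the decoder expects
N_WORDS = 16               # small closed vocabulary ("home", "select", ...)

# Stand-in for a decoder pretrained on audio; frozen here.
speech_decoder = nn.Sequential(
    nn.Linear(N_ACOUSTIC_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, N_WORDS),
)
for p in speech_decoder.parameters():
    p.requires_grad = False

# Small trainable adapter: brain features -> "acoustic-like" features.
adapter = nn.Linear(N_BRAIN_FEATURES, N_ACOUSTIC_FEATURES)
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake training batch: noisy brain features with word labels.
brain_signals = torch.randn(32, N_BRAIN_FEATURES)
word_labels = torch.randint(0, N_WORDS, (32,))

for step in range(100):
    logits = speech_decoder(adapter(brain_signals))
    loss = loss_fn(logits, word_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"adapter-only training loss after 100 steps: {loss.item():.3f}")
```

The appeal of this setup is that the data-hungry speech knowledge stays in the frozen decoder, so only the small adapter has to be learned from scarce, noisy brain recordings.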
This approach to the brain-computer interface is intriguing but won't be easy to pull off, says Roozbeh Ghaffari, a biomedical researcher at Northwestern University and CEO of Epicore Biosystems. "There may indeed be ways to relate neurons firing to changes in local blood oxygenation levels," Ghaffari told Spectrum. "But the changes in blood oxygenation levels that map to neuronal activity are highly localized; the ability to map these localized changes to speech activity-from the skin surface, on a cycle-by-cycle basis-could be challenging."