An ALS Patient Set a Record For Communicating Via a Brain Implant: 62 Words Per Minute
An anonymous reader quotes a report from MIT Technology Review: Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig's disease, which causes progressive paralysis. She can still make sounds, but her words have become unintelligible, leaving her reliant on a writing board or iPad to communicate. Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases like "I don't own my home" and "It's just tough" at a rate approaching normal speech. That is the claim in a paper published over the weekend on the website bioRxiv by a team at Stanford University. The study has not yet been formally reviewed by other researchers. The scientists say their volunteer, identified only as "subject T12," smashed previous records by using the brain-reading implant to communicate at 62 words a minute, three times the previous best. [...] People without speech deficits typically talk at a rate of about 160 words a minute. Even in an era of keyboards, thumb-typing, emojis, and internet abbreviations, speech remains the fastest form of human-to-human communication.

The brain-computer interfaces that [co-lead author Krishna Shenoy's] team works with involve a small pad of sharp electrodes embedded in a person's motor cortex, the brain region most involved in movement. This allows researchers to record activity from a few dozen neurons at once and find patterns that reflect what motions someone is thinking of, even if the person is paralyzed. In previous work, paralyzed volunteers have been asked to imagine making hand movements. By "decoding" their neural signals in real time, implants have let them steer a cursor around a screen, pick out letters on a virtual keyboard, play video games, and even control a robotic arm. In the new research, the Stanford team wanted to know whether neurons in the motor cortex contain useful information about speech movements, too. That is, could they detect how "subject T12" was trying to move her mouth, tongue, and vocal cords as she attempted to talk? These are small, subtle movements, and according to Sabes, one big discovery is that just a few neurons contained enough information to let a computer program predict, with good accuracy, what words the patient was trying to say. Shenoy's team relayed that information to a computer screen, where the patient's words appeared as the computer spoke them aloud. [...]

The current system already uses a couple of types of machine-learning programs. To improve its accuracy, the Stanford team employed software that predicts which word typically comes next in a sentence: "I" is more often followed by "am" than "ham," even though these words sound similar and could produce similar patterns in someone's brain. Adding the word-prediction system increased how quickly the subject could communicate without mistakes.
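The preprint's decoder is only described at a high level here, but the general idea it builds on (map binned firing rates from a few dozen electrodes to the speech sound the person is attempting) can be sketched with a generic classifier. The electrode count, phoneme labels, and synthetic data below are assumptions for illustration only, not the Stanford team's actual pipeline.

```python
# Illustrative sketch only: decoding attempted speech movements from
# multi-electrode firing rates with a simple linear classifier. The real
# study uses a far more sophisticated decoder; shapes and labels here
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_ELECTRODES = 64   # roughly "a few dozen" recorded channels (assumption)
N_PHONEMES = 39     # hypothetical label set of English speech sounds
N_TRIALS = 2000     # labeled snapshots of binned firing rates

# Synthetic training data: each row is a vector of firing rates recorded
# while the participant attempts a known phoneme.
X = rng.poisson(lam=5.0, size=(N_TRIALS, N_ELECTRODES)).astype(float)
y = rng.integers(0, N_PHONEMES, size=N_TRIALS)

# Fit a decoder: firing-rate pattern -> intended phoneme.
decoder = LogisticRegression(max_iter=1000)
decoder.fit(X, y)

# At runtime, each new bin of neural activity is turned into a probability
# distribution over phonemes, which downstream software assembles into words.
new_bin = rng.poisson(lam=5.0, size=(1, N_ELECTRODES)).astype(float)
phoneme_probs = decoder.predict_proba(new_bin)
print(phoneme_probs.argmax(), phoneme_probs.max())
```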
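The word-prediction step works much like the language models used in speech recognition: candidate words produced by the neural decoder are rescored by how likely they are to follow the words already written. A toy bigram rescorer, with made-up probabilities, shows how "I am" wins over the similar-sounding "I ham"; none of the numbers or weights below come from the paper.

```python
# Minimal sketch of word-prediction rescoring. Decoder scores and bigram
# probabilities are invented for illustration.
import math

# Hypothetical decoder output: how well each candidate word matches the
# neural signal (nearly indistinguishable for "am" vs. "ham").
decoder_scores = {"am": 0.48, "ham": 0.52}

# Tiny bigram language model: P(next word | previous word).
bigram_probs = {
    ("i", "am"): 0.20,
    ("i", "ham"): 0.00001,
}

def rescore(prev_word, candidates, lm_weight=1.0):
    """Pick the candidate with the best combined decoder + language-model score."""
    ranked = []
    for word, p_decode in candidates.items():
        p_lm = bigram_probs.get((prev_word.lower(), word), 1e-8)
        score = math.log(p_decode) + lm_weight * math.log(p_lm)
        ranked.append((score, word))
    return max(ranked)[1]

print(rescore("I", decoder_scores))  # -> "am"
```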
Read more of this story at Slashdot.