A Brain Scanner Combined With an AI Language Model Can Provide a Glimpse Into Your Thoughts
An anonymous reader quotes a report from Scientific American: Functional magnetic resonance imaging (fMRI) captures coarse, colorful snapshots of the brain in action. While this specialized type of magnetic resonance imaging has transformed cognitive neuroscience, it isn't a mind-reading machine: neuroscientists can't look at a brain scan and tell what someone was seeing, hearing or thinking in the scanner. But scientists are gradually pushing against that fundamental barrier to translate internal experiences into words using brain imaging. This technology could help people who can't speak or otherwise outwardly communicate, such as those who have suffered strokes or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require the implantation of devices in the brain, but neuroscientists hope to use noninvasive techniques such as fMRI to decipher internal speech without the need for surgery.

Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy. "There's a lot more information in brain data than we initially thought," said Jerry Tang, a computational neuroscientist at the University of Texas at Austin and the study's lead author, during a press briefing. The research, published on Monday in Nature Neuroscience, is what Tang describes as "a proof of concept that language can be decoded from noninvasive recordings of brain activity."

The decoder technology is in its infancy. It must be trained extensively for each person who uses it, and it doesn't construct an exact transcript of the words they heard or imagined. But it is still a notable advance. Researchers now know that the AI language system, an early relative of the model behind ChatGPT, can help make informed guesses about the words that evoked brain activity just by looking at fMRI brain scans. While current technological limitations prevent the decoder from being widely used, for good or ill, the authors emphasize the need to enact proactive policies that protect the privacy of people's internal mental processes. [...]

The model misses a lot about the stories it decodes. It struggles with grammatical features such as pronouns. It can't decipher proper nouns such as names and places, and sometimes it just gets things wrong altogether. But it achieves a high level of accuracy compared with past methods: for between 72 and 82 percent of the time in the stories, the decoder captured their meaning more accurately than would be expected from random chance. Here's an example of what one study participant heard, as transcribed in the paper: "i got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness." The model went on to decode: "i just continued to walk up to the window and open the glass i stood on my toes and peered out i didn't see anything and looked up again i saw nothing." The research was published in the journal Nature Neuroscience.
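To make the approach the article sketches more concrete: the decoder pairs a language model, which proposes candidate wordings, with a per-participant encoding model that predicts the fMRI response each candidate would evoke; the candidate whose predicted response best matches the observed scan wins. The toy Python sketch below illustrates only that general idea and is not the authors' code: the embedding function, the linear encoding weights, the candidate sentences, and the simulated scan are all hypothetical placeholders standing in for a real language model and an encoding model fit from hours of training scans.

```python
# Toy sketch of score-candidates-against-the-scan decoding.
# Everything here is a hypothetical placeholder, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 50     # hypothetical number of fMRI voxels
N_FEATURES = 16   # hypothetical size of a text-feature vector

# Hypothetical encoding model: a linear map from text features to voxel
# activity. In a real system this would be fit separately for each person.
encoding_weights = rng.normal(size=(N_FEATURES, N_VOXELS))

def embed(text: str) -> np.ndarray:
    """Stand-in text embedding (a real decoder would use a language model)."""
    vec = np.zeros(N_FEATURES)
    for i, ch in enumerate(text.lower()):
        vec[i % N_FEATURES] += (ord(ch) % 31) / 31.0
    return vec / (np.linalg.norm(vec) + 1e-8)

def predict_response(text: str) -> np.ndarray:
    """Voxel response the encoding model expects this text to evoke."""
    return embed(text) @ encoding_weights

def score(candidate: str, observed: np.ndarray) -> float:
    """Correlation between the predicted and the observed voxel responses."""
    predicted = predict_response(candidate)
    return float(np.corrcoef(predicted, observed)[0, 1])

# Hypothetical candidate continuations a language model might propose.
candidates = [
    "i pressed my face against the window",
    "the dog ran across the yard",
    "we ordered coffee and sat down",
]

# Simulate an "observed" scan as the response to the first candidate plus noise.
observed_scan = predict_response(candidates[0]) + rng.normal(scale=0.1, size=N_VOXELS)

# Keep whichever candidate best explains the observed scan.
best = max(candidates, key=lambda c: score(c, observed_scan))
print("decoded guess:", best)
```

In this toy setup the first candidate is recovered because the simulated scan was generated from it; the real decoder searches over far more candidates and, as the article notes, still misses pronouns, proper nouns and other details.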
Read more of this story at Slashdot.