The Biggest Questions: Is it possible to really understand someone else’s mind?
Technically speaking, neuroscientists have been able to read your mind for decades. It's not easy, mind you. First, you must lie motionless within the narrow bore of a hulking fMRI scanner, perhaps for hours, while you watch films or listen to audiobooks. Meanwhile, the machine will bang and knock as it records the shifting patterns of blood flow within your brain, a proxy for neural activity. The researchers, for whose experiment you have volunteered, will then feed the moment-to-moment pairings of blood flow and movie frames or spoken words to software that will learn the particularities of how your brain responds to the things it sees and hears.
None of this, of course, can be done without your consent; for the foreseeable future, your thoughts will remain your own, if you so choose. But if you do elect to endure those claustrophobic hours in the scanner, the software will learn to generate a bespoke reconstruction of what you were seeing or listening to, just by analyzing how blood moves through your brain.
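Much of this decoding work boils down to regularized linear modeling: pair each scan with features of the stimulus shown at that moment, fit a map on most of the pairs, and test it on held-out scans. The sketch below illustrates that pairing logic on synthetic data; the ridge decoder, the feature representation, and all array sizes are illustrative assumptions, not any particular lab's pipeline.

```python
# Minimal sketch of fMRI stimulus decoding, on synthetic data. Real
# pipelines involve many more voxels, heavy preprocessing, and models
# of the slow hemodynamic response; only the pairing logic is shown.
# The ridge decoder and all array sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scans, n_voxels, n_features = 1000, 500, 20

stim = rng.normal(size=(n_scans, n_features))   # e.g., movie-frame features
weights = rng.normal(size=(n_features, n_voxels))
brain = stim @ weights + rng.normal(scale=5.0, size=(n_scans, n_voxels))

# Learn the brain-to-stimulus map on paired samples; hold out scans to test.
X_train, X_test, y_train, y_test = train_test_split(brain, stim, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = decoder.predict(X_test)

# How well do decoded features track the true ones, scan by scan?
corrs = [np.corrcoef(p, t)[0, 1] for p, t in zip(predicted, y_test)]
print(f"mean reconstruction correlation: {np.mean(corrs):.2f}")
```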
Back in 2011, UC Berkeley neuroscientists trained such a program to create ethereal doubles of the videos their subjects had been watching. More recently, researchers have deployed generative AI tools, like Stable Diffusion and GPT, to create far more realistic, if not entirely accurate, reconstructions of films and podcasts based on neural activity. Given the hype, and financial investment, that generative AI has attracted, this kind of stimulus reconstruction technology will inevitably continue to improve, especially if Elon Musk's Neuralink succeeds in bringing brain implants to the masses.
But as exciting as the idea of extracting a movie from someone's brain activity may be, it is a highly limited form of mind reading. To really experience the world through your eyes, scientists would have to be able to infer not just what film you are watching but also what you think about it, how it makes you feel, and what it reminds you of. These interior thoughts and feelings are far more difficult to access. Scientists have managed to infer which specific object, out of two possibilities, someone was dreaming about; but in less constrained settings, such approaches struggle.
That's because machine-learning algorithms need both brain signals and information about what they correspond to, paired in perfect synchrony, to learn what the signals mean. When studying inner experience, all scientists have to go on is what people say is going on inside their head, and that can be unreliable. "It's not like it's directly measuring as a ground truth what people experienced," says Raphaël Millière, a lecturer in philosophy at Macquarie University in Australia.
Tying brain activity to subjective experience requires facing up to the slipperiness and inexactitude of language, particularly when deployed to capture the richness of one's inner life. In order to meet that demanding brief, scientists like Millière are marrying contemporary artificial intelligence with centuries-old techniques, from philosophical interview strategies to ancient meditation practices. Bit by bit, they are starting to suss out some of the brain regions and networks that give rise to specific dimensions of human experience.
"That's a problem we can make, and have made, some progress on," Millière says. "I'm not saying it's easy, but I think it's certainly more tractable than solving the grand mystery of consciousness."
Going to extremes
Over 300 years ago, the philosopher John Locke asked whether the color blue looks the same to everyone, or whether my experience of "blue" might be closer to your experience of "yellow." Answering such subtle questions could be a distant horizon toward which the neuroscience of experience might aim. In its current, early stage, however, the field has to address itself to much more dramatic forms of experience. "If we want to get a better grasp of what is distinctive about the ordinary, wakeful states in our daily lives, it's useful to see what happens when you undergo some transition into a different kind of state," Millière says.
Some scientists focus on deep states of meditation or intense hallucinations. For his part, Millière is particularly interested in understanding self-consciousness (the awareness of oneself as a thinking, feeling individual in a particular place and time), and so he studies what happens to someone's brain during a psychedelic trip. By comparing subjects' post-trip responses to questions like "I experienced a disintegration of my 'self' or 'ego'" with their brain activity patterns, researchers have discovered some changes that may be linked to the loss of self-consciousness. For example, the default mode network (DMN), a group of brain regions that all become active when people are lost in thought, tends to lose its typical coordination.
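The "coordination" in question is usually quantified as functional connectivity: how strongly the activity time series of the network's regions correlate with one another. Here is a minimal sketch of that measure, on synthetic time series rather than real fMRI data.

```python
# Hedged sketch: "coordination" as mean pairwise correlation between
# region time series. The time series are synthetic; in real studies
# they are extracted from preprocessed fMRI scans of DMN regions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_regions = 300, 4  # e.g., four DMN regions

def mean_connectivity(ts):
    """Average off-diagonal correlation across all region pairs."""
    r = np.corrcoef(ts.T)
    return r[np.triu_indices_from(r, k=1)].mean()

shared = rng.normal(size=(n_timepoints, 1))  # common network fluctuation

# Baseline: regions ride a strong shared signal (a coordinated network).
baseline = shared + 0.5 * rng.normal(size=(n_timepoints, n_regions))
# After the dose: the shared component weakens, so regions decorrelate.
trip = 0.3 * shared + rng.normal(size=(n_timepoints, n_regions))

print(f"baseline connectivity:  {mean_connectivity(baseline):.2f}")
print(f"post-dose connectivity: {mean_connectivity(trip):.2f}")
```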
Taking a high dose of psychedelics is certainly the easiest way to lose one's sense of self while awake. But if drugs aren't your thing, there is another option: spend tens of thousands of hours practicing meditation. Highly skilled practitioners of Buddhist meditation can voluntarily enter a state in which the boundary between themselves and the world begins to seem porous, or even disappears entirely. Interestingly, such states are also associated with activity changes in some core regions of the default mode network, like the posterior cingulate cortex.
Because the potential pool of subjects is so much smaller, studying meditators can be a trickier way of getting at extreme experiences. But meditators also have some distinctive benefits as experimental subjects, says Sara Lazar, associate professor of psychiatry at Harvard Medical School. Expert meditators are masters of their own internal lives: they can spontaneously produce feelings of profound gratitude or descend into states of deep focus, and they tend to report their inner experiences in far more detail than untrained people are able to. "It's because we spend so much time just listening and paying attention to what's actually going on inside of us," says Lazar, herself an experienced meditator.
We non-meditators are sometimes so unaware of what's going on in our own heads that when our minds start to wander, which they often do, we might not even notice what is happening. In order to study what the brain does at such times, Kalina Christoff, a psychologist at the University of British Columbia, had to periodically prompt her subjects to consider whether their minds had, at that moment, been wandering, and whether they had realized that they'd lost their focus. Frequently, they did not. Her subjects' default mode networks were more active while their minds were wandering, and especially so when they were unaware that it was happening.
To investigate the onset of mind wandering in more detail, however, Christoff had to turn to experienced meditators, who could detect it the moment it occurred. Only with their assistance was she able to determine that the DMN is particularly active in the moments just before the mind begins to drift away.
Altogether, these results paint a fairly coherent picture. When you are wondering what to have for dinner or worrying over a recent disagreement with a friend, your DMN switches on; but in states of intense, selfless focus, the network deactivates or desynchronizes. But that doesn't mean scientists can tell whether you are conscious of yourself, or whether your head is in the clouds, just by looking at your brain activity. In one study, researchers were able to decode particular internal states (a focus on the breath, a focus on sounds, and a wandering mind, for example) at a better rate than would be expected by chance, but they still got it wrong more than half the time. And these coarse descriptions of people's inner states hardly paint a complete picture of what it's like to be them.
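For a sense of what "better than chance but wrong more than half the time" looks like in practice, here is a hedged sketch of three-state decoding with a cross-validated classifier. The data are synthetic, and the logistic-regression decoder is an assumption for illustration, not the published study's method.

```python
# Hedged sketch of decoding three internal states (breath focus, sound
# focus, mind wandering) from activity patterns, using synthetic data.
# With three classes, chance is about 0.33; the study described above
# landed between chance and 0.50. The logistic-regression decoder and
# all array sizes here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_class, n_voxels = 60, 100
labels = np.repeat([0, 1, 2], n_per_class)  # three internal states

# Weak class separation mimics a hard, noisy decoding problem.
means = rng.normal(scale=0.08, size=(3, n_voxels))
patterns = means[labels] + rng.normal(size=(len(labels), n_voxels))

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f} (chance ~ 0.33)")
```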
Even so, Lazar thinks brain data might help us better understand our own experiences. Deactivation of the default mode network, and of the posterior cingulate cortex in particular, is associated with states of "effortless focus" that beginning meditators often struggle to attain. So some researchers are testing whether seeing live data from their own brains, in a process called neurofeedback, could help people learn to meditate. "Once you've felt the right state at least once or twice, then you know: okay, this is what I'm going for, this is what I'm aiming for," Lazar says. "Now I know what this feels like."
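The mechanics of such a loop are simple in outline: read out a signal from the target region, smooth it, and map it to a display the subject can watch. The toy loop below is entirely hypothetical; it simulates a posterior cingulate signal rather than streaming one from a scanner.

```python
# Entirely hypothetical toy neurofeedback loop: a simulated posterior
# cingulate signal is smoothed and mapped to a text gauge the meditator
# could watch. Real systems stream data from the scanner in real time.
import numpy as np

rng = np.random.default_rng(3)

def pcc_signal(t):
    """Simulated PCC activity that drifts downward as focus deepens."""
    return 1.0 - 0.02 * t + rng.normal(scale=0.3)

smoothed, alpha = 1.0, 0.2  # exponential moving average of the signal
for t in range(30):
    smoothed = (1 - alpha) * smoothed + alpha * pcc_signal(t)
    # Lower PCC activity (deeper effortless focus) fills the gauge.
    gauge = "#" * max(0, int(10 * (1 - smoothed)))
    print(f"t={t:02d}  feedback: [{gauge:<10}]")
```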
Asking the right questions
If you're a neuroscientist interested in subjective experience, times are relatively good: research on psychedelics and meditation has exploded in the past decade, and noninvasive neuroimaging technologies are only growing more powerful and precise. But the data means little without a solid indication of what the subject is experiencing, and the only way to obtain that information is to ask. "We simply cannot do away with reports of some sort," Millière says.
Psychological questionnaires are one approach. They're conveniently quantitative, and they're easy to use, but they require subjects to slot their transcendent experiences into preestablished, and potentially ill-fitting, boxes. There are alternatives. Phenomenology, the branch of philosophy that seeks to analyze first-person experience in rigorous, exacting detail, has had over a century to refine its techniques for obtaining such reports (three times as long as the fMRI machine has existed). Millière has organized training sessions for his neuroscientist colleagues in "micro-phenomenology," a philosophical interview method that seeks to elicit as much experiential information from a subject as possible without leading the responses in any particular direction.
But long textual descriptions, of the sort produced by a micro-phenomenological interview, are much trickier to parse than questionnaires. Researchers can manually rate each response according to the attributes that interest them, but that can be a messy and labor-intensive process, and it robs interviews of much of the nuance that makes them so valuable. Natural-language-processing algorithms, like those that power ChatGPT, may offer a more efficient and consistent alternative: they can quickly and automatically analyze large volumes of text for particular features. Already, Millière has experimented with applying natural-language processing to reports of psychedelic experiences from online databases like Erowid and discovered that the resulting characterizations correspond well to data obtained from questionnaires.
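One way such an analysis can work, sketched below with invented reports and dimension keywords: represent both the free-text reports and a set of experiential dimensions as vectors, then score each report by similarity. Millière's actual pipeline is surely more sophisticated than this TF-IDF toy.

```python
# Hedged sketch: scoring free-text experience reports against a few
# experiential dimensions via TF-IDF similarity. The reports, dimension
# phrases, and method are all invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

dimensions = {
    "ego dissolution": "loss of self boundary dissolved ego identity",
    "visual imagery": "colors patterns geometric visuals fractal images",
}
reports = [
    "I felt my sense of self dissolve into the room around me.",
    "Intense geometric patterns and shifting colors filled my vision.",
]

vec = TfidfVectorizer().fit(list(dimensions.values()) + reports)
dim_vecs = vec.transform(dimensions.values())
rep_vecs = vec.transform(reports)

scores = cosine_similarity(rep_vecs, dim_vecs)  # reports x dimensions
for report, row in zip(reports, scores):
    best = max(zip(dimensions, row), key=lambda kv: kv[1])
    print(f"{report[:40]!r} -> {best[0]} ({best[1]:.2f})")
```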
Even with the help of micro-phenomenology, however, wrapping up what's going on inside your head into a neat verbal package is a daunting task. So instead of asking subjects to struggle to represent their experiences in words, some scientists are using technology to try to reproduce those experiences. That way, all subjects need to do is confirm or deny that the reproductions match what's happening in their heads.
In a study that has not yet been peer reviewed, a team of scientists from the University of Sussex, UK, attempted to devise such a question by simulating visual hallucinations with deep neural networks. Convolutional neural networks, which were originally inspired by the human visual system, typically take an image and turn it into useful information: a description of what the image contains, for example. Run the network backward, however, and you can get it to produce images, phantasmagoric dreamscapes that provide clues about the network's inner workings.
The idea was popularized in 2015 by Google, in the form of a program called DeepDream. Like people around the world, the Sussex team started playing with the system for fun, says Anil Seth, a professor of neuroscience and one of the study's coauthors. But they soon realized that they might be able to leverage the approach to reproduce various unusual visual experiences.
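The "running backward" at the heart of DeepDream is activation maximization: nudge the input image, by gradient ascent, in whatever direction most amplifies a chosen layer's activations. Below is a minimal sketch in PyTorch; a tiny randomly initialized network stands in for a pretrained one, so the resulting image is noise-like, but the optimization loop is the same in spirit.

```python
# Hedged sketch of DeepDream-style activation maximization in PyTorch:
# gradient ascent on the input image itself, to amplify one layer's
# activations. A tiny randomly initialized CNN stands in for a trained
# network, so the result is noise-like; the loop is the point.
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(  # stand-in for a pretrained convolutional network
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    activations = net(image)
    loss = -activations.norm()  # minimize the negative = gradient ascent
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)      # keep pixels in a displayable range

print(f"final activation norm: {net(image).norm().item():.1f}")
```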
Drawing on verbal reports from people with hallucination-causing conditions like vision loss and Parkinson's, as well as from people who had recently taken psychedelics, the team designed an extensive menu of simulated hallucinations. That allowed them to obtain a rich description of what was going on in subjects' minds by asking them a simple question: Which of these images best matches your visual experience? The simulations weren't perfect, although many of the subjects were able to find an approximate match.
Unlike the decoding research, this study involved no brain scans. But, Seth says, it may still have something valuable to say about how hallucinations work in the brain. Some deep neural networks do a respectable job of modeling the inner mechanisms of the brain's visual regions, and so the tweaks that Seth and his colleagues made to the network may resemble the underlying "biological tweaks" that made the subjects hallucinate. "To the extent that we can do that," Seth says, "we've got a computational-level hypothesis of what's happening in these people's brains that underlie these different experiences."
This line of research is still in its infancy, but it suggests that neuroscience might one day do more than simply tell us what someone else is experiencing. By using deep neural networks, the team was able to bring its subjects' hallucinations out into the world, where anyone could share in them.
Externalizing other sorts of experiences would likely prove far more difficult; deep neural networks do a good job of mimicking senses like vision and hearing, but they can't yet model emotions or mind-wandering. As brain modeling technologies advance, however, they could bring with them a radical possibility: that people might not only know, but actually share, what is going on in someone else's mind.