Tech that aims to read your mind and probe your memories is already here
This article is from The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.
Earlier this week, I had a fascinating call with Nita Farahany, a futurist and legal ethicist at Duke University in Durham, North Carolina. Farahany has spent much of her career exploring the impacts of new technologies, in particular those that attempt to understand or modify our brains.
In recent years, we've seen neurotechnologies move from research labs to real-world use. Schools have used some devices to monitor children's brain activity and tell when they are paying attention. Police forces are using others to work out whether someone is guilty of a crime. And employers use them to keep workers awake and productive.
These technologies hold the remarkable promise of giving us all-new insight into our own minds. But our brain data is precious, and letting it fall into the wrong hands could be dangerous, Farahany argues in her new book, The Battle for Your Brain. I chatted with her about some of her concerns.
The following interview has been edited for length and clarity.
Your book describes how technologies that collect and probe our brain data might be used, for better or for worse. What can you tell from a person's brain data?
When I talk about brain data, I'm referring to the use of EEG, fNIRS [functional near-infrared spectroscopy], fMRI [functional magnetic resonance imaging], EMG, and other modalities that collect biological, electrophysiological, and other signals from the human brain. These devices tend to collect data from across the brain, and you can then use software to try to pick out a particular signal.
Brain data is not thought. But you can use it to make inferences about what's happening in a person's mind. There are brain states you can decode: tired, paying attention, mind-wandering, engaged, bored, interested, happy, sad. You could work out how a person is thinking or feeling, whether they are hungry, or whether they are a Democrat or a Republican.
You can also pick up a person's reactions, and try to probe the brain for information and figure out what's in their memory or their thought patterns. You could show them numbers to try to figure out their PIN, or images of political candidates to find out if they have more positive or negative reactions. You can probe for biases, but also for substantive knowledge that a person holds, such as recognition of a crime scene or a password.
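For readers curious about the mechanics, here is a minimal, illustrative Python sketch of the kind of signal processing this decoding builds on. It is a toy example, not drawn from Farahany's book or any particular product: it estimates EEG band power on a synthetic one-channel recording and uses the beta-to-alpha power ratio as a crude stand-in for attention versus relaxation. The sampling rate, band limits, and interpretation are all assumptions made for demonstration.

# Illustrative only: toy EEG band-power analysis, the sort of feature
# extraction consumer neurotech software builds on. All numbers and the
# "engagement" interpretation are assumptions for demonstration.
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)

# Synthetic single-channel "EEG": a 10 Hz alpha rhythm, a weaker 20 Hz beta
# rhythm, and broadband noise standing in for muscle twitches and hair.
eeg = (
    1.0 * np.sin(2 * np.pi * 10 * t)
    + 0.4 * np.sin(2 * np.pi * 20 * t)
    + 0.8 * np.random.randn(t.size)
)

# Estimate the power spectral density, then integrate it over standard bands.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])

alpha = band_power(8, 12)   # often associated with relaxed wakefulness
beta = band_power(13, 30)   # often associated with active concentration

# A crude "engagement" proxy; real products layer far more on top of this.
print(f"beta/alpha ratio: {beta / alpha:.2f}")

Consumer devices add machine-learning classifiers and much more aggressive artifact rejection on top of features like these, but the basic move, turning raw voltage traces into labeled mental states, is the same.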
Until now, most people have only encountered their brain data through medical exams, and our health records are protected. What about brain data collected by consumer products?
I feel like we're at an inflection point. [A lot of] consumer devices are hitting the market this year and in the next two years. There have been huge advances in the AI that allows us to decode brain activity, and in the miniaturization of electrodes, which [allows manufacturers] to put them into earbuds and headphones. And there has been significant investment from big tech companies. It is, I believe, about to become ubiquitous.
The only person who has access to your brain data right now is you, and it is only analyzed in the internal software of your mind. But once you put a device on your head ... you're immediately sharing that data with whoever the device manufacturer is, and whoever is offering the platform. It could also be shared with any government or employer that might have given you the device.
Is that always a bad thing?
It's transformational for individuals to have access to their own brain data, in a good way. The brain has always been this untouchable and inaccessible area of our bodies. And suddenly that's in the hands of individuals. The relationship we're going to have with ourselves is going to change.
If scientists and researchers have access to that data, it could help them understand brain dysfunction, which could lead to the development of new treatments for neurological disease and mental illness.
The collection or creation of the data isn't what's problematic; it's when the data is used in ways that are harmful to individuals, collectives, or groups. And the problem is that this can happen very quickly.
An authoritarian government with access to it could use it to try to identify people who don't show political adherence, for example. That's a pretty quick and serious misuse of the data. Or it could be used to identify people who are neuroatypical, and to discriminate against or segregate them. In a workplace, it could be used to dehumanize individuals by subjecting them to neurosurveillance. All of that simultaneously becomes possible.
Some consumer products, such as headbands and earbuds that purport to measure your brain activity and induce a sense of calm, have been dismissed as gimmicks by some scientists.
Very much so. The hardcore BCI [brain-computer interface] folks who are working on serious implanted [devices] to revolutionize and improve health will say ... you're not picking up much real information. The signal is distorted by noise: muscle twitches and hair, for example. But that doesn't mean there's no signal. There are still meaningful things you can pick up. I think people dismiss it at their peril. They don't know what's happening in the field: the advances, and how rapidly they're coming.
In the book, you give a few examples of how these technologies are already being used by employers. Some devices are used to monitor how awake and alert truck drivers are, for example.
That's not such a terrible use, from my perspective. You can balance the individual's interest in mental privacy against society's interest in keeping others on the road safe, and in keeping the driver safe.
And giving employees the tools to have real-time neurofeedback [being able to monitor your own brain activity] to understand their own stress or attention levels is also starting to become widespread. If it's given to individuals to use for themselves as a tool of self-reflection and improvement, I don't find that to be problematic.
The problem comes if it's used as a mandatory tool, and employers gather data to make decisions about hiring, firing, and promotions. They turn it into a kind of productivity score. Then I think it becomes really insidious and problematic. It undermines trust ... and can make the workplace dehumanizing.
You also describe how corporations and governments might use our brain data. I was especially intrigued by the idea of targeted dream incubation ...
This is the stuff of the movie Inception! [Brewing company] Coors teamed up with a dream researcher to incubate volunteers' dreams with thoughts of mountains and fresh streams, and ultimately associate those thoughts with Coors beer. To do this, they played soundscapes to the volunteers when they were just waking up or falling asleep, times when our brains are the most suggestible.
It's icky for so many reasons. It is about literally looking for the moments when you're least able to protect your own mind, and then attempting to create associations in your mind. It starts to feel a lot like the kind of manipulation that should be off limits.
They recruited consenting volunteers. But could this be done without people's consent? Apple has a patent on a sleep mask with EEG sensors embedded in it, and LG has showcased EEG earbuds for sleep, for example. Imagine if any of these sensors could pick up when you're at your most suggestible, and connect to a nearby cell phone or home device to play a soundscape to manipulate your thinking. Don't you think it's creepy?
Yes, I do! How can we prevent this from happening?
I'm actively talking to a lot of companies, and telling them they need to have really robust privacy policies. I think people should be able to experiment with devices without worrying about what the implications might be.
Have those companies been receptive to the idea?
Most neurotech companies that I've talked with recognize the issues, and are trying to come forward with solutions and be responsible. I've been very encouraged by their sincerity. But I've been less impressed with some of the big tech companies. As we've seen with the recent major layoffs, the ethics people are some of the first to go at those companies.
Given that these smaller neurotech companies are getting acquired by the big titans in tech, I'm less confident that the brain data they collect will remain under their privacy policies. The commodification of data is the business model of these big companies. I don't want to leave it to companies to self-govern.
What else can we do?
My hope is that we immediately move toward adopting a right to cognitive liberty, a novel human right that in principle exists within existing human rights law.
I think of cognitive liberty as an umbrella concept made up of three core principles: mental privacy, freedom of thought, and self-determination. That last principle covers the right to access our own brain information, to know our own brains, and to change our own brains.
It's an update to our general conception of liberty to recognize what liberty needs to look like in the digital age.
How likely is it that we'll be able to implement something like this?
I think it's actually quite likely. The UN Human Rights Committee can, through a general comment or opinion, recognize the right to cognitive liberty. It doesn't require a political process at the UN.
But will it be implemented in time?
I hope so. That's why I wrote the book now. We don't have a lot of time. If we wait for some disaster to occur, it's going to be too late.
But we can set neurotechnology on a course that can be empowering for humanity.
Farahany's book, The Battle for Your Brain, is out this week. There's also loads of neurotech content in Tech Review's archive:
The US military has been working to develop mind-reading devices for years. The aim is to create technologies that could help people with brain or nervous system damage, but also enable soldiers to direct drones and other devices by thought alone, as Paul Tullis reported in 2019.
Several multi-millionaires who made their fortune in tech have launched projects to link human brains to computers, whether to read our minds, communicate, or supercharge our brainpower. Antonio Regalado spoke to entrepreneur Bryan Johnson in 2017 about his plans to build a neural prosthetic for human intelligence enhancement. (Since then, Johnson has embarked on a quest to keep his body as young as possible.)
We can deliver jolts of electricity to the brain via headbands and caps, devices that are generally considered to be noninvasive. But given that they are probing our minds and potentially changing the way they work, perhaps we need to reconsider how invasive they really are, as I wrote in an earlier edition of The Checkup.
Elon Musk's company Neuralink has stated it has an eventual goal of creating "a whole-brain interface capable of more closely connecting biological and artificial intelligence." Antonio described how much progress the company and its competitors have made in a feature that ran in the Computing issue of the magazine.
When a person with an electrode implanted in their brain to treat epilepsy was accused of assaulting a police officer, law enforcement officials asked to see the brain data collected by the device. The data was exonerating; it turned out the person was having a seizure at the time. But brain data could just as easily be used to incriminate someone else, as I wrote in a recent edition of The Checkup.
From around the web
How would you feel about getting letters from your doctor that had been written by an AI? A pilot study showed that it is possible to generate clinic letters with a high overall correctness and humanness score with ChatGPT. (The Lancet Digital Health)
When Meredith Broussard found out that her hospital had used AI to help diagnose her breast cancer, she explored how the technology fares against human doctors. Not great, it turned out. (Wired)
A federal judge in Texas is being asked in a lawsuit to direct the US Food and Drug Administration to rescind its approval of mifepristone, one of two drugs used in medication abortions. A ruling against the FDA could diminish the authority of the organization and be catastrophic for public health. (The Washington Post)
The US Environmental Protection Agency has proposed regulation that would limit the levels of six "forever chemicals" in drinking water. Perfluoroalkyl and polyfluoroalkyl substances (PFAS) are synthetic chemicals that have been used to make products since the 1950s. They break down extremely slowly and have been found in the environment, and in the blood of people and animals, around the world. We still don't know how harmful they are. (EPA)
Would you pay thousands of dollars to have your jaw broken and remodeled to resemble that of Batman? The surgery represents yet another disturbing cosmetic trend. (GQ)