We’re not ready for AI, says the winner of a new $1m AI prize
Regina Barzilay, a professor at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), is the first winner of the Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, a new prize recognizing outstanding research in AI. Barzilay started her career working on natural-language processing. After surviving breast cancer in 2014, she switched her focus to machine-learning algorithms for detecting cancer and designing new drugs. The award will be presented in February 2021 by the Association for the Advancement of Artificial Intelligence (AAAI).
The $1 million prize money, provided by Chinese online education company Squirrel AI, which we have written about previously, puts the award on the same financial level as the Nobel Prize and the Turing Award in computer science. I talked to Barzilay on the phone about the prize, and about the promise and frustrations of AI.
Our conversation has been edited for length and clarity.
Congratulations on this award. What does it mean to you and to AI in general?
Thank you. You know, there are lots of areas where AI still isn't making a difference but could be. We use machine translation or recommender systems all the time, but nobody thinks of these as fancy technology, nobody asks about them. But with other areas of our life that are crucial to our well-being, such as health care, AI doesn't yet have the acceptance of society. I hope that this award, and the attention that comes with it, helps to change people's minds and lets them see the opportunities, and pushes the AI community to take the next steps.
What kinds of steps?
Back when technology moved from steam power to electricity, the first attempts to bring electricity to industry weren't very successful because people just tried to replicate steam engines. I think something similar is going on now with AI. We need to work out how to integrate it into many different areas: not just health care, but education, materials design, city planning, and so on. Of course, there is more to do on the technology side, including making better algorithms, but we are bringing this technology into highly regulated environments and we have not really looked at how to do that.
Right now AI is flourishing in places where the cost of failure is very low. If Google finds you a wrong translation or gives you a wrong link, it's fine; you can just go to the next one. But that's not going to work for a doctor. If you give patients the wrong treatment or miss a diagnosis, there are really serious implications. Many algorithms can actually do things better than humans. But we always trust our own intuitions, our own mind, more than something we don't understand. We need to give doctors reasons to trust AI. The FDA is looking at this problem, but I think it's very far from solved in the US, or anywhere else in the world.
In 2014 you were diagnosed with breast cancer. Did that change how you thought about your work?
Oh, yeah, absolutely. One of the things that happened when I went through treatment and spent inordinate amounts of time in the hospital is that the things I'd been working on now felt trivial. I thought: People are suffering. We can do something.
When I started treatment, I would ask what happens to patients like me, with my type of tumor and my age and this treatment. They would say: "Oh, there was this clinical trial, but you don't really fit it exactly." And I thought, breast cancer is a very common disease. There are so many patients, with so much accumulated data. How come we're not using it? But you can't get this information easily out of the system in US hospitals. It's there, but it's in text. And so I started using NLP to access it. I couldn't imagine any other field where people voluntarily throw away the data that's available. But that's what was going on in medicine.
Did hospitals jump at the chance to make more use of this data?
It took some time to find a doctor who'd work with me. I was telling people, if you have any problem, I will try to solve it. I don't need funding. Just give me a problem and the data. But it took me a while to find collaborators. You know, I wasn't a particularly popular character.
From this NLP work I then moved into predicting patient risk from mammograms, using image recognition to predict whether you will get cancer and how your disease is likely to progress.
Would these tools have made a difference if they had been available to you when you were diagnosed?
Absolutely. We can run this stuff on my mammograms from before my diagnosis, and it was already there; you can clearly detect it. It's not some kind of miracle: cancer doesn't grow from yesterday to today. It's a pretty long process. There are signs in the tissue, but the human eye has limited ability to detect what may be very small patterns. In my case it would have been visible two years before.
Why didn't the doctor see it?
It's a hard task. Every mammogram has white spots that may or may not be cancer, and a doctor has to decide which of these white spots needs to be biopsied. The doctor needs to balance acting on intuition versus harming a patient by doing biopsies that aren't needed. But this is exactly the type of decision that data-driven AI can help us make in a much more systematic way.
Which brings us back to the problem of trust. Do we need a technical fix, making tools more explainable, or do we need to educate the people who use them?
That's a great question. Some decisions would be really easy to explain to a human. If an AI detects cancer in an image, you can zoom in to the area that the model looks at when it makes the prediction. But if you ask a machine, as we increasingly are, to do things that a human can't, what exactly is the machine going to show you? It's like a dog, which can smell much better than us, explaining how it can smell something. We just don't have that capacity. I think that as the machines become much more advanced, this is the big question. What explanation would convince you if you on your own cannot solve this task?
So should we wait until AI can explain itself fully?
No. Think about how we answer life-and-death questions now. Most medical questions, such as how you will respond to this treatment or that medication, are answered using statistical models that can lead to mistakes. None of them are perfect.
It's the same with AI. I don't think it's good to wait until we develop perfect AI. I don't think that's going to happen anytime soon. The question is how to use its strengths and avoid its weaknesses.
Finally, why has AI not yet had much impact on covid-19?
AI is not going to solve all the big problems we have. But there have been some small examples. When all nonessential clinical services were scaled down earlier this year, we used an AI tool to identify which oncology patients in Boston should still go and have their yearly mammogram.
But the main reason AI hasn't been more useful is not the lack of technology but the lack of data. You know, I'm on the leadership team of MIT's J-Clinic, a center for AI in health care, and there were lots of us in April saying: We really want to do something-where can we get the data? But we couldn't get it. It was impossible. Even now, six months later, it's not obvious how we get data.
The second reason is that we weren't ready. Even in normal circumstances, when people are not under stress, it is difficult to adopt AI tools into a clinical process and make sure it's all properly regulated. In the current crisis, we simply don't have that capacity.
You know, I understand why doctors are conservative: people's lives are on the line. But I do hope that this will be a wake-up call to how unprepared we are to react fast to new threats. As much as I think that AI is the technology of the future, unless we figure out how to trust it, we will not see it moving forward.