Meet the AI expert who says we should stop using AI so much

by Tate Ryan-Mosley, MIT Technology Review

Meredith Broussard is unusually well placed to dissect the ongoing hype around AI. She's a data scientist and associate professor at New York University, and she's been one of the leading researchers in the field of algorithmic bias for years.

And though her own work leaves her buried in math problems, she's spent the last few years thinking about problems that mathematics can't solve. Her reflections have made their way into a new book about the future of AI. In More than a Glitch, Broussard argues that we are consistently too eager to apply artificial intelligence to social problems in inappropriate and damaging ways. Her central claim is that using technical tools to address social problems without considering race, gender, and ability can cause immense harm.

Broussard has also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she realized that an AI had played a part in her diagnosis, something that is increasingly common. That discovery led her to run her own experiment to learn more about how good AI was at cancer diagnostics.

We sat down to talk about what she discovered, as well as the problems with the use of technology by police, the limits of AI "fairness," and the solutions she sees for some of the challenges AI is posing. The conversation has been edited for clarity and length.

I was struck by a personal story you share in the book about AI as part of your own cancer diagnosis. Can you tell our readers what you did and what you learned from that experience?

At the beginning of the pandemic, I was diagnosed with breast cancer. I was not only stuck inside because the world was shut down; I was also stuck inside because I had major surgery. As I was poking through my chart one day, I noticed that one of my scans said, "This scan was read by an AI." I thought, Why did an AI read my mammogram? Nobody had mentioned this to me. It was just in some obscure part of my electronic medical record. I got really curious about the state of the art in AI-based cancer detection, so I devised an experiment to see if I could replicate my results. I took my own mammograms and ran them through an open-source AI in order to see if it would detect my cancer. What I discovered was that I had a lot of misconceptions about how AI in cancer diagnosis works, which I explore in the book.

[Once Broussard got the code working, AI did ultimately predict that her own mammogram showed cancer. Her surgeon, however, said the use of the technology was entirely unnecessary for her diagnosis, since human doctors already had a clear and precise reading of her images.]

One of the things I realized, as a cancer patient, was that the doctors and nurses and health-care workers who supported me in my diagnosis and recovery were so amazing and so crucial. I don't want a kind of sterile, computational future where you go and get your mammogram done and then a little red box will say "This is probably cancer." That's not actually a future anybody wants when we're talking about a life-threatening illness, but there aren't that many AI researchers out there who have their own mammograms.
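
Broussard describes the experiment in detail in the book, and the interview does not name the specific open-source tool she used. Purely to illustrate the kind of workflow she is talking about, loading a scan image and running it through a pretrained open-source classifier, here is a minimal, hypothetical sketch in Python; the model loader, input size, and output interpretation are placeholder assumptions, not her actual setup.

```python
# Hypothetical sketch: run a scan image through an open-source classifier.
# "load_pretrained_model" is a placeholder, not the tool Broussard used.
import torch
from PIL import Image
from torchvision import transforms

def classify_scan(image_path, model):
    # Preprocessing (channels, size, normalization) must match how the model was trained.
    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=1),
        transforms.Resize((512, 512)),
        transforms.ToTensor(),
    ])
    image = preprocess(Image.open(image_path)).unsqueeze(0)  # add a batch dimension
    model.eval()
    with torch.no_grad():
        logit = model(image)
    # Assumes the model outputs a single malignancy logit.
    return torch.sigmoid(logit).item()

# model = load_pretrained_model()                # placeholder loader
# print(classify_scan("mammogram.png", model))   # e.g., a high score would be flagged as suspicious
```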

You sometimes hear that once AI bias is sufficiently "fixed," the technology can be much more ubiquitous. You write that this argument is problematic. Why?

One of the big issues I have with this argument is this idea that somehow AI is going to reach its full potential, and that that's the goal that everybody should strive for. AI is just math. I don't think that everything in the world should be governed by math. Computers are really good at solving mathematical issues. But they are not very good at solving social issues, yet they are being applied to social problems. This kind of imagined endgame of "Oh, we're just going to use AI for everything" is not a future that I cosign on.

You also write about facial recognition. I recently heard an argument that the movement to ban facial recognition (especially in policing) discourages efforts to make the technology more fair or more accurate. What do you think about that?

I definitely fall in the camp of people who do not support using facial recognition in policing. I understand that's discouraging to people who really want to use it, but one of the things that I did while researching the book is a deep dive into the history of technology in policing, and what I found was not encouraging.

I started with the excellent book Black Software by [NYU professor of Media, Culture, and Communication] Charlton McIlwain, and he writes about IBM wanting to sell a lot of their new computers at the same time that we had the so-called War on Poverty in the 1960s. We had people who really wanted to sell machines looking around for a problem to apply them to, but they didn't understand the social problem. Fast-forward to today, and we're still living with the disastrous consequences of the decisions that were made back then.

Police are also no better at using technology than anybody else. If we were talking about a situation where everybody was a top-notch computer scientist who was trained in all of the intersectional sociological issues of the day, and we had communities that had fully funded schools and we had, you know, social equity, then it would be a different story. But we live in a world with a lot of problems, and throwing more technology at already overpoliced Black, brown, and poorer neighborhoods in the United States is not helping.

You discuss the limitations of data science in working on social problems, yet you are a data scientist yourself! How did you come to realize the limitations of your own profession?

I hang out with a lot of sociologists. I am married to a sociologist. One thing that was really important to me in thinking through the interplay between sociology and technology was a conversation that I had a few years ago with Jeff Lane, who is a sociologist and ethnographer [and an associate professor at the Rutgers School of Information].

We started talking about gang databases, and he told me something that I didn't know, which is that people tend to age out of gangs. You don't enter the gang and then just stay there for the rest of your life. And I thought, Well, if people are aging out of gang involvement, I will bet that they're not being purged from the police databases. I know how people use databases, and I know how sloppy we all are about updating databases.

So I did some reporting, and sure enough, there was no requirement that once you're not involved in a gang anymore, your information will be purged from the local police gang database. This just got me started thinking about the messiness of our digital lives and the way this could intersect with police technology in potentially dangerous ways.

Predictive grading is increasingly being used in schools. Should that worry us? When is it appropriate to apply prediction algorithms, and when is it not?

One of the consequences of the pandemic is we all got a chance to see up close how deeply boring the world becomes when it is totally mediated by algorithms. There's no serendipity. I don't know about you, but during the pandemic I absolutely hit the end of the Netflix recommendation engine, and there's just nothing there. I found myself turning to all of these very human methods to interject more serendipity into discovering new ideas.

To me, that's one of the great things about school and about learning: you're in a classroom with all of these other people who have different life experiences. As a professor, predicting student grades in advance is the opposite of what I want in my classroom. I want to believe in the possibility of change. I want to get my students further along on their learning journey. An algorithm that says "This student is this kind of student, so they're probably going to be like this" is counter to the whole point of education, as far as I'm concerned.

We sometimes fall in love with the idea of statistics predicting the future, so I absolutely understand the urge to make machines that make the future less ambiguous. But we do have to live with the unknown and leave space for us to change as people.

Can you tell me about the role you think that algorithmic auditing has in a safer, more equitable future?

Algorithmic auditing is the process of looking at an algorithm and examining it for bias. It's very, very new as a field, so this is not something that people knew how to do 20 years ago. But now we have all of these terrific tools. People like Cathy O'Neil and Deborah Raji are doing great work in algorithm auditing. We have all of these mathematical methods for evaluating fairness that are coming out of the FAccT conference community [which is dedicated to trying to make the field of AI more ethical]. I am very optimistic about the role of auditing in helping us make algorithms more fair and more equitable.
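
Broussard keeps the interview at a high level, but to make "mathematical methods for evaluating fairness" slightly more concrete, here is a minimal, hypothetical sketch of one common audit-style check: comparing a model's selection rate and true-positive rate across two groups (differences often labeled demographic parity and equal opportunity). The data and grouping attribute below are invented for illustration and are not tied to any system Broussard or the researchers she names have audited.

```python
# Hypothetical fairness check: compare prediction rates across two groups.
import numpy as np

def group_rates(y_true, y_pred, group_mask):
    """Return (selection rate, true-positive rate) for the rows in one group."""
    y_true, y_pred = y_true[group_mask], y_pred[group_mask]
    selection_rate = y_pred.mean()
    has_positives = (y_true == 1).any()
    true_positive_rate = y_pred[y_true == 1].mean() if has_positives else float("nan")
    return selection_rate, true_positive_rate

# Invented audit inputs: ground-truth labels, model predictions, and a binary
# attribute (e.g., membership in a protected group) for each record.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred  = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group_a = np.array([True, True, True, True, True, False, False, False, False, False])

sel_a, tpr_a = group_rates(y_true, y_pred, group_a)
sel_b, tpr_b = group_rates(y_true, y_pred, ~group_a)

print(f"Demographic parity difference: {abs(sel_a - sel_b):.2f}")
print(f"Equal opportunity difference:  {abs(tpr_a - tpr_b):.2f}")
```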

In your book, you critique the phrase "black box" in reference to machine learning, arguing that it incorrectly implies it's impossible to describe the workings inside a model. How should we talk about machine learning instead?

That's a really good question. All of my talk about auditing sort of explodes our notion of the "black box." As I started trying to explain computational systems, I realized that the "black box" is an abstraction that we use because it's convenient and because we don't often want to get into long, complicated conversations about math. Which is fair! I go to enough cocktail parties that I understand you do not want to get into a long conversation about math. But if we're going to make social decisions using algorithms, we need to not just pretend that they are inexplicable.

One of the things that I try to keep in mind is that there are things that are unknown in the world, and then there are things that are unknown to me. When I'm writing about complex systems, I try to be really clear about what the difference is.

When we're writing about machine-learning systems, it is tempting to not get into the weeds. But we know that these systems are being discriminatory. The time has passed for reporters to just say, "Oh, we don't know what the potential problems are in the system." We can guess what the potential problems are and ask the tough questions. Has this system been evaluated for bias based on gender, based on ability, based on race? Most of the time the answer is no, and that needs to change.

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech goes on sale March 14, 2023.
