Stop talking about AI ethics. It’s time to talk about power.
At the turn of the 20th century, a German horse took Europe by storm. Clever Hans, as he was known, could seemingly perform all sorts of tricks previously limited to humans. He could add and subtract numbers, tell time and read a calendar, even spell out words and sentences, all by stamping out the answer with a hoof. "A" was one tap; "B" was two; 2+3 was five. He was an international sensation, and proof, many believed, that animals could be taught to reason as well as humans.
The problem was Clever Hans wasn't really doing any of these things. As investigators later discovered, the horse had learned to provide the right answer by observing changes in his questioners' posture, breathing, and facial expressions. If the questioner stood too far away, Hans would lose his abilities. His intelligence was only an illusion.
The story is often told as a cautionary tale for AI researchers evaluating the capabilities of their algorithms: a system isn't always as intelligent as it seems, so take care to measure it properly.
But in her new book, Atlas of AI, leading AI scholar Kate Crawford flips this moral on its head. The problem, she writes, was with the way people defined Hans's achievements: "Hans was already performing remarkable feats of interspecies communication, public performance, and considerable patience, yet these were not recognized as intelligence."
So begins Crawford's exploration into the history of artificial intelligence and its impact on our physical world. Each chapter seeks to stretch our understanding of the technology by unveiling how narrowly we've viewed and defined it.
Crawford does this by taking us on a global journey, from the mines where the rare earth elements used in computer manufacturing are extracted to the Amazon fulfillment centers where human bodies have been mechanized in the company's relentless pursuit of growth and profit. In chapter one, she recounts driving a van from the heart of Silicon Valley to a tiny mining community in Nevada's Clayton Valley. There she investigates the destructive environmental practices required to obtain the lithium that powers the world's computers. It's a forceful illustration of how close these two places are in physical space yet how far apart they are in wealth.
By grounding her analysis in such physical investigations, Crawford disposes of the euphemistic framing of artificial intelligence as simply "efficient software running in the cloud." Her close-up, vivid descriptions of the earth and labor that AI is built on, and the deeply problematic histories behind it, make it impossible to keep speaking about the technology purely in the abstract.
In chapter four, for example, Crawford takes us on another trip, this one through time rather than space. To explain the history of the field's obsession with classification, she visits the Penn Museum in Philadelphia, where she stares at rows and rows of human skulls.
The skulls were collected by Samuel Morton, a 19th-century American craniologist who believed it was possible to "objectively" divide them, by their physical measurements, into the five "races" of the world: African, Native American, Caucasian, Malay, and Mongolian. Crawford draws parallels between Morton's work and the modern AI systems that continue to classify the world into fixed categories.
These classifications are far from objective, she argues. They impose a social order, naturalize hierarchies, and magnify inequalities. Seen through this lens, AI can no longer be considered an objective or neutral technology.
In her 20-year career, Crawford has contended with the real-world consequences of large-scale data systems, machine learning, and artificial intelligence. In 2017, with Meredith Whittaker, she cofounded the research institute AI Now as one of the first organizations dedicated to studying the social implications of these technologies. She is now a professor at USC Annenberg in Los Angeles, the inaugural visiting chair in AI and justice at the École Normale Supérieure in Paris, and a senior principal researcher at Microsoft Research.
Five years ago, Crawford says, she was still working to introduce the mere idea that data and AI were not neutral. Now the conversation has evolved, and AI ethics has blossomed into its own field. She hopes her book will help it mature even further.
I sat down with Crawford to talk about her book.
The following has been edited for length and clarity.
Why did you choose to do this book project, and what does it mean to you?

Crawford: So many of the books that have been written about artificial intelligence really just talk about very narrow technical achievements. And sometimes they write about the great men of AI, but that's really all we've had in terms of contending with what artificial intelligence is.
I think it's produced this very skewed understanding of artificial intelligence as purely technical systems that are somehow objective and neutral and, as Stuart Russell and Peter Norvig say in their textbook, as intelligent agents that make the best decision of any possible action.
I wanted to do something very different: to really understand how artificial intelligence is made in the broadest sense. This means looking at the natural resources that drive it, the energy that it consumes, the hidden labor all along the supply chain, and the vast amounts of data that are extracted from every platform and device that we use every day.
In doing that, I wanted to really open up this understanding of AI as neither artificial nor intelligent. It's the opposite of artificial. It comes from the most material parts of the Earth's crust and from human bodies laboring, and from all of the artifacts that we produce and say and photograph every day. Neither is it intelligent. I think there's this great original sin in the field, where people assumed that computers are somehow like human brains and if we just train them like children, they will slowly grow into these supernatural beings.
That's something that I think is really problematic: that we've bought this idea of intelligence when in actual fact, we're just looking at forms of statistical analysis at scale that have as many problems as the data they're given.
Was it immediately obvious to you that this is how people should be thinking about AI? Or was it a journey?

It's absolutely been a journey. I'd say one of the turning points for me was back in 2016, when I started a project called "Anatomy of an AI System" with Vladan Joler. We met at a conference specifically about voice-enabled AI, and we were trying to effectively draw what it takes to make an Amazon Echo work. What are the components? How does it extract data? What are the layers in the data pipeline?
We realized, well-actually, to understand that, you have to understand where the components come from. Where did the chips get produced? Where are the mines? Where does it get smelted? Where are the logistical and supply chain paths?
Finally, how do we trace the end of life of these devices? How do we look at where the e-waste tips are located in places like Malaysia and Ghana and Pakistan? What we ended up with was this very time-consuming two-year research project to really trace those material supply chains from cradle to grave.
When you start looking at AI systems on that bigger scale, and on that longer time horizon, you shift away from these very narrow accounts of "AI fairness" and "ethics" to saying: these are systems that produce profound and lasting geomorphic changes to our planet, as well as increase the forms of labor inequality that we already have in the world.
So that made me realize that I had to shift from an analysis of just one device, the Amazon Echo, to applying this sort of analytic to the entire industry. That to me was the big task, and that's why Atlas of AI took five years to write. There's such a need to actually see what these systems really cost us, because we so rarely do the work of actually understanding their true planetary implications.
The other thing I would say that's been a real inspiration is the growing field of scholars who are asking these bigger questions around labor, data, and inequality. Here I'm thinking of Ruha Benjamin, Safiya Noble, Mar Hicks, Julie Cohen, Meredith Broussard, Simone Browne, and the list goes on. I see this as a contribution to that body of knowledge by bringing in perspectives that connect the environment, labor rights, and data protection.
You travel a lot throughout the book. Almost every chapter starts with you actually looking around at your surroundings. Why was this important to you?

It was a very conscious choice to ground an analysis of AI in specific places, to move away from these abstract "nowheres" of algorithmic space, where so many of the debates around machine learning happen. And hopefully it highlights the fact that when we don't do that, when we just talk about these "nowhere spaces" of algorithmic objectivity, that is also a political choice, and it has ramifications.
In terms of threading the locations together, this is really why I started thinking about this metaphor of an atlas, because atlases are unusual books. They're books that you can open up and look at the scale of an entire continent, or you can zoom in and look at a mountain range or a city. They give you these shifts in perspective and shifts in scale.
There's this lovely line that I use in the book from the physicist Ursula Franklin. She writes about how maps join together the known and the unknown in these methods of collective insight. So for me, it was really drawing on the knowledge that I had, but also thinking about the actual locations where AI is being constructed very literally from rocks and sand and oil.
What kind of feedback has the book received?

One of the things that I've been surprised by in the early responses is that people really feel like this kind of perspective was overdue. There's a moment of recognition that we need to have a different sort of conversation than the ones that we've been having over the last few years.
We've spent far too much time focusing on narrow tech fixes for AI systems and always centering technical responses and technical answers. Now we have to contend with the environmental footprint of the systems. We have to contend with the very real forms of labor exploitation that have been happening in the construction of these systems.
And we also are now starting to see the toxic legacy of what happens when you just rip out as much data off the internet as you can, and just call it ground truth. That kind of problematic framing of the world has produced so many harms, and as always, those harms have been felt most of all by communities who were already marginalized and not experiencing the benefits of those systems.
What do you hope people will start to do differently?

I hope it's going to be a lot harder to have these cul-de-sac conversations where terms like "ethics" and "AI for good" have been so completely denatured of any actual meaning. I hope it pulls aside the curtain and says, let's actually look at who's running the levers of these systems. That means shifting away from just focusing on things like ethical principles to talking about power.
How do we move away from this ethics framing?

If there's been a real trap in the tech sector for the last decade, it's that the theory of change has always centered engineering. It's always been, "If there's a problem, there's a tech fix for it." And only recently are we starting to see that broaden out to "Oh, well, if there's a problem, then regulation can fix it. Policymakers have a role."
But I think we need to broaden that out even further. We have to say also: Where are the civil society groups, where are the activists, where are the advocates who are addressing issues of climate justice, labor rights, data protection? How do we include them in these discussions? How do we include affected communities?
In other words, how do we make this a far deeper democratic conversation around how these systems are already influencing the lives of billions of people in primarily unaccountable ways that live outside of regulation and democratic oversight?
In that sense, this book is trying to de-center tech and to start asking bigger questions: What sort of world do we want to live in?
What sort of world do you want to live in? What kind of future do you dream of?

I want to see the groups that have been doing the really hard work of addressing questions like climate justice and labor rights draw together, and realize that these previously quite separate fronts for social change and racial justice have really shared concerns and a shared ground on which to coordinate and to organize.
Because we're looking at a really short time horizon here. We're dealing with a planet that's already under severe strain. We're looking at a profound concentration of power into extraordinarily few hands. You'd really have to go back to the early days of the railways to see another industry that is so concentrated, and now you could even say that tech has overtaken that.
So we have to contend with ways in which we can pluralize our societies and have greater forms of democratic accountability. And that is a collective-action problem. It's not an individual-choice problem. It's not like we choose the more ethical tech brand off the shelf. It's that we have to find ways to work together on these planetary-scale challenges.
Update: The description of the AI Now institute has been clarified.