The messy morality of letting AI make life-and-death decisions
In a workshop in Rotterdam in the Netherlands, Philip Nitschke, "Dr. Death" or "the Elon Musk of assisted suicide" to some, is overseeing the last few rounds of testing on his new Sarco machine before shipping it to Switzerland, where he says its first user is waiting.
This is the third prototype that Nitschke's nonprofit, Exit International, has 3D-printed and wired up. Number one has been exhibited in Germany and Poland. "Number two was a disaster," he says. Now he's ironed out the manufacturing errors and is ready to launch: "This is the one that will be used."
A coffin-size pod with Star Trek stylings, the Sarco is the culmination of Nitschke's 25-year campaign to "demedicalize death" through technology. Sealed inside the machine, a person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press that button?
Here's what will happen: The Sarco will fill with nitrogen gas. Its occupant will pass out in less than a minute and die by asphyxiation in around five.
A recording of that short, final interview will then be handed over to the Swiss authorities. Nitschke has not approached the Swiss government for approval, but Switzerland is one of a handful of countries that have legalized assisted suicide. It is permitted as long as people who wish to die perform the final act themselves.
Nitschke wants to make assisted suicide as unassisted as possible, giving people who have chosen to kill themselves autonomy, and thus dignity, in their final moments. "You really don't need a doctor to die," he says.
Because the Sarco uses nitrogen, a widely available gas, rather than the barbiturates that are typically used in euthanasia clinics, it does not require a physician to administer an injection or sign off on lethal drugs.
At least that's the idea. Nitschke has not yet been able to sidestep the medical establishment fully. Switzerland requires that candidates for euthanasia demonstrate mental capacity, Nitschke says, which is typically assessed by a psychiatrist. "There's still a belief that if a person is asking to die, they've got some sort of undiagnosed mental illness," he says. "That it's not rational for a person to seek death."
He believes he has a solution, however. Exit International is working on an algorithm that Nitschke hopes will allow people to perform a kind of psychiatric self-assessment on a computer. In theory, if a person passed this online test, the program would provide a four-digit code to activate the Sarco. "That's the goal," says Nitschke. "Having said all that, the project is proving very difficult."
Nitschke's mission may seem extreme, even outrageous, to some. And his belief in the power of algorithms may prove to be overblown. But he is not the only one looking to involve technology, and AI in particular, in life-or-death decisions.
Yet where Nitschke sees AI as a way to empower individuals to make the ultimate choice by themselves, others wonder if AI can help relieve humans from the burden of such choices. AI is already being used to triage and treat patients across a growing number of health-care fields. As algorithms become an increasingly important part of care, we must ensure that their role is limited to medical decisions, not moral ones.
Medical care is a limited resource. Patients must wait for appointments to get tests or treatment. Those in need of organ transplants must wait for suitable hearts or kidneys. Vaccines must be rolled out first to the most vulnerable (in countries that have them). And during the worst of the pandemic, when hospitals faced a shortage of beds and ventilators, doctors had to make snap decisions about who would receive immediate care and who would not, with tragic consequences.
The covid crisis brought the need for such choices into harsh focus, and it led many to wonder whether algorithms could help. Hospitals around the world bought new AI tools or co-opted existing ones to assist with triage. Some hospitals in the UK that had been exploring the use of AI to screen chest x-rays jumped on those tools as a fast, cheap way to identify the most severe covid cases. Suppliers of this technology, such as Qure.ai, based in Mumbai, India, and Lunit, based in Seoul, South Korea, took on contracts in Europe, the US, and Africa. Diagnostic Robotics, an Israeli firm that supplies AI-based triage tools to hospitals in Israel, India, and the US, has said demand for its technology jumped sevenfold in the first year of the pandemic. Business in health-care AI has been booming ever since.
This rush to automate raises big questions with no easy answers. What kinds of decisions is it appropriate for an algorithm to make? How should these algorithms be built? And who gets a say in how they work?
Rhema Vaithianathan, the director of the Centre for Social Data Analytics and a professor at the Auckland University of Technology in New Zealand, who focuses on tech in health and welfare, thinks it is right that people are asking AI to help make big decisions. "We should be addressing problems that clinicians find really hard," she says.
One of the projects she is working on involves a teen mental-health service, where young people are diagnosed and treated for self-harming behaviors. There is high demand for the clinic, and so it needs to maintain a high turnover, discharging patients as soon as possible so that more can be brought in.
Doctors face the difficult choice between keeping existing patients in care and treating new ones. "Clinicians don't discharge people because they're super scared of them self-harming," says Vaithianathan. "That's their nightmare scenario."
Vaithianathan and her colleagues have tried to develop a machine-learning model that can predict which patients are most at risk of future self-harming behavior and which are not, using a wide range of data, including health records and demographic information, to give doctors an additional resource in their decision-making. "I'm always looking for those cases where a clinician is struggling and would appreciate an algorithm," she says.
The project is in its early stages, but so far the researchers have found that there may not be enough data to train a model that can make accurate predictions. They will keep trying. The model does not have to be perfect to help doctors, Vaithianathan says.
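To make the idea concrete, here is a minimal sketch of the general kind of model described here: a classifier trained on tabular clinical records to estimate each patient's risk after discharge. The file name, column names, features, and label are all invented for illustration; none of them come from Vaithianathan's project or any real health record.

```python
# Hypothetical sketch of a discharge-risk classifier on tabular clinical data.
# All names below (episodes.csv, the feature columns, the label) are illustrative
# assumptions, not details of any real project or dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Each row is one treatment episode; the label marks whether the patient
# returned to care within some window after discharge (purely illustrative).
df = pd.read_csv("episodes.csv")
features = ["age", "num_prior_episodes", "length_of_stay_days", "medication_count"]
X, y = df[features], df["returned_to_care"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# A held-out AUC alone would not justify clinical use; as the review cited below
# notes, no published readmission model has yet cleared that bar.
probs = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", roc_auc_score(y_test, probs))
```

A score like this would be a decision aid for clinicians, not a decision in itself, which is precisely the distinction the rest of this story turns on.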
They are not the only team trying to predict the risk of discharging patients. A review published in 2021 highlighted 43 studies by researchers claiming to use machine-learning models to predict whether patients will be readmitted or die after they leave hospitals in the US. None were accurate enough for clinical use, but the authors look forward to a time when such models "improve quality of care and reduce health-care costs."
And yet even when AI seems accurate, scholars and regulators alike call for caution. For one thing, the data that algorithms are built on, and the way they use it, are human artifacts, riddled with prejudice. Health data overrepresents people who are white and male, for example, which skews its predictive power. And the models offer a veneer of objectivity that can lead people to pass the buck on ethical decisions, trusting the machine rather than questioning its output.
This ongoing problem is a theme in David Robinson's new book, Voices in the Code, about the democratization of AI. Robinson, a visiting scholar at the Social Science Matrix at the University of California, Berkeley, and a member of the faculty of Apple University, tells the story of Belding Scribner. In 1960 Scribner, a nephrologist in Seattle, inserted a short Teflon tube known as a shunt into some of his patients' arms to prevent their blood from clotting while they underwent dialysis treatment. The innovation allowed people with kidney disease to stay on dialysis indefinitely, transforming kidney failure from a fatal condition into a long-term illness.
When word got out, Scribner was inundated with requests for treatment. But he could not take everyone. Whom should he help and whom should he turn away? He soon realized that this wasn't a medical decision but an ethical one. He set up a committee of laypeople to decide. Of course, their choices weren't perfect. The prejudices at the time led the committee to favor married men with jobs and families, for example.
The way Robinson tells it, the lesson we should take from Scribner's work is that certain processes (bureaucratic, technical, and algorithmic) can make difficult questions seem neutral and objective. They can obscure the moral aspects of a choice, and the sometimes awful consequences.
"Bureaucracy itself can serve as a way of converting hard moral problems into boring technical ones," Robinson writes. This phenomenon predates computers, he says, but software-based systems can accelerate and amplify this trend. "Quantification can be a moral anesthetic, and computers make that anesthetic easier than ever to administer."
Whatever the process, we need to let that moral anesthetic wear off and examine the painful implications of the decision at hand. For Scribner, that meant asking an open panel of laypeople, rather than a group of ostensibly objective doctors meeting behind closed doors, whom to save. Today, it could mean asking for high-stakes algorithms to be audited. For now, the auditing of algorithms by independent parties is more wish-list item than standard practice. But, again using the example of kidney disease, Robinson shows how it can be done.
By the 2000s, an algorithm had been developed in the US to identify recipients for donated kidneys. But some people were unhappy with how the algorithm had been designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm had been designed to allocate kidneys in a way that maximized years of life saved. This favored younger, wealthier, and whiter patients, Grawe and other patients argued.
Such bias in algorithms is common. What's less common is for the designers of those algorithms to agree that there is a problem. After years of consultation with laypeople like Grawe, the designers found a less biased way to maximize the number of years saved: by, among other things, considering overall health in addition to age. One key change was that the majority of donors, who are often people who have died young, would no longer be matched only to recipients in the same age bracket. Some of those kidneys could now go to older people if they were otherwise healthy. As with Scribner's committee, the algorithm still wouldn't make decisions that everyone would agree with. But the process by which it was developed is harder to fault.
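To illustrate the shift in the objective, here is a toy sketch of ranking candidates by estimated life-years gained from a transplant, taking overall health into account rather than matching strictly by age bracket. The survival formulas, weights, and candidates are all invented for illustration; the real US allocation system is far more detailed and is not reproduced here.

```python
# Toy illustration of the change described above: rank candidates by estimated
# extra life-years a transplant would add, rather than by donor/recipient age bracket.
# Every number and formula here is a made-up placeholder, not clinical fact.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    age: int
    years_on_dialysis: float
    diabetic: bool


def expected_years_gained(c: Candidate) -> float:
    """Crude, hypothetical estimate of life-years gained from a transplant."""
    # Invented baseline: remaining life expectancy with a working transplant...
    with_transplant = max(
        0.0, 40 - 0.4 * c.age - 2.0 * c.years_on_dialysis - (5.0 if c.diabetic else 0.0)
    )
    # ...versus remaining life expectancy if the candidate stays on dialysis.
    on_dialysis = max(
        0.0, 20 - 0.25 * c.age - 1.0 * c.years_on_dialysis - (4.0 if c.diabetic else 0.0)
    )
    return with_transplant - on_dialysis


candidates = [
    Candidate("A", age=32, years_on_dialysis=1.0, diabetic=False),
    Candidate("B", age=64, years_on_dialysis=2.0, diabetic=False),  # older but otherwise healthy
    Candidate("C", age=45, years_on_dialysis=6.0, diabetic=True),
]

# Note that the older, healthier candidate B outranks the younger candidate C,
# something strict age-bracket matching would not allow.
for c in sorted(candidates, key=expected_years_gained, reverse=True):
    print(c.name, round(expected_years_gained(c), 1))
```

The point of such a sketch is only that the objective function encodes a value judgment: change what you maximize, or which factors you let into the formula, and you change who gets a kidney.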
Nitschke, too, is asking hard questions.
A former doctor who burned his medical license after a years-long legal dispute with the Australian Medical Board, Nitschke has the distinction of being the first person to legally administer a voluntary lethal injection to another human. In the nine months between July 1996, when the Northern Territory of Australia brought in a law that legalized euthanasia, and March 1997, when Australia's federal government overturned it, Nitschke helped four of his patients to kill themselves.
The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: "If I were to keep a pet animal in the same condition I am in, I would be prosecuted."
Nitschke wanted to support his patients' decisions. Even so, he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. "I didn't want to sit there and give the injection," he says. "If you want it, you press the button."
The machine wasn't much to look at: it was essentially a laptop hooked up to a syringe. But it achieved its purpose. The Sarco is an iteration of that original device, which was later acquired by the Science Museum in London. Nitschke hopes an algorithm that can carry out a psychiatric assessment will be the next step.
But there's a good chance those hopes will be dashed. Creating a program that can assess someone's mental health is an unsolved problem, and a controversial one. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. "You can get a dozen different answers from a dozen different psychiatrists," he says. In other words, there is no common ground on which an algorithm could even be built.
But that's not the takeaway here. Like Scribner, Nitschke is asking what counts as a medical decision, what counts as an ethical one, and who gets to choose. Scribner thought that laypeople, representing society as a whole, should choose who received dialysis, because when patients have more or less equal chances of survival, who lives and who dies is no longer a technical question. As Robinson describes it, society must be responsible for such decisions, although the process can still be encoded in an algorithm if it's done inclusively and transparently. For Nitschke, assisted suicide is also an ethical decision, one that individuals must make for themselves. The Sarco, and the theoretical algorithm he imagines, would only protect their ability to do so.
AI will become increasingly useful, perhaps essential, as populations boom and resources stretch. Yet the real work will be acknowledging the awfulness and arbitrariness of many of the decisions AI will be called on to make. And that's on us.
For Robinson, devising algorithms is a bit like legislation: "In a certain light, the question of how best to make software code that will govern people is just a special case of how best to make laws. People disagree about the merits of different ways of making high-stakes software, just as they disagree about the merits of different ways of making laws." And it is people, in the broadest sense, who are ultimately responsible for the laws we have.