
The Unperson Of 2023

by Mike Masnick
from Techdirt

2023 is over. Taylor Swift was Time's Person of the Year, beating out candidates like Jerome Powell, who may have stuck the economic soft landing but can't hit the high notes. Only a fool would challenge the decision, but I would like to nominate 2023's Unperson of the Year - ChatGPT, the neural-network-based large language model that launched only 13 months ago and took the world by storm. My claim is not that we need to pay more attention to it; with jeremiads about risks ranging from plagiarism and mass unemployment to the annihilation of the human species, we haven't been able to shut up about it. Instead, we need to pay a different kind of attention. Something important just happened, and I am not sure we noticed.

For the first time in history, humans had a world-altering fact forced on them: sentences do not imply sentience. The crown jewel of our species - the quality that supposedly entitled us to our special moral status, the ability to manipulate fluently highly complex, abstract language about a near-infinite number of subjects - is, undeniably, no longer confined to humans. ChatGPT did what parrots with large vocabularies and chimps that have learned ASL could not. It produced language that might pass as human-created. And it did so not in a philosophy hypothetical or a computer lab, but in real time, for hundreds of millions of people.

Most of us believe that being human confers a special moral status, even if we disagree about how much that allows us to prefer our interests over those of other living things. But why? Some believe it is because of a divine command that gave us the world and its inhabitants in sole and complete dominion. But those seeking a secular justification had to root it in our capabilities - something we have or do that makes us unique. There have been many candidates, from tool use, to a conception of past and future, to notions of morality and beauty - all challenged by studies of non-human animals suggesting they are far more capable than we once imagined. But the ability we principally focus on is language, or at least the kind of consciousness that language seems to show. More than 2,300 years ago, Aristotle laid out the basic argument. Language, he claimed, allows reasoning about expediency: how best to achieve our goals. But it also enables reasoning about justice: which goals are right and just. This is why he believed that only the human species has morality. From that capacity come the morally freighted associations so important to Greek philosophers - the family and the polis - but also the state's version of morality: the law. The human being is a moral, social being. And language is the root of it all.

More than two millennia later, when Alan Turing tried to answer the question "can machines think?", he turned to the same capability for his answer. "The imitation game," popularly known as the Turing Test, argued that the best assessment of whether a machine was conscious would be its ability to converse fluently enough to fool a human. That was more than seventy years ago, and we now know he was wrong. Large language models like ChatGPT can do exactly that, yet they are "predict the next word" machines - masters of syntax, not semantics. They are not conscious. (This does not mean machine consciousness is impossible.) Our deeply ingrained reflex to impute consciousness to language users will tell us otherwise. Blake Lemoine, a Google engineer, was the first to succumb. He became convinced that Google's chatbot was conscious. (Google fired him.) Most saw it as just a funny story, but it was a harbinger. Chatbots feature most prominently in our lives as writing aids, research assistants and cheating tools. But their significance has eluded us: they pose a fundamental challenge to the unquestioned species-exceptionalism on which our vision of the world depends.
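To see what "predict the next word" means in its barest form, consider a toy sketch in Python. This is purely illustrative - real systems like ChatGPT use neural networks with billions of parameters, and the corpus, names and greedy-choice rule below are my own stand-ins, not anything from the essay or from OpenAI:

    # A toy "predict the next word" machine: count which word follows which
    # in a tiny corpus, then extend a prompt by always picking the most
    # frequent successor. It shuffles symbols it does not understand -
    # syntax without semantics. (Illustrative only; actual LLMs are not
    # bigram counters.)
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat saw the dog .").split()

    # For each word, tally how often every other word follows it.
    successors = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current][nxt] += 1

    def continue_text(prompt, n_words=5):
        words = prompt.split()
        for _ in range(n_words):
            options = successors.get(words[-1])
            if not options:
                break
            words.append(options.most_common(1)[0][0])  # greedy next word
        return " ".join(words)

    print(continue_text("the dog"))  # -> "the dog sat on the cat sat"

The output looks fluent in miniature, but nothing in the program knows what a dog or a mat is - which is precisely the syntax-versus-semantics point.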

How do we respond? Four approaches present themselves: refinement, denial, humility, and reflection.

Refinement: First, we could refine our vision of consciousness - insisting on semantic, not just syntactical, comprehension. Humans are still special, but the grounds for their specialness have changed. Maybe an "embodied intelligence" - for example, an AI incarnated as a robot that learns meaning by interacting with the world - would pass a Turing-plus test. Maybe an image generator that "grew up" experiencing the world in multiple ways, not merely scanning pictures of it, would cross our threshold for art. The cognitive scientists George Lakoff and Mark Johnson make a decent case that human consciousness depends on exactly such an "embodied mind," and some computer scientists are pursuing its machine analogues, trying to develop robots that learn from interaction with the world as children do. Not convinced? Maybe there is some other characteristic that we have and machines have not yet achieved. There is a nagging worry, though. Are we just nervously redrawing the boundaries of our species-island again and again as the encroaching tides creep higher? We have done that before - think of the assimilation of evolution into religious ideas about humanity.

Denial: There is an easier way to retain the special status of the human species, of course. The second approach is simple definitional denial that anything non-biological could ever be conscious, coupled with a claim that only our species has that capacity in full measure. The philosopher John Searle produced a sophisticated version of this argument with his Chinese Room thought experiment, intended as a response to Turing's imitation game. Imagine a person who does not understand Chinese. They are inside a sealed room and receive slips of paper on which messages in Chinese have been written. They have also been given an elaborate rule-set which tells them to respond to messages containing certain Chinese characters with notes of their own, carrying just the right set of ideograms to give the illusion of communication. The person receiving these notes would imagine a Chinese-speaking consciousness inside the room, but no such consciousness exists. So far, Searle's argument is perfectly reasonable. In fact, he described with remarkable prescience why ChatGPT's "predict the next word" neural networks do not amount to an understanding of the meaning of the apparently cogent messages they generate. (The same argument works for AI image generators.) His mistake comes when he tries to argue that all machine intelligences must be of this kind, doomed forever to syntactical imitation rather than semantic comprehension.
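Stripped to its mechanics, the room is just a lookup procedure. A hypothetical sketch makes the point - the rules and messages here are invented stand-ins, and a convincing rule-set would be vastly larger:

    # Searle's room as pure symbol-shuffling: a rule-book maps incoming
    # strings to outgoing strings. Nothing here attaches meaning to any
    # character. (The rules and messages are invented stand-ins.)
    RULES = {
        "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字?": "我没有名字.",    # "What's your name?" -> "I have no name."
    }

    def room(note: str) -> str:
        # The occupant matches shapes against the rule-book and copies out
        # the prescribed reply.
        return RULES.get(note, "请再说一遍.")  # default: "Please say that again."

    print(room("你好吗?"))  # a fluent reply no one in the room understands

Nothing in the room - occupant, rule-book, or program - understands a single character; that is exactly the distinction Searle was drawing.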

How could simple 1-or-0 binary circuits ever yield consciousness?! This may sound similar to a favorite argument of those who denied human evolution, a version of the fallacy of composition: single-celled organisms aren't conscious; therefore no conscious being could evolve from such simple beginnings! Of course that argument was wrong. Might this one be also? Not all AIs are going to have a chatbot's architecture. Searle's response is disappointing; he simply makes the oracular pronouncement that "[c]onsciousness is a biological phenomenon like photosynthesis, digestion or mitosis." As an explanation of why we should believe that consciousness is irreducibly biological, this falls short. Assuming your conclusion is fun - nice work if you can get it - but it does not substitute for actual argument. If you admit that our consciousness has a material basis, arising from physical processes in the brain, then it takes chutzpah to believe that only our biological brains could ever produce such processes. Denial doesn't seem like a great option.

Humility: Third, we could embrace humility. Maybe much of our own quotidian consciousness is more like a chatbot's imitation than we like to think - a mindless invocation of repetitive patterns without intentionality. In a moment of devastating bathos, Stephen Wolfram said that we had discovered that language was "computationally shallower" than we had thought. One imagines a New Yorker cartoon of two robots gathered around humanity's grave: "We found them to be computationally shallow." What an epitaph!

Reflection: The fourth and final option might be the hardest but also the most promising. We could use the happenings of last year as a spur to reflection, combining the insights of refinement and of humility. Machine learning could teach us more about ourselves - the mirror looking back at us. This could prompt an anxious reappraisal of our species-exceptionalism. It might make us focus more on new scientific ideas about consciousness, like Global Neuronal Workspace Theory. It might make us worry about whether we are treating the great apes and the cetaceans correctly. It might lead us to assess what rights, if any, we should confer on artificially created beings - even if those rights were granted as a matter of convenience, not moral kinship. (Looking at you, corporations.) Better to think about that now than when HAL is knocking on our doors.

There is a fifth approach, of course. We could just ignore it all - a truly human capability. We could use our chatbots to write scripts about crabs fighting hot dogs on the moon, or just to cheat on exams, and forget the rest. We could, in other words, just shake it off. With no disrespect to Ms. Swift, that would be a shame. The Unperson of 2023 has lessons to teach us.

James Boyle is the William Neal Reynolds Professor of Law at Duke Law School. His new book, The Line: AI and the Future of Personhood, will be published under a Creative Commons License by MIT Press in 2024. Preprints of the introduction and first two chapters can be found here.
