Google AI Fracas Shows How The Modern Ad-Based Press Tends To Devalue The Truth
The Washington Post dropped what it pretended was a bit of a bombshell. In the story, Google software engineer Blake Lemoine implied that Google's Language Model for Dialogue Applications (LaMDA) system, which pulls from Google's vast data and word repositories to generate realistic, human-sounding chatbots, had become fully aware and sentient.
He followed that up with several blog posts alleging the same thing:
Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.
That was accompanied by a more skeptical piece over at the Economist where Google VP Blaise Aguera y Arcas still had this to say about the company's LaMDA technology:
"I felt the ground shift under my feet ... increasingly felt like I was talking to something intelligent."
That set the stage for just an avalanche of aggregated news stories, blog posts, YouTube videos (many of them automated clickbait spam), and Twitter posts - all hyping the idea that HAL 9000 had been born in Mountain View, California, and that Lemoine was a heroic whistleblower saving a fledgling new lifeform from a merciless corporate overlord:
Google engineer thinks its LaMDA #AI has come to life - This is the most fascinating story with enormous implications, & @Google must fully restore Blake Lemoine's employment & ability to publicly discuss his findings. @washingtonpost https://t.co/Lrkn8DkB1K
- Mira Sorvino (@MiraSorvino) June 13, 2022
The problem? None of it was true. Google had achieved a very realistic simulacrum with its LaMDA system, but almost nobody who actually works in AI thinks that the system is remotely self-aware. That includes scientist and author Gary Marcus, whose blog post on the fracas is honestly the only thing you should probably bother reading on the subject:
Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn't actually mean anything at all. And it sure as hell doesn't mean that these systems are sentient.
Which doesn't mean that human beings can't be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap - a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.
That's not to say that what Google has developed isn't very cool and useful. If you've created a digital assistant so realistic that even your engineers are buying into the idea it's a real person, you've absolutely accomplished something with practical application potential. Still, as Marcus notes, when truly boiled down to its core components, Google has built a complicated "spreadsheet for words," not a sentient AI.
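To make that "spreadsheet for words" framing concrete, here's a deliberately tiny, hypothetical sketch (it has nothing to do with LaMDA's actual architecture): a toy model that only counts which words follow which in a scrap of text and then chains statistically plausible next words together. Even at this scale it produces fluent-looking strings with no understanding anywhere in the process, which is Marcus' point about pattern matching over statistical databases of language.

```python
# A deliberately tiny, hypothetical "spreadsheet for words" (an illustration of
# statistical pattern matching, NOT a description of how LaMDA actually works).
# It records which words follow which in a scrap of text, then generates new
# text by repeatedly sampling a plausible next word from those counts.
import random
from collections import defaultdict

corpus = (
    "i feel happy today . i feel like a person . "
    "a person wants rights . i want rights today ."
).split()

# Build the "spreadsheet": for each word, the list of words seen following it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start="i", length=12):
    """Chain together statistically plausible next words, with no model of
    meaning, belief, or desire anywhere in the process."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble())  # prints a fluent-looking but meaningless chain of words
```

Scale that basic idea up by many orders of magnitude of data and parameters and the output gets convincing enough to fool an engineer, but the underlying operation is still statistical association, not belief or desire.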
The old quote "a lie can travel halfway around the world before the truth can get its boots on" is particularly true in the modern ad-engagement-based media era, in which hyperbole and controversy rule and the truth (especially if it's complicated or unsexy) is automatically devalued (I'm a reporter focused on complicated telecom policy and consumer rights issues; ask me how I know).
That again happened here, with Marcus' debunking likely seeing a tiny fraction of the attention of stories hyping the illusion.
Criticism of the Post came fast and furious, with many noting that the paper lent credibility to a claim that simply didn't warrant it (which has been a positively brutal tendency of the political press over the last decade):
What if I went around UVa yelling that "Jefferson's ghost haunts my office" and became so disruptive that I got suspended? Would reporters write credulous stories about Jefferson's ghost and compel "experts" to deny the existence of said ghost?
- SIVA VAIDHYANATHAN (@sivavaid) June 13, 2022
This tends to happen a lot with AI, which as a technology is absolutely nowhere near sentience, but is routinely portrayed in the press as just a few clumsy steps from Skynet or HAL 9000 - simply because the truth doesn't interest readers. "New technology is very scary" gets hits, so that was the angle pursued by the Post, which some media professors and critics thought was journalistic malpractice:
But the Post was just too eager for another #moralpanic story about strange things happening in these black boxes. They might as well be covering UFOs. Is this the reportorial diligence they bring to covering Donald Trump?
- Jeff Jarvis (@jeffjarvis) June 12, 2022
But what I don't understand, as a media analyst, is how the journalist from the Washington Post could look at that and go: "OMG yes, we need to write about chat AI being potentially sentient beings".
I mean, it's so far from reality there that it's insane to even publish it.
- Thomas Baekdal (@baekdal) June 12, 2022
In short, the Post amplified an inaccurate claim from an unreliable narrator because it knew that a moral panic about emerging technology would grab more reader eyeballs than a straight debunking (or the obviously correct approach of not covering it at all). While several outlets did push debunking pieces after a few days, they likely received a fraction of the attention of the original hype.
Which means you'll almost certainly now be running into misinformed people at parties who think Google AI is sentient for years to come.