
2023: The Year Of AI Panic

by Mike Masnick, from Techdirt

In 2023, the extreme ideology of "human extinction from AI" became one of the most prominent trends. It was followed by extreme regulation proposals.

As we enter 2024, let's take a moment to reflect: How did we get here?

[Image: Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0]

2022: Public release of LLMs

The first big news story on LLMs (Large Language Models) can be traced to a (now famous) Google engineer. In June 2022, Blake Lemoine went on a media tour to claim that Google's LaMDA (Language Model for Dialogue Applications) is "sentient." Lemoine compared LaMDA to an "8-year-old kid that happens to know physics."

This news cycle was met with skepticism: "Robots can't think or feel, despite what the researchers who build them want to believe." "A.I. is not sentient. Why do people say it is?"

In August 2022, OpenAI made DALL-E 2 accessible to 1 million people.

In November 2022, the company launched a user-friendly chatbot named ChatGPT.

With Blake Lemoine's story in the background, people started interacting with more advanced AI systems and impressive generative AI tools.

At first, news articles debated issues like copyright and consent regarding AI-generated images (e.g., "'AI Creating Art' Is An Ethical And Copyright Nightmare") or how students will use ChatGPT to cheat on their assignments (e.g., "New York City blocks use of the ChatGPT bot in its schools," "The College Essay Is Dead").

2023: The AI monster must be tamed, or we will all die!

The AI arms race escalated when Microsoft's Bing and Google's Bard were launched back-to-back in February 2023. The overhyped utopian dreams helped fuel equally overhyped dystopian nightmares.

A turning point came after the release of New York Times columnist Kevin Roose's story on his disturbing conversation with Microsoft's new Bing chatbot. It has since become known as the "Sydney tried to break up my marriage" story. The printed version included parts of Roose's correspondence with the chatbot, framed as "Bing's Chatbot Drew Me In and Creeped Me Out."

"The normal way that you deal with software that has a user interface bug is you just go fix the bug and apologize to the customer that triggered it," responded Microsoft CTO Kevin Scott. "This one just happened to be one of the most-read stories in New York Times history."

From there on, it snowballed into a headline competition, as noted by the Center for Data Innovation: "Once news media first get wind of a panic, it becomes a game of one-upmanship: the more outlandish the claims, the better." It reached that point with TIME magazine's June 12, 2023, cover story: "THE END OF HUMANITY."

Two open letters on "existential risk" ("AI x-risk") and numerous opinion pieces were published in 2023.

The first open letter, published on March 22, 2023, called for a six-month pause. It was initiated by the Future of Life Institute, which was co-founded by Jaan Tallinn, Max Tegmark, Viktoriya Krakovna, Anthony Aguirre, and Meia Chita-Tegmark, and funded by Elon Musk (nearly 90% of FLI's funds).

The letter called for AI labs to "immediately pause for at least six months the training of AI systems more powerful than GPT-4." The open letter argued that "If such a pause cannot be enacted quickly, governments should institute a moratorium." The reasoning was in the form of a rhetorical question: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?"

It's worth mentioning that many who signed this letter did not actually believe AI poses an existential risk, but they wanted to draw attention to the various risks that worried them. The criticism was that "Many top AI researchers and computer scientists do not agree that this 'doomer' narrative deserves so much attention."

The second open letter claimed AI is as risky as pandemics and nuclear war. It was initiated by the Center for AI Safety, which was founded by Dan Hendrycks and Oliver Zhang, and funded by Open Philanthropy, an Effective Altruism grant-making organization run by Dustin Moskovitz and Cari Tuna (over 90% of CAIS's funds). The letter was launched in the New York Times with the headline, "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn."

Both letters received extensive media coverage. The former executive director of the Centre for Effective Altruism and current director of research at 80,000 Hours, Robert Wiblin, declared that "AI extinction fears have largely won the public debate." Max Tegmark celebrated that the "AI extinction threat is going mainstream."

These statements resulted in newspapers' opinion sections being flooded with doomsday theories. In extreme rhetoric, these op-eds warned against apocalyptic "end times" scenarios and called for sweeping regulatory interventions.

Dan Hendrycks, from the Center for AI Safety, warned we could be "on a pathway toward being supplanted as the earth's dominant species." (At the same time, he joined Elon Musk's xAI startup as an advisor.)

Zvi Mowshowitz (of the "Don't Worry About the Vase" Substack) claimed that "Competing AGIs might use Earth's resources in ways incompatible with our survival. We could starve, boil or freeze."

Michael Cuenco, associate editor of American Affairs, called to put the AI revolution in a "deep freeze" and for a literal "Butlerian Jihad."

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), called to "Shut down all the large GPU clusters. Shut down all the large training runs. Track all GPUs sold. Be willing to destroy a rogue datacenter by airstrike."

There has been growing pressure on policymakers to surveil and criminalize AI development.

Max Tegmark, who claimed "There won't be any humans on the planet in the not-too-distant future," was involved in the US Senate's AI Insight Forum.

Conjecture's Connor Leahy, who said, "I do not expect us to make it out of this century alive; I'm not even sure we'll get out of this decade," was invited to the House of Lords, where he proposed "a global AI 'Kill Switch.'"

All the grandiose claims and calls for an AI moratorium spread from mass media, through lobbying efforts, to politicians' talking points. When AI Doomers became media heroes and policy advocates, it revealed what stands behind them: a well-oiled "x-risk" machine.

Since 2014: Effective Altruism has funded the "AI Existential Risk" ecosystem with half a billion dollars

The increasing power of "AI Existential Safety" can be better understood if you "follow the money." Publicly available data from Effective Altruism organizations' websites and portals like OpenBook or Vipul Naik's Donation List demonstrate how this ecosystem became such an influential subculture: It was funded with half a billion dollars by Effective Altruism organizations - mainly from Open Philanthropy, but also SFF (the Survival and Flourishing Fund), FTX's Future Fund, and LTFF (the Long-Term Future Fund).

This funding did NOT include investments in near-term AI Safety concerns "such as effects on labor market, fairness, privacy, ethics, disinformation, etc." The focus was on "reducing risks from advanced AI such as existential risks." Hence, the hypothetical AI Apocalypse.

2024: Backlash is coming

On November 24, 2023, Harvard's Steven Pinker shared: "I was a fan of Effective Altruism. But it became cultish. Happy to donate to save the most lives in Africa, but not to pay techies to fret about AI turning us into paperclips. Hope they extricate themselves from this rut." In light of the half-a-billion-dollar funding for "AI Existential Safety," he added that this money could have saved 100,000 lives (a malaria calculation). Thus, "This is not Effective Altruism."
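For context, a rough back-of-the-envelope version of that calculation, assuming GiveWell's frequently cited estimate of roughly $5,000 to save one life through anti-malaria interventions (the per-life figure here is an assumption, not a number from Pinker's post):

$500,000,000 ÷ $5,000 per life saved ≈ 100,000 lives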

In 2023, EA-backed "AI x-risk" took over the AI industry, AI media coverage, and AI regulation.

Nowadays, more and more information is coming out about the "influence operation" and its impact on AI policy. See, for example, the reporting on Rishi Sunak's AI agenda and Joe Biden's AI order.

In 2024, this tech-billionaire-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of "The TECHLASH and Tech Crisis Communication" book and the "AI Panic" newsletter.
