2024: AI Panic Flooded The Zone, Leading To A Backlash
Last December, we published a recap, "2023: The Year of AI Panic."
Now, it's time to ask: What happened to the AI panic in 2024?
TL;DR - It was a rollercoaster ride: AI panic reached a peak and then came crashing down.
Two cautionary tales: The EU AI Act and California's SB-1047.
Please note: 1. The focus here is on the AI panic angle of the news, not other events such as product launches. The aim is to shed light on the effects of this extreme AI discourse.
2. The 2023 recap provides context for what happened a year later. Seeing how AI doomers took it too far in 2023 gives a better understanding of why it backfired in 2024.
2023's AI panic
At the end of 2022, ChatGPT took the world by storm. It sparked the "Generative AI" arms race. Shortly thereafter, we were bombarded with doomsday scenarios of an AI takeover, an AI apocalypse, and "The END of Humanity." The "AI Existential Risk" (x-risk) movement has gradually, then suddenly, moved from the fringe to the mainstream. Apart from becoming media stars, its members also influenced Congress and the EU. They didn't shift the Overton window; they shattered it.
"2023: The Year of AI Panic" summarized the key moments: the two "Existential Risk" open letters (the first by the Future of Life Institute and the second by the Center for AI Safety), the AI Dilemma and Tristan Harris' x-risk advocacy (now known to be funded, in part, by the Future of Life Institute), the flood of doomsaying in traditional media, followed by numerous AI policy proposals that focus on existential threats and seek to surveil and criminalize AI development. Oh, and TIME magazine had a full-blown love affair with AI doomers (it still does).

- AI Panic Agents
Throughout the years, Eliezer Yudkowsky from Berkeley's MIRI (Machine Intelligence Research Institute) and his "End of the World" beliefs heavily influenced a subculture of "rationalists" and AI doomers. In 2023, they embarked on a policy and media tour.
In a TED talk, "Will Superintelligent AI End the World?", Eliezer Yudkowsky said, "I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us [...] It could kill us because it doesn't want us making other superintelligences to compete with it. It could kill us because it's using up all the chemical energy on earth, and we contain some chemical potential energy." In TIME magazine, he advocated to "Shut it All Down": "Shut down all the large GPU clusters. Shut down all the large training runs. Be willing to destroy a rogue datacenter by airstrike."
Max Tegmark from the Future of Life Institute said: "There won't be any humans on the planet in the not-too-distant future. This is the kind of cancer that kills all of humanity."
Next thing you know, he was addressing the U.S. Congress at the "AI Insight Forum."
And successfully pushing the EU to include "General-Purpose AI systems" in the "AI Act" (discussed further in the 2024 recap).
Connor Leahy from Conjecture said: "I do not expect us to make it out of this century alive. I'm not even sure we'll get out of this decade!"
Next thing you know, he appeared on CNN and later tweeted: "I had a great time addressing the House of Lords about extinction risk from AGI." He suggested a "cap on computing power" at 10^24 FLOPs (Floating Point Operations) and a global AI "kill switch."
Dan Hendrycks from the Center for AI Safety expressed an 80% probability of doom and claimed, "Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation."[1] He warned that we are on a pathway toward being "supplanted as the Earth's dominant species." Hendrycks also suggested a "CERN for AI," imagining "a big multinational lab that would soak up the bulk of the world's graphics processing units [GPUs]. That would sideline the big for-profit labs by making it difficult for them to hoard computing resources." He later speculated that AI regulation in the U.S. might pave the way for some shared international standards that might make "China willing to also abide by some of these standards" (because, of course, China will slow down as well... That's how geopolitics works!).
Next thing you know, he collaborated with Senator Scott Wiener of California to pass an AI Safety bill, SB-1047 (more on this bill in the 2024 recap).

A "follow the money" investigation revealed it's not a grassroots, bottom-up movement, but a top-down movement heavily funded by a fewEffective Altruism(EA) billionaires, mainly DustinMoskovitz, JaanTallinn, and SamBankman-Fried.
The 2023 recap ended with this paragraph: "In 2023, EA-backed 'AI x-risk' took over the AI industry, AI media coverage, and AI regulation. Nowadays, more and more information is coming out about the 'influence operation' and its impact on AI policy. See, for example, the reporting on Rishi Sunak's AI agenda and Joe Biden's AI order. In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow."
2024: Act 1. The AI panic further flooded the zone
With 1.6 billion dollars from the Effective Altruism movement,[2] the "AI Existential Risk" ecosystem has grown to hundreds of organizations.[3] In 2024, their policy advocacy became more authoritarian.
- The Center for AI Policy (CAIP) outlined the goal: to establish a strict licensing regime, clamp down on open-source models, and impose "civil and criminal liability on developers."
- The Narrow Path" proposal started with AI poses extinction risks to human existence" (according to an accompanying report,The Compendium, By default, God-like AI leads to extinction"). Instead of asking for a six-month AI pause, this proposal asked for a 20-year pause. Why? Because two decadesprovide the minimum time frame to construct our defenses."
Note that these "AI x-risk" groups sought to ban currently existing AI models.
- The Future of Life Institute proposed stringent regulation on models above a compute threshold of 10^25 FLOPs, explaining it would apply to fewer than "10 current systems."
- The International Center for Future Generations (ICFG) proposed that open-sourcing of advanced AI models trained on 10^25 FLOP or more should be "prohibited."
- Gladstone AI's "Action Plan"[4] claimed that these models are "considered dangerous until proven safe" and that releasing them could be grounds for "criminal sanctions including jail time for the individuals responsible."
- Beforehand, the Center for AI Safety (CAIS) proposed to ban open-source models trained beyond 10^23 FLOPs.
Llama 2 was trained with > 10^23 FLOPs and thus would have been banned (see the back-of-the-envelope sketch after this list).
- The AI Safety Treaty and the Campaign for AI Safety wrote similar proposals, the latter spelling it out as "Prohibiting the development of models above the level of OpenAI GPT-3."
- Jeffrey Ladish from Palisade Research (also from the Center for Humane Technology and CAIP) said, "We can prevent the release of a LLaMA 2! We need government action on this asap." Simeon Campos from SaferAI set the threshold at Llama-1.
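To put those thresholds in perspective, here is a rough back-of-the-envelope sketch (a minimal Python example, not anyone's official methodology). It assumes the common "training FLOPs ≈ 6 × parameters × training tokens" rule of thumb for dense transformers, plugged with Meta's reported figures for Llama 2 70B (70 billion parameters, roughly 2 trillion tokens):

```python
# Back-of-the-envelope training-compute estimate.
# Assumptions: the common "FLOPs ~= 6 * parameters * training tokens"
# rule of thumb, and Meta's reported figures for Llama 2 70B.

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough total training compute, in FLOPs, for a dense transformer."""
    return 6 * params * tokens

llama2_70b = estimate_training_flops(params=70e9, tokens=2e12)
print(f"Llama 2 70B: ~{llama2_70b:.1e} FLOPs")  # ~8.4e+23

# Compare against the thresholds named in the proposals above.
for exponent in (23, 24, 25):
    verdict = "exceeds" if llama2_70b > 10**exponent else "stays below"
    print(f"  {verdict} the 10^{exponent} FLOPs threshold")
# -> exceeds 10^23, stays below 10^24 and 10^25
```

By this estimate, Llama 2 70B lands at roughly 8 × 10^23 FLOPs: over CAIS's 10^23 line, but under the 10^24 and 10^25 lines in the other proposals.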
All those proposed prohibitions claimed that thresholds we have already passed would bring DOOM.
It was ridiculous back then; it looks more ridiculous now.
"It's always just a bit higher than where we are today," venture capitalist Rohit Krishnan commented. "Imagine if we had done this!!"
In a report entitled "What mistakes has the AI safety movement made?", it was argued that AI safety is "too structurally power-seeking: trying to raise lots of money, trying to gain influence in corporations and governments, trying to control the way AI values are shaped, favoring people who are concerned about AI risk for jobs and grants, maintaining the secrecy of information, and recruiting high school students to the cause."
YouTube is flooded with prophecies of AI doom, some of which target children. Among the channels tailored for kids are Kurzgesagt and Rational Animations, both funded by Open Philanthropy.[5] These videos serve a specific purpose, Rational Animations admitted: "In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an 'intellectual rabbit hole' to learn more."
"AI Doomerism is becoming a big problem, and it's well funded," observed Tobi Lutke, Shopify CEO. "Like all cults, it's recruiting."

Also, as in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that's what it takes to "save humanity" from extinction).

Both PauseAI and StopAI stated that they are non-violent movements that "do not permit even joking about violence." That's a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
2024: Act 2. The AI panic triggered a backlash
In 2024, AI panic reached the point of practicality and began to backfire.
- The EU AI Act as a cautionary tale
In December 2023, European Union (EU) negotiators struck a deal on the most comprehensive AI rules, the "AI Act." "Deal!" tweeted European Commissioner Thierry Breton, celebrating how "The EU becomes the very first continent to set clear rules for the use of AI."
Eight months later, a Bloomberg article discussed how the new AI rules "risk entrenching the transatlantic tech divide rather than narrowing it."
Gabriele Mazzini, the EU AI Act's architect and lead author, expressed regret and admitted that its reach has ended up being too broad: "The regulatory bar maybe has been set too high. There may be companies in Europe that could just say there isn't enough legal certainty in the AI Act to proceed."

In September, the EU released "The Future of European Competitiveness" report. In it, Mario Draghi, former President of the European Central Bank and former Prime Minister of Italy, expressed a similar observation: "Regulatory barriers to scaling up are particularly onerous in the tech sector, especially for young companies."
In December, there were additional indications of a growing problem.
1. When OpenAI released Sora, its video generator, Sam Altman reacted to being unable to operate in Europe: "We want to offer our products in Europe ... We also have to comply with regulation."[6]

2. "A Visualization of Europe's Non-Bubbly Economy" by Andrew McAfee from MIT Sloan School of Management exploded online as hammering the EU became a daily habit.

These examples are relevant to the U.S., as California introduced its own attempt to mimic the EU when Sacramento emerged as America's Brussels.
- California's bill SB-1047 as another cautionary tale
Senator Scott Wiener's SB-1047 was supported by EA-backed AI safety groups. The bill included strict developer liability provisions, and AI experts from academia and entrepreneurs from startups ("little tech") were caught off guard. They built a coalition against the bill. The headline collage below illustrates the criticism that the bill would strangle innovation, AI R&D (Research and Development), and the open-source community in California and around the world.

The bill was eventually rejected by Gavin Newsom's veto. The governor explained that there's a need for an evidence-based, workable regulation.

You've probably spotted the pattern by now. 1. Doomers scare the hell out of people. 2. The fear supports their call for a strict regulatory regime. 3. Those who listen to the fearmongering regret it.
Why? Because 1. Doomsday ideology is extreme. 2. The bills are vaguely written. 3. They don't consider tradeoffs.
2025
- The vibe shift in Washington
The new administration seems less inclined to listen to AI doomsaying.
Donald Trump's top picks for relevant positions prioritize American dynamism.
The Bipartisan House Task Force on Artificial Intelligence has just released an AI policy report stating, "Small businesses face excessive challenges in meeting AI regulatory compliance," "There is currently limited evidence that open models should be restricted," and "Congress should not seek to impose undue burdens on developers in the absence of clear, demonstrable risk."
There will probably be a fight at the state level, and if SB-1047 is any indication, it will be intense.
- Will the backlash against the AI panic grow?
This panic cycle is not yet at the point of reckoning. But eventually, society will need to confront how the extreme ideology of "AI will kill us all" became so influential in the first place.

-----------
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of "The TECHLASH and Tech Crisis Communication" book and the "AI Panic" newsletter.
-----------
Endnotes
- Dan Hendrycks' tweet and Arvind Narayanan and Sayash Kapoor's article in "AI Snake Oil": "AI existential risk probabilities are too unreliable to inform policy." The similarities = a coincidence.
- This estimation includes the revelation that Tegmark's Future of Life Institute was no longer a $2.4-million organization but a $674-million organization. It managed to convert a cryptocurrency donation (Shiba Inu tokens) to $665 million (using FTX/Alameda Research). Through its new initiative, the Future of Life Foundation (FLF), FLI aims to "help start 3 to 5 new organizations per year." This new visualization of Open Philanthropy's funding shows that the existential risk ecosystem ("Potential Risks from Advanced AI" + "Global Catastrophic Risks" + "Global Catastrophic Risks Capacity Building," different names for funding Effective Altruism AI Safety organizations/groups) has received ~$780 million (instead of $735 million in the previous calculation).
- The recruitment in elite universities can be described as "bait-and-switch": From Global Poverty to AI Doomerism. "The Funnel Model" is basically, "Come to save the poor or animals; stay to prevent Skynet."
- The U.S. government had funded Gladstone AI's report as part of a federal contract worth $250,000.
- Kurzgesagt got $7,533,224 from Open Philanthropy and Rational Animations got $4,265,355. Sam Bankman-Fried planned to add $400,000 to Rational Animations but was convicted of seven fraud charges for stealing $10 billion from customers and investors in "one of the largest financial frauds of all time."
- Altman was probably referring to a mixed salad of the new AI Act with previous regulations like GDPR (General Data Protection Regulation) and DMA (Digital Markets Act).