
Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM)

by janrinok from SoylentNews on (#6W7D4)
Superintelligence Strategy: Expert Version

upstart writes:

Superintelligence Strategy: Expert Version:

Title: Superintelligence Strategy: Expert Version
Authors: Dan Hendrycks, Eric Schmidt, Alexandr Wang

Abstract: Rapid advances in AI are beginning to reshape national security. Destabilizing AI developments could rupture the balance of power and raise the odds of great-power conflict, while widespread proliferation of capable AI hackers and virologists would lower barriers for rogue actors to cause catastrophe. Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers. Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state's aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project -- through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters -- MAIM already describes the strategic picture AI superpowers find themselves in. Alongside this, states can increase their competitiveness by bolstering their economies and militaries through AI, and they can engage in nonproliferation to rogue actors to keep weaponizable AI capabilities out of their hands. Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead.

Journal Reference:
Hendrycks, Dan; Schmidt, Eric; Wang, Alexandr. Superintelligence Strategy: Expert Version. DOI: 10.48550/arXiv.2503.05628

Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM)

upstart writes:

Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM):

Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang are co-authors of a new paper called "Superintelligence Strategy" that warns the U.S. government against creating a Manhattan Project for so-called Artificial General Intelligence (AGI), because such a program could quickly get out of control. The gist of the argument is that creating such a program would invite retaliation or sabotage by adversaries as countries race for the most powerful AI capabilities on the battlefield. Instead, the U.S. should focus on developing methods, such as cyberattacks, that could disable threatening AI projects.

Schmidt and Wang are big boosters of AI's potential to advance society through applications like drug development and workplace efficiency. Governments, meanwhile, see it as the next frontier in defense, and the two industry leaders are essentially concerned that countries will end up in a race to create weapons with increasingly dangerous potential. Similar to how international agreements have reined in the development of nuclear weapons, Schmidt and Wang believe nation states should go slow on AI development rather than race one another to build AI-powered killing machines.

At the same time, however, both Schmidt and Wang are building AI products for the defense sector. The former's White Stork is building autonomous drone technologies, while Wang's Scale AI this week signed a contract with the Department of Defense to create AI "agents" that can assist with military planning and operations. After years of shying away from selling technology that could be used in warfare, Silicon Valley is now patriotically lining up to collect lucrative defense contracts.

All military defense contractors have a conflict of interest: they profit from promoting kinetic warfare, even when it is not morally justified. Other countries have their own military-industrial complexes, the thinking goes, so the U.S. needs to maintain one too. But in the end, innocent people suffer and die while powerful people play chess.

Palmer Luckey, the founder of defense tech darling Anduril, has argued that AI-powered targeted drone strikes are safer than launching nukes with a larger impact zone or planting land mines that have no targeting at all. And if other countries are going to keep building AI weapons, the argument goes, the U.S. should have the same capabilities as a deterrent. Anduril has been supplying Ukraine with drones that can target and attack Russian military equipment behind enemy lines.

Anduril recently ran an ad campaign displaying the plain text "Work at Anduril.com" covered with the word "Don't" in giant, graffiti-style spray-painted letters, seemingly playing to the idea that working for the military-industrial complex is the counterculture now.

Read more of this story at SoylentNews.
