
The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess

by Chloe Xiang

More than 30,000 people, including Tesla's Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang, and a few leading AI researchers, have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4.

The letter immediately caused a furor: signatories walked back their positions, some notable signatures turned out to be fake, and many more AI researchers and experts vocally disagreed with its proposal and approach.

The letter was penned by the Future of Life Institute, a nonprofit organization whose stated mission is "to reduce global catastrophic and existential risk from powerful technologies." It is also host to some of the biggest proponents of longtermism, a kind of secular religion boosted by many members of the Silicon Valley tech elite, since it preaches seeking massive wealth to direct towards problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried.

Specifically, the institute focuses on mitigating long-term "existential" risks to humanity such as superintelligent AI. Musk, who has expressed longtermist beliefs, donated $10 million to the institute in 2015.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter states. "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

"This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities," the letter clarifies, referring to the arms race between big tech companies like Microsoft and Google, which in the past year have released a number of new AI products.

Other notable signatories include Stability AI CEO Emad Mostaque, author and historian Yuval Noah Harari, and Pinterest co-founder Evan Sharp. A number of people who work at companies participating in the AI arms race, including Google DeepMind and Microsoft, have also signed. The Future of Life Institute told Motherboard that all signatories were "independently verified through direct communication." No one from OpenAI, which develops and commercializes the GPT series of AI models, has signed the letter.

Despite this verification process, the letter initially carried a number of false signatories, including people impersonating OpenAI CEO Sam Altman, Chinese president Xi Jinping, and Meta's chief AI scientist Yann LeCun, before the institute cleaned up the list and paused the display of new signatures while it verifies each one.

The letter has been scrutinized by many AI researchers, and even by its own signatories, since it was published on Tuesday. Gary Marcus, a professor of psychology and neural science at New York University, told Reuters the letter "isn't perfect, but the spirit is right." Similarly, Emad Mostaque, the CEO of Stability AI, who has pitted his firm against OpenAI as a truly "open" AI company, tweeted, "So yeah I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter."

AI experts have criticized the letter for furthering the "AI hype" cycle rather than listing or calling for concrete action on harms that exist today. Some argued that it promotes longtermism, a worldview that has been criticized as harmful and anti-democratic because it valorizes the uber-wealthy and can be used to justify morally dubious actions in the present for the sake of a hypothetical far future.

Emily M. Bender, a professor in the Department of Linguistics at the University of Washington and co-author of the first paper the letter cites, tweeted that the open letter is "dripping with #AIhype" and that it misuses her research. The letter says, "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research," but Bender counters that her research specifically points to current large language models and their use within oppressive systems, which is much more concrete and pressing than hypothetical future AI.

"We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about 'too powerful AI'," she tweeted. "Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources)."

"It's essentially misdirection: bringing everyone's attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those: for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used," Sasha Luccioni, a research scientist and climate lead at Hugging Face, told Motherboard.

Arvind Narayanan, an associate professor of computer science at Princeton, echoed that the open letter was full of AI hype that "makes it harder to tackle real, occurring AI harms."

"Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" the open letter asks.

Narayanan said these questions are "nonsense" and "ridiculous." The very far-out questions of whether computers will replace humans and take over human civilization are part of a longtermist mindset that distracts us from current issues. After all, AI is already being integrated into people's jobs and reducing the need for certain occupations, without being a "nonhuman mind" that will make us "obsolete."

"I think these are valid long-term concerns, but they've been repeatedly strategically deployed to divert attention from present harms, including very real information security and safety risks!" Narayanan tweeted. "Addressing security risks will require collaboration and cooperation. Unfortunately the hype in this letter, the exaggeration of capabilities and existential risk, is likely to lead to models being locked down even more, making it harder to address risks."

In a press conference on Wednesday, one of the notable signatories, Yoshua Bengio, a Canadian computer scientist who was among the earliest developers of deep learning and is the founder and scientific director of the research institute Mila, told reporters that the six-month break is necessary for governance bodies, including governments, to understand, audit, and verify AI systems so that they are safe for the public.

Bengio said that there is currently a dangerous concentration of power, which is bad for capitalism, and that AI tools have the potential to destabilize democracy. "There is a conflict between democratic values and the ways these tools are being developed," he said.

Max Tegmark, a professor of physics at MIT's NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) and the president of the Future of Life Institute, said that a worst-case scenario is that humans will gradually lose more and more control over civilization. The current risk, he said, is that "we lose control to a bunch of unelected people in tech companies who get an outsized influence."

These comments gesture at a broad future, with hints of fear-mongering over the loss of democracy and of a certain ideal of capitalism, without offering a single concrete measure beyond the six-month pause.

Timnit Gebru, a computer scientist and the founder of the Distributed Artificial Intelligence Research Institute, tweeted that it is ironic that the signatories call for a pause on training models more powerful than GPT-4, yet fail to address the large number of concerns surrounding GPT-4 itself.

"The other stuff can be 'catastrophic' but the current stuff with all the Africans being paid peanuts to have PTSD, massive data theft etc is 'human flourishing' and 'benefits for everyone' I believe," Gebru added.
