‘AI Pause’ Open Letter Stokes Fear and Controversy
The recent call for a six-month "AI pause," in the form of an online letter demanding a temporary artificial-intelligence moratorium, has elicited concern among IEEE members and the larger technology world. The Institute contacted some of the members who signed the open letter, which was published online on 29 March. The signatories expressed a range of fears and apprehensions, including concerns about the rampant growth of AI large language models (LLMs) as well as unchecked AI media hype.
The open letter, titled "Pause Giant AI Experiments," was organized by the nonprofit Future of Life Institute and signed by more than 10,000 people (as of 5 April). It calls for a cessation of research on all AI systems "more powerful than GPT-4."
It's the latest of a host of recent "AI pause" proposals, including a suggestion by Google's Francois Chollet of a six-month moratorium on "people overreacting to LLMs" in either direction.
In the news media, the open letter has inspired straight reportage, criticism for not going far enough ("shut it all down," Eliezer Yudkowsky wrote in Time magazine), and criticism for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.
IEEE members have expressed a similar diversity of opinions.
"AI can be manipulated by a programmer to achieve objectives contrary to the moral, ethical, and political standards of a healthy society," says IEEE Fellow Duncan Steel, a professor of electrical engineering, computer science, and physics at the University of Michigan, in Ann Arbor. "I would like to see an unbiased group, without personal or commercial agendas, create a set of standards that has to be followed by all users and providers of AI."
IEEE Senior Life Member Stephen Deiss, a retired neuromorphic engineer from the University of California, San Diego, says he signed the letter because the AI industry is "unfettered and unregulated."
"This technology is as important as the coming of electricity or the Net," Deiss says. "There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm."
Eleanor Nell" Watson, an AI ethicist who has taught IEEE courses on the subject, says the open letter raises awareness over such near-term concerns as AI systems cloning voices and performing automated conversations-which she says presents a serious threat to social trust and well-being."
Although Watson says she's glad the open letter has sparked debate, she confesses to having "some doubts about the actionability of a moratorium, as less scrupulous actors are especially unlikely to heed it."
IEEE Fellow Peter Stone, a computer science professor at the University of Texas at Austin, says some of the biggest threats posed by LLMs and similar big-AI systems remain unknown.
"We are still seeing new, creative, unforeseen uses (and possible misuses) of existing models," Stone says.
"My biggest concern is that the letter will be perceived as calling for more than it is," he adds. "I decided to sign it and hope for an opportunity to explain a more nuanced view than is expressed in the letter."
"I would have written it differently," he says of the letter. "But on balance I think it would be a net positive to let the dust settle a bit on the current LLM versions before developing their successors."
IEEE Spectrum has extensively covered one of the Future of Life Institute's previous campaigns, which urged a ban on "killer robots." The outlines of that debate, which began with a 2016 open letter, parallel the criticism being leveled at the current "AI pause" campaign: that there are real problems and challenges in the field that, in both cases, are at best poorly served by sensationalism.
One outspoken AI critic, Timnit Gebru of the Distributed AI Research Institute, is similarly critical of the open letter. She describes the fear being promoted in the "AI pause" campaign as stemming from what she calls "long-termism": discerning AI's threats only in some futuristic, dystopian sci-fi scenario rather than in the present day, where AI's bias-amplification and power-concentration problems are well known.
IEEE Member Jorge E. Higuera, a senior systems engineer at Circontrol in Barcelona, says he signed the open letter because "it can be difficult to regulate superintelligent AI, particularly if it is developed by authoritarian states, shadowy private companies, or unscrupulous individuals."
IEEE Fellow Grady Booch, chief scientist for software engineering at IBM, signed the letter, although in his discussion with The Institute he also cited Gebru's work and her reservations about AI's pitfalls.
"Generative models are unreliable narrators," Booch says. "The problems with large language models are many: There are legitimate concerns regarding their use of information without consent; they have demonstrable racial and sexual biases; they generate misinformation at scale; they do not understand but only offer the illusion of understanding, particularly for domains on which they are well trained with a corpus that includes statements of understanding.
"These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or their policies for handling data from users. My experience and my professional ethics tell me I must take a stand, and signing the letter is one of those stands."