Musk’s AI-Related Open Letter Ignites Controversies
The Elon Musk-signed open letter calling for an immediate moratorium on AI development projects has sparked significant controversy. On March 22nd, the Future of Life Institute published the letter, which carried roughly 2,000 signatures and warned that emerging AI projects could pose serious future threats to technology and society.
The letter called for a six-month pause in the development of AI systems more powerful than GPT-4, the successor to the model behind ChatGPT, which launched last year.
GPT-4, built by Microsoft-backed OpenAI, can already hold human-like conversations, produce quick summaries, and compose lyrics.
Since OpenAI's breakthrough, several companies have rushed to introduce similar products and services. That rush alarmed Musk and other well-known figures, including Steve Wozniak and Yoshua Bengio, who joined the campaign out of concern that developing ever more advanced AI models could lead to a "loss of control of our civilization."
The Debated Citations
The signatories claim that human-competitive intelligence systems pose significant risks to humanity. To support that claim, the letter cites 12 pieces of research from experts, including current and former employees of OpenAI, Google, and Google subsidiary DeepMind.
"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts." – Open Letter
However, four experts cited in the letter later said that the Future of Life Institute, the think tank behind the campaign, had used their research to make unsupported claims. They also said that, when first launched, the letter lacked verification protocols for signing and racked up signatures from people who had never actually signed it.
In reality, these critics argue, the models are already absorbing sexist and racist biases, and those biases are the true threat to society.
The list of disputed signatures includes Meta's chief AI scientist, Yann LeCun, and Chinese politician Xi Jinping. LeCun took to Twitter, Musk's own platform, to clarify that he did not support the coordinated effort. The Future of Life Institute is also said to be funded and fueled by the Musk Foundation, and critics argue it may therefore prioritize longtermism and apocalyptic scenarios over the real, present-day concerns associated with AI.
The letter cited the paper 'On the Dangers of Stochastic Parrots,' co-authored by Margaret Mitchell, Timnit Gebru, Angelina McMillan-Major, and Emily M. Bender. Mitchell and her co-authors previously worked at Google; she is now chief ethics scientist at the emerging AI firm Hugging Face. She strongly criticized the letter.
According to Mitchell, it is not at all clear what the signatories mean by "more powerful than GPT-4." She also expressed concern that the letter treats several questionable ideas as settled and pushes a narrative on AI that benefits no one but FLI's supporters.
Another Rebuff and FLI's Response
Besides Mitchell and her co-authors, Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, has also taken issue with the letter's use of her work. Her paper, published last year, points to serious risks associated with the widespread use of AI.
According to Dori-Hacohen, AI does not need human-like intelligence to exacerbate those risks. Her research, however, focuses on more pressing factors, such as AI's influence on decision-making around existential threats like nuclear war and climate change.
In response to the controversy, FLI president Max Tegmark said that both the short-term and long-term risks of AI are concerning and that society should take them seriously. He also clarified that citing someone means endorsing a specific sentence, not that individual's thinking as a whole.