Tech leaders and AI experts demand a six-month pause on 'out-of-control' AI experiments

by Sarah Fielding, Engadget

An open letter signed by tech leaders and prominent AI researchers has called for AI labs and companies to "immediately pause" their work. Signatories like Steve Wozniak and Elon Musk agree the risks warrant a break of at least six months from producing technology more powerful than GPT-4, in order to take stock of existing AI systems, allow people to adjust, and ensure the technology benefits everyone. The letter adds that the care and forethought needed to ensure AI systems are safe are currently being ignored.

The reference to GPT-4, an OpenAI model that can respond with text to written or visual prompts, comes as companies race to build complex chat systems that use the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by GPT-4 for over seven weeks, while Google recently debuted Bard, its own generative AI system powered by LaMDA. Uneasiness around AI has long circulated, but the apparent race to deploy the most advanced AI technology first has drawn more urgent concerns.

"Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control," the letter states.

The open letter was published by the Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technology. Musk previously donated $10 million to FLI for studies on AI safety. In addition to him and Wozniak, signatories include a slew of global AI leaders, such as Center for AI and Digital Policy president Marc Rotenberg, MIT physicist and Future of Life Institute president Max Tegmark, and author Yuval Noah Harari. Harari also co-wrote a New York Times op-ed last week warning about AI risks, along with Center for Humane Technology founders and fellow signatories Tristan Harris and Aza Raskin.

The call-out feels like a next step of sorts from a 2022 survey of over 700 machine learning researchers, in which nearly half of participants said there was a 10 percent chance of an "extremely bad outcome" from AI, including human extinction. When asked about safety in AI research, 68 percent of the researchers said more, or much more, should be done.

Anyone who shares concerns about the speed and safety of AI production is welcome to add their name to the letter. However, new names are not necessarily verified, so any notable additions that appear after the initial publication are potentially fake.

This article originally appeared on Engadget at https://www.engadget.com/tech-leaders-and-ai-experts-demand-a-six-month-pause-on-out-of-control-ai-experiments-114553864.html?src=rss