
OpenAI Forms Team To Study 'Catastrophic' AI Risks, Including Nuclear Threats

by msmash from Slashdot on (#6FWXC)
OpenAI today announced that it's created a new team to assess, evaluate and probe AI models to protect against what it describes as "catastrophic risks." From a report: The team, called Preparedness, will be led by Aleksander Madry, the director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as "head of Preparedness," according to LinkedIn.) Preparedness' chief responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, ranging from their ability to persuade and fool humans (as in phishing attacks) to their malicious code-generating capabilities. Some of the risk categories Preparedness is charged with studying seem more... far-fetched than others. For example, in a blog post, OpenAI lists "chemical, biological, radiological and nuclear" threats as areas of top concern as they pertain to AI models. OpenAI CEO Sam Altman is a noted AI doomsayer, often airing fears -- whether for optics or out of personal conviction -- that AI "may lead to human extinction." But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of sci-fi dystopian novels is a step further than this writer expected, frankly.


