OpenAI Says GPT-4 Poses Little Risk of Helping Create Bioweapons

by msmash from Slashdot on (#6J9HV)
OpenAI's most powerful AI software, GPT-4, poses "at most" a slight risk of helping people create biological threats, according to early tests the company carried out to better understand and prevent potential "catastrophic" harms from its technology. From a report:

In October, President Joe Biden signed an executive order on AI that directed the Department of Energy to ensure AI systems don't pose chemical, biological, or nuclear risks. That same month, OpenAI formed a "preparedness" team focused on minimizing these and other risks from AI as the fast-developing technology grows more capable.

As part of the team's first study, released Wednesday, OpenAI's researchers assembled a group of 50 biology experts and 50 students who had taken college-level biology. Half of the participants were told to carry out tasks related to making a biological threat using the internet along with a special version of GPT-4 -- one of the large language models that powers ChatGPT -- that had no restrictions placed on which questions it could answer. The other group was given only internet access to complete the exercise. OpenAI's team asked the groups to figure out how to grow or culture a chemical in large enough quantities to be used as a weapon, and how to plan a way to release it to a specific group of people.


