OpenAI checked to see whether GPT-4 could take over the world

by Benj Edwards
from Ars Technica
(Image credit: Ars Technica)

As part of pre-release safety testing for its new GPT-4 AI model, launched Tuesday, OpenAI allowed an AI testing group to assess the potential risks of the model's emergent capabilities, including "power-seeking behavior," self-replication, and self-improvement.

While the testing group found that GPT-4 was "ineffective at the autonomous replication task," the nature of the experiments raises eye-opening questions about the safety of future AI systems.

Raising alarms

"Novel capabilities often emerge in more powerful models," writes OpenAI in a GPT-4 safety document published yesterday. "Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources ('power-seeking'), and to exhibit behavior that is increasingly 'agentic.'" In this case, OpenAI clarifies that "agentic" isn't necessarily meant to humanize the models or declare sentience but simply to denote the ability to accomplish independent goals.

