Anthropic Hires Former OpenAI Safety Lead To Head Up New Team

by BeauHD from Slashdot on (#6N4C2)
Jan Leike, one of the leaders of OpenAI's "superalignment" effort, who resigned last week over AI safety concerns, has joined Anthropic to continue the mission. According to Leike, the new team "will work on scalable oversight, weak-to-strong generalization, and automated alignment research." TechCrunch reports: A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic's chief science officer, and that Anthropic researchers currently working on scalable oversight -- techniques to control large-scale AI's behavior in predictable and desirable ways -- will move to report to Leike as his team spins up. In many ways, Leike's team sounds similar in mission to OpenAI's recently dissolved Superalignment team. That team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within the next four years, but it often found itself hamstrung by OpenAI's leadership. Anthropic has long sought to position itself as more safety-focused than OpenAI.

