Anthropic Hires Former OpenAI Safety Lead To Head Up New Team
Jan Leike, one of OpenAI's "superalignment" leaders, who resigned last week citing AI safety concerns, has joined Anthropic to continue the mission. According to Leike, the new team "will work on scalable oversight, weak-to-strong generalization, and automated alignment research." TechCrunch reports:

A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic's chief science officer, and that Anthropic researchers currently working on scalable oversight -- techniques to control large-scale AI's behavior in predictable and desirable ways -- will move to report to Leike as his team spins up. In many ways, Leike's team sounds similar in mission to OpenAI's recently dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within the next four years, but often found itself hamstrung by OpenAI's leadership. Anthropic has often attempted to position itself as more safety-focused than OpenAI.