
OpenAI Forms New Team to Master ‘Superintelligent’ AI

by Krishi Chowdhary, Techreport

OpenAI, the creator of ChatGPT, has recently taken a momentous step toward keeping future superintelligent AI systems safe. The company's chief scientist and co-founder, Ilya Sutskever, and Jan Leike, a lead on its alignment team, have unveiled an audacious new project: the Superalignment initiative.

As Sutskever and Leike explain in their blog post, AI is advancing so quickly that it may soon surpass human intelligence.

However, the current methods employed for aligning AI with human interests, like reinforcement learning from human feedback, may prove inadequate as AI begins to outstrip human comprehension.

"We currently lack a failsafe for steering or restraining a superintelligent AI, preventing it from going rogue." - The Superalignment Team blog post

To take on this monumental task, OpenAI has marshaled a new Superalignment team, co-led by Sutskever and Leike.

The unit has been allocated 20% of OpenAI's computational resources. It unites the company's alignment division with researchers from other disciplines behind a single mission: gaining control over superintelligent AI.

The team's strategy is to develop a "human-level automated alignment researcher" that can learn from human feedback and assist in evaluating other AI systems.

Over the next four years, they intend to overcome the formidable technical hurdles that currently prevent humans from reliably controlling such AI.

Ultimately, they envision AI as capable of independently researching alignment solutions. As these AI systems progress, they could potentially take over the alignment work currently done by humans. Furthermore, they might even improve upon these processes, ensuring their successors align more accurately with human values.

However, the team acknowledges the risks of this approach. These include the potential amplification of biases, inconsistencies, or vulnerabilities, especially as they rely on AI for evaluation. Despite these risks, they believe machine learning has the potential to solve alignment challenges.

They commit to sharing the results of their efforts, not just within OpenAI but with the larger AI community as well.

Securing a Safer Future for Superintelligent AI

Sutskever, Leike, and their team view the task of superintelligence alignment as primarily a machine learning issue.

They argue that even though the top machine learning experts are not currently focused on alignment, that could change, and they believe these professionals' expertise will be vital to solving the problem.

They envision the Superalignment team's mandate reaching beyond OpenAI. They plan to share their discoveries and innovations with the wider community. This effort will contribute to the safety and alignment of not just OpenAI models but all AI models.

By establishing the Superalignment team, OpenAI reaffirms its commitment to creating AI that is both beneficial and safe.

Their ambition stretches beyond the confines of OpenAI: they aim to build a larger ecosystem of aligned, safe AI, ultimately enabling the safe incorporation of superintelligent AI into society.

OpenAI's Superalignment initiative comes at a pivotal time in the evolution of AI. As AI becomes more powerful and complex, ensuring its alignment with human values is increasingly urgent.

Though the magnitude of this task is intimidating, OpenAI is not deterred. Their bold and innovative approach marks a substantial step in the right direction. The global AI community and the world at large eagerly anticipate the outcomes of this venture.
