OpenAI Dissolves AI Safety Team after Co-founder Resignation
- OpenAI has now dissolved the team that focused on the safe growth of AI systems.
- The recent resignations of Ilya Sutskever and Jan Leike may be due to a shift in focus within OpenAI's core leadership.
OpenAI has reportedly dissolved its team that focused on the long-term risks and the safe growth of Artificial Intelligence.
The team was formed in early July last year with the aim of aligning AI with human interests. However, in less than a year, OpenAI appears to have strayed from its initial commitment to AI safety. The series of recent resignations points in the same direction: co-founder Ilya Sutskever and Jan Leike, who co-led the safety team, resigned on Friday, just hours apart. Leike has since posted an extensive note on X explaining the circumstances under which he resigned.
According to Leike, there were widening disagreements between him and OpenAI's leadership over the company's core priorities. Over the last few months, his team had been deprived of sufficient computing resources, making it increasingly difficult to carry out its research. Leike added that OpenAI should focus first on becoming a safety-first AGI company.
He also stressed that building machines smarter than humans is an inherently dangerous endeavor, and that OpenAI shoulders an enormous responsibility. However, safety culture and processes seem to have slipped down OpenAI's list of priorities over the last few years.
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there." - Jan Leike
The statement suggests that the company is now focused more on commercial success than on upholding its founding values. Remember, Elon Musk filed a lawsuit against OpenAI in March, accusing it of straying from its original mission.
Similar Reasons Behind Altman's Removal
Last November, Altman was removed as CEO of the company, with the board citing a lack of candor in his communications. However, according to reports, Altman had been quietly working on Project Q*, reportedly a powerful AI system geared toward advanced reasoning.
Several experts had warned him about the risks of such a powerful AI system, but Altman reportedly pressed ahead with the work regardless. The recent resignations voice similar concerns. Interestingly, Sutskever had played a major role in Altman's removal in November 2023. After Altman's return within a week, however, Sutskever apologized for his actions and remained at OpenAI, though he was not part of the reconstituted board.
At the time, Altman told reporters that he hoped to work with Sutskever for the rest of their careers. Even after Sutskever's resignation, Altman was full of praise for him, hailing him as one of the greatest minds of his generation and a guiding light in the AI field.
"His brilliance and vision are well known; his warmth and compassion are less well known but no less important." - Sam Altman
However, with both Sutskever and Leike gone and the AI safety team dissolved, it remains to be seen what Altman does next that he could not have done with the two of them still there. Do dangers loom large for the human race? Are more job cuts and fewer hires on the cards?
Whatever OpenAI's next move is, we can only hope it's in the best interest of humanity and that Altman doesn't head down a purely profit-driven road.