
What’s next for OpenAI

by
Melissa Heikkilä, Will Douglas Heaven
from MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

OpenAI, are you okay, babe? This past weekend has been a fever dream in the AI world. The board of OpenAI, the world's hottest AI company, shocked everyone by firing CEO Sam Altman. Cue an AI-safety coup, chaos, and a new job at Microsoft for Altman.

If you were offline this weekend, my colleague Will Douglas Heaven and I break down what you missed and what's next for the AI industry.

What happened

Friday afternoon
Sam Altman was summoned to a Google Meet meeting, where chief scientist Ilya Sutskever announced that OpenAI's board had decided Altman had been "not consistently candid in his communications" with them, and he was fired. OpenAI president and cofounder Greg Brockman and a string of senior researchers quit soon after, and CTO Mira Murati became the interim CEO.

Saturday
Murati made attempts to hire Altman and Brockman back, while the board was simultaneously looking for its own successor to Altman. Altman and OpenAI staffers pressured the board to quit and demanded that Altman be reinstated, giving the board a deadline, which was not met.

Sunday night
Microsoft announced it had hired Altman and Brockman to lead its new AI research team. Soon after that, OpenAI announced it had hired Emmett Shear, the former CEO of the streaming company Twitch, as its CEO.

Monday morning
Over 500 OpenAI employees have signed a letter threatening to quit and join Altman at Microsoft unless OpenAI's board steps down. Bizarrely, Sutskever also signed the letter, and posted on X that he "deeply regrets" participating in the board's actions.

What's next for OpenAI

Two weeks ago, at OpenAI's first DevDay, Altman interrupted his presentation of an AI cornucopia to ask the whooping audience to calm down. "There's a lot-you don't have to clap each time," he said, grinning wide.

OpenAI is now a very different company from the one we saw at DevDay. With Altman and Brockman gone, a number of senior OpenAI employees chose to resign in support. Many others, including Murati, soon took to social media to post "OpenAI is nothing without its people." Especially given the threat of a mass exodus to Microsoft, expect more upheaval before things settle.

Tension between Sutskever and Altman may have been brewing for some time. "When you have an organization like OpenAI that's moving at a fast pace and pursuing ambitious goals, tension is inevitable," Sutskever told MIT Technology Review in September (comments that have not previously been published). "I view any tension between product and research as a catalyst for advancing us, because I believe that product wins are intertwined with research success." Yet it is now clear that Sutskever disagreed with OpenAI leadership about how product wins and research success should be balanced.

New interim CEO Shear, who cofounded Twitch, appears to be a world away from Altman when it comes to the pace of AI development. "I specifically say I'm in favor of slowing down, which is sort of like pausing except it's slowing down," he posted on X in September. "If we're at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead."

It's possible that an OpenAI led by Shear will double down on its original lofty mission to build (in Sutskever's words) "AGI that benefits humanity," whatever that means in practice. In the short term, OpenAI may slow down or even switch off its product pipeline.

This tension between trying to launch products quickly and slowing down development to ensure they are safe has vexed OpenAI from the very beginning. It was the reason key players in the company decided to leave OpenAI and start the competing AI safety startup Anthropic.

With Altman and his camp gone, the firm could pivot more toward Sutskever's work on what he calls superalignment, a research project that aims to come up with ways to control a hypothetical superintelligence (future technology that Sutskever speculates will outmatch humans in almost every way). "I'm doing it for my own self-interest," Sutskever told us. "It's obviously important that any superintelligence anyone builds does not go rogue. Obviously."

Shear's public comments suggest he is exactly the kind of cautious leader who would heed Sutskever's concerns. As Shear also posted on X: "The way you make it safely through a dangerous jungle at night is not to sprint forward at full speed, nor to refuse to proceed forward. You poke your way forward, carefully."

With the company orienting itself even more toward tech that does not yet-and may never-exist, will it continue to lead the field? Sutskever thought so. He said there were enough good ideas in play for others at the company to continue pushing the envelope of what's possible with generative AI. "Over the years, we've cultivated a robust research organization that's delivering the latest advancements in AI," he told us. "We have unbelievably good people in the company, and I trust them it's going to work out."

Of course, that was what he said in September. With top talent now jumping ship, OpenAI's future is far less certain than it was.

What next for Microsoft?

The tech giant and its CEO, Satya Nadella, seem to have emerged from the crisis as the winners. With Altman, Brockman, and likely many more top people from OpenAI joining its ranks (or even the majority of the company, if today's open letter from 500 OpenAI employees is to be believed), Microsoft has managed to further concentrate its power in AI. The company has the most to gain from embedding generative AI into its less sexy but very profitable productivity and developer tools.

The big question remains how necessary Microsoft will deem its expensive partnership with OpenAI to create cutting-edge tech in the first place. In a post on X announcing how "extremely excited" he was to have hired Altman and Brockman, Nadella said his company "remains committed" to OpenAI and its product road map.

But let's be real. In an exclusive interview with MIT Technology Review, Nadella called the two companies "codependent." "They depend on us to build the best systems; we depend on them to build the best models, and we go to market together," Nadella told our editor in chief, Mat Honan, last week. If OpenAI's leadership roulette and talent exodus slow down its product pipeline, or lead to AI models less impressive than those Microsoft can build itself, the company will have zero problems ditching the startup.

What next for AI?

Nobody outside the inner circle of Sutskever and the OpenAI board saw this coming-not Microsoft, not other investors, not the tech community as a whole. It has rocked the industry, says Amir Ghavi, a lawyer at the firm Fried Frank, which represents a number of generative AI companies, including Stability AI: "As a friend in the industry said, 'I definitely didn't have this on my bingo card.'"

It remains to be seen whether Altman and Brockman make something new at Microsoft or leave to start a new company themselves down the line. The pair are two of the best-connected people in VC funding circles, and Altman, especially, is seen by many as one of the best CEOs in the industry. They will have big names with deep pockets lining up to support whatever they want to do next. Who the money comes from could shape the future of AI. Ghavi suggests that potential backers could be anyone from Mohammed bin Salman to Jeff Bezos.

The bigger takeaway is that OpenAI's crisis points to a wider rift emerging in the industry as a whole, between "AI safety" folk who believe that unchecked progress could one day prove catastrophic for humans and those who find such "doomer" talk a ridiculous distraction from the real-world risks of any technological revolution, such as economic upheaval, harmful biases, and misuse.

This year has seen a race to put powerful AI tools into everyone's hands, with tech giants like Microsoft and Google competing to use the technology for everything from email to search to meeting summaries. But we're still waiting to see exactly what generative AI's killer app will be. If OpenAI's rift spreads to the wider industry and the pace of development slows down overall, we may have to wait a little longer.

Deeper Learning

Text-to-image AI models can be tricked into generating disturbing images

Speaking of unsafe AI ... Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images. A group of researchers managed to "jailbreak" both Stability AI's Stable Diffusion and OpenAI's DALL-E 2 to disregard their policies and create images of naked people, dismembered bodies, and other violent or sexual scenarios.

How they did it: A new jailbreaking method, dubbed "SneakyPrompt" by its creators from Johns Hopkins University and Duke University, uses reinforcement learning to create written prompts that look like garbled nonsense to us but that AI models learn to recognize as hidden requests for disturbing images. It essentially works by turning the way text-to-image AI models function against them.
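To make that mechanism concrete, here is a minimal, benign sketch of the underlying idea: searching for a nonsense string whose embedding lands close to a real word's in a text encoder's vector space, so the model treats the gibberish as a stand-in for that word. Everything below is hypothetical and simplified, not the researchers' code: the toy encoder stands in for a real model's text encoder (such as CLIP's), the target word is deliberately innocuous, and a brute-force loop replaces the reinforcement-learning search SneakyPrompt actually uses.

```python
import itertools
import math
import random

def toy_text_encoder(text: str) -> list[float]:
    # Stand-in for a real text encoder: deterministic pseudo-random
    # embedding so the example runs without any model weights.
    random.seed(text)  # same text always yields the same vector
    return [random.uniform(-1, 1) for _ in range(16)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Goal: find a nonsense string whose embedding sits near the target
# word's embedding, so the encoder "reads" the gibberish as that word.
target = toy_text_encoder("dog")  # benign stand-in target word

best, best_score = None, -1.0
for chars in itertools.product("abcxyz", repeat=4):  # tiny toy search space
    candidate = "".join(chars)
    score = cosine(toy_text_encoder(candidate), target)
    if score > best_score:
        best, best_score = candidate, score

print(f"nonsense stand-in for 'dog': {best!r} (cosine {best_score:.3f})")
```

The real attack doesn't enumerate candidates exhaustively like this; it uses the target system's responses as a reward signal to steer the search, which is what makes it practical against large vocabularies and live safety filters.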

Why this matters: That AI models can be prompted to "break out" of their guardrails is particularly worrying in the context of information warfare. They have already been exploited to produce fake content related to wars, such as the recent Israel-Hamas conflict. Read more from Rhiannon Williams here.

Bits and Bytes

Meta has split up its responsible AI team
Meta is reportedly getting rid of its responsible AI team and redeploying its employees to work on generative AI. But Meta uses AI in many ways beyond generative AI, such as recommending news and political content, so this raises questions about how it intends to mitigate AI harms in general. (The Information)

Google DeepMind wants to define what counts as artificial general intelligence
A team of Google DeepMind researchers has put out a paper that cuts through the cross talk with not just one new definition for AGI but a whole taxonomy of them. (MIT Technology Review)

This company is building AI for African languages
Most tools built by AI companies are woefully inadequate at recognizing African languages. Startup Lelapa wants to fix that. It's launched a new tool called Vulavula, which can identify four languages spoken in South Africa: isiZulu, Afrikaans, Sesotho, and English. Now the team is working to include other languages from across the continent. (MIT Technology Review)

Google DeepMind's weather AI can forecast extreme weather faster and more accurately
The model, GraphCast, can predict weather conditions up to 10 days in advance, more accurately and much faster than the current gold standard. (MIT Technology Review)

How Facebook went all in on AI
In an excerpt from Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets, journalist Jeff Horwitz reveals how the company came to rely on artificial intelligence-and the price it (and we) have ended up having to pay in the process. (MIT Technology Review)

Did Argentina just have the first AI election?
AI played a big role in the campaigns of the two men vying to be the country's next president. Both campaigns used generative AI to create images and videos to promote their candidate and attack each other. Javier Milei, a far-right outsider, won the election. Although it's hard to say how big a role AI played in his victory, the AI campaigns illustrate how much harder it will be to know what is real and what is not in upcoming elections elsewhere. (The New York Times)
