OpenAI teases an amazing new generative video model called Sora
OpenAI has built a striking new generative video model called Sora that can take a short text description and turn it into a detailed, high-definition film clip up to a minute long.
Based on four sample videos that OpenAI shared with MIT Technology Review ahead of today's announcement, the San Francisco-based firm has pushed the envelope of what's possible with text-to-video generation (a hot new research direction that we flagged as a trend to watch in 2024).
"We think building models that can understand video, and understand all these very complex interactions of our world, is an important step for all future AI systems," says Tim Brooks, a scientist at OpenAI.
But there's a disclaimer. OpenAI gave us a preview of Sora (which means sky in Japanese) under conditions of strict secrecy. In an unusual move, the firm would only share information about Sora if we agreed to wait until after news of the model was made public to seek the opinions of outside experts. [Editor's note: We've updated this story with outside comment below.] OpenAI has not yet released a technical report or demonstrated the model actually working. And it says it won't be releasing Sora anytime soon. [Update: OpenAI has now shared more technical details on its website.]
PROMPT: Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. The art style is 3D and realistic, with a focus on lighting and texture. the mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with wide eyes and open mouth. Its pose and expression convey a sense of innocence and playfulness, as if it is exploring the world around it for the first time. The use of warm colors and dramatic lighting further enhances the cozy atmosphere of the image. (Credit: OpenAI)

PROMPT: A gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures (Credit: OpenAI)

The first generative models that could produce video from snippets of text appeared in late 2022. But early examples from Meta, Google, and a startup called Runway were glitchy and grainy. Since then, the tech has been getting better fast. Runway's Gen-2 model, released last year, can produce short clips that come close to matching big-studio animation in their quality. But most of these examples are still only a few seconds long.
The sample videos from OpenAI's Sora are high-definition and full of detail. OpenAI also says it can generate videos up to a minute long. One video of a Tokyo street scene shows that Sora has learned how objects fit together in 3D: the camera swoops into the scene to follow a couple as they walk past a row of shops.
OpenAI also claims that Sora handles occlusion well. One problem with existing models is that they can fail to keep track of objects when they drop out of view. For example, if a truck passes in front of a street sign, the sign might not reappear afterward.
In a video of a papercraft underwater scene, Sora has added what look like cuts between different pieces of footage, and the model has maintained a consistent style between them.
It's not perfect. In the Tokyo video, cars to the left look smaller than the people walking beside them. They also pop in and out between the tree branches. "There's definitely some work to be done in terms of long-term coherence," says Brooks. "For example, if someone goes out of view for a long time, they won't come back. The model kind of forgets that they were supposed to be there."
Tech tease

Impressive as they are, the sample videos shown here were no doubt cherry-picked to show Sora at its best. Without more information, it is hard to know how representative they are of the model's typical output.
It may be some time before we find out. OpenAI's announcement of Sora today is a tech tease, and the company says it has no current plans to release it to the public. Instead, OpenAI will today begin sharing the model with third-party safety testers for the first time.
In particular, the firm is worried about the potential misuses of fake but photorealistic video. "We're being careful about deployment here and making sure we have all our bases covered before we put this in the hands of the general public," says Aditya Ramesh, a scientist at OpenAI, who created the firm's text-to-image model DALL-E.
But OpenAI is eyeing a product launch sometime in the future. As well as safety testers, the company is also sharing the model with a select group of video makers and artists to get feedback on how to make Sora as useful as possible to creative professionals. "The other goal is to show everyone what is on the horizon, to give a preview of what these models will be capable of," says Ramesh.
To build Sora, the team adapted the tech behind DALL-E 3, the latest version of OpenAI's flagship text-to-image model. Like most text-to-image models, DALL-E 3 uses what's known as a diffusion model. These are trained to turn a fuzz of random pixels into a picture.
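OpenAI has not published Sora's code, but the basic idea behind diffusion models can be sketched in a few lines. The example below is a heavily simplified illustration, not OpenAI's implementation: the denoiser, the update rule, and the sizes are all placeholders. The point is only that generation starts from random noise and repeatedly strips away the noise the model predicts until a picture remains.

```python
import torch

def reverse_diffusion(denoiser, steps=50, shape=(1, 3, 64, 64)):
    """Start from pure noise and repeatedly nudge it toward a clean image."""
    x = torch.randn(shape)                # "a fuzz of random pixels"
    for t in reversed(range(steps)):
        predicted_noise = denoiser(x, t)  # the trained network guesses the noise still present
        x = x - predicted_noise / steps   # greatly simplified update; real samplers follow a noise schedule
    return x

# Toy stand-in denoiser so the sketch runs end to end (a real one is a trained neural network).
image = reverse_diffusion(lambda x, t: 0.1 * x)
print(image.shape)  # torch.Size([1, 3, 64, 64])
```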
Sora takes this approach and applies it to videos rather than still images. But the researchers also added another technique to the mix. Unlike DALL-E or most other generative video models, Sora combines its diffusion model with a type of neural network called a transformer.
Transformers are great at processing long sequences of data, like words. That has made them the special sauce inside large language models like OpenAI's GPT-4 and Google DeepMind's Gemini. But videos are not made of words. Instead, the researchers had to find a way to cut videos into chunks that could be treated as if they were. The approach they came up with was to dice videos up across both space and time. "It's like if you were to have a stack of all the video frames and you cut little cubes from it," says Brooks.
The transformer inside Sora can then process these chunks of video data in much the same way that the transformer inside a large language model processes words in a block of text. The researchers say that this let them train Sora on many more types of video than other text-to-video models, varied in terms of resolution, duration, aspect ratio, and orientation. "It really helps the model," says Brooks. "That is something that we're not aware of any existing work on."
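OpenAI has shared only this high-level description, but the "little cubes" idea can be illustrated with a short sketch. The tensor sizes, patch dimensions, and cutting scheme below are assumptions chosen for illustration, not Sora's actual code: a stack of frames is diced along time, height, and width, and each cube is flattened into one token-like vector.

```python
import torch

video = torch.randn(16, 3, 128, 128)   # (frames, channels, height, width) - made-up sizes
pt, ph, pw = 4, 16, 16                  # cube size along time, height, and width

# Slice the frame stack into cubes, then gather each cube's pixels into one row.
patches = (
    video
    .unfold(0, pt, pt)                  # cut along time
    .unfold(2, ph, ph)                  # cut along height
    .unfold(3, pw, pw)                  # cut along width
)
tokens = (
    patches
    .permute(0, 2, 3, 1, 4, 5, 6)       # group by (time block, height block, width block)
    .reshape(-1, 3 * pt * ph * pw)      # one flat vector per spacetime cube
)
print(tokens.shape)                     # torch.Size([256, 3072])
```

Each row of `tokens` plays roughly the role a word token plays in a language model: a transformer can then attend across all the cubes of a video at once.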
PROMPT: Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field (Credit: OpenAI)

PROMPT: Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes. (Credit: OpenAI)
"From a technical perspective it seems like a very significant leap forward," says Sam Gregory, executive director at Witness, a human rights organization that specializes in the use and misuse of video technology. "But there are two sides to the coin," he says. "The expressive capabilities offer the potential for many more people to be storytellers using video. And there are also real potential avenues for misuse."
OpenAI is well aware of the risks that come with a generative video model. We are already seeing the large-scale misuse of deepfake images. Photorealistic video takes this to another level.
Gregory notes that you could use technology like this to misinform people about conflict zones or protests. The range of styles is also interesting, he says. If you could generate shaky footage that looked like something shot with a phone, it would come across as more authentic.
The tech is not there yet, but generative video has gone from zero to Sora in just 18 months. "We're going to be entering a universe where there will be fully synthetic content, human-generated content and a mix of the two," says Gregory.
The OpenAI team plans to draw on the safety testing it did last year for DALL-E 3. Sora already includes a filter that runs on all prompts sent to the model that will block requests for violent, sexual, or hateful images, as well as images of known people. Another filter will look at frames of generated videos and block material that violates OpenAI's safety policies.
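OpenAI has not said how these filters are built. As a loose illustration only, a screening step in front of a generative model could look like the snippet below, which uses OpenAI's public moderation endpoint; treating that endpoint as Sora's filter is an assumption for the sake of the example, not a confirmed detail of Sora's pipeline.

```python
# Illustrative sketch of a pre-generation prompt check; not OpenAI's actual Sora filter.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt as violating policy."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

if prompt_allowed("A papercraft coral reef full of colorful fish"):
    print("Prompt passed the screening step")
```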
OpenAI says it is also adapting a fake-image detector developed for DALL-E 3 to use with Sora. And the company will embed industry-standard C2PA tags, metadata that states how an image was generated, into all of Sora's output. But these steps are far from foolproof. Fake-image detectors are hit-or-miss. Metadata is easy to remove, and most social media sites strip it from uploaded images by default.
"We'll definitely need to get more feedback and learn more about the types of risks that need to be addressed with video before it would make sense for us to release this," says Ramesh.
Brooks agrees. "Part of the reason that we're talking about this research now is so that we can start getting the input that we need to do the work necessary to figure out how it could be safely deployed," he says.
Update 2/15: Comments from Sam Gregory were added.