OpenAI Teases a New Generative Video Model Called Sora

by hubie (#6JV7X)

upstart writes:

OpenAI teases an amazing new generative video model called Sora [technologyreview.com]:

OpenAI has built a striking new generative video model called Sora that can take a short text description and turn it into a detailed, high-definition film clip up to a minute long.

Based on four sample videos that OpenAI shared with MIT Technology Review ahead of today's announcement, the San Francisco-based firm has pushed the envelope of what's possible with text-to-video generation (a hot new research direction that we flagged as a trend to watch in 2024).

"We think building models that can understand video, and understand all these very complex interactions of our world, is an important step for all future AI systems," says Tim Brooks, a scientist at OpenAI.

[...] Impressive as they are, the sample videos shown here were no doubt cherry-picked to show Sora at its best. Without more information, it is hard to know how representative they are of the model's typical output.

It may be some time before we find out. OpenAI's announcement of Sora today is a tech tease, and the company says it has no current plans to release it to the public. Instead, OpenAI will today begin sharing the model with third-party safety testers for the first time.

In particular, the firm is worried about the potential misuses [technologyreview.com] of fake but photorealistic video [technologyreview.com]. "We're being careful about deployment here and making sure we have all our bases covered before we put this in the hands of the general public," says Aditya Ramesh, a scientist at OpenAI, who created the firm's text-to-image model DALL-E [technologyreview.com].

But OpenAI is eyeing a product launch sometime in the future. As well as safety testers, the company is also sharing the model with a select group of video makers and artists to get feedback on how to make Sora as useful as possible to creative professionals. "The other goal is to show everyone what is on the horizon, to give a preview of what these models will be capable of," says Ramesh.

Read more of this story at SoylentNews.
