Article 73H56 ByteDance’s next-gen AI model can generate clips based on text, images, audio, and video
by
Emma Roth
from The Verge on (#73H56)
ByteDance says its new AI video model can more accurately follow prompts. | Image: ByteDance

Big Tech's race to leapfrog the latest AI models continues with the launch of ByteDance's next-gen video generator. In a blog post, ByteDance - the China-based company behind TikTok - says Seedance 2.0 supports prompts that combine text, images, video, and audio.

The company claims the model "delivers a substantial leap in generation quality," offering improvements in generating complex scenes with multiple subjects and in following instructions. Users can refine their text prompts by feeding Seedance 2.0 up to nine images, three video clips, and three audio clips.

The model can generate up to 15-second clips with audio, while taking cam ...

Read the full story at The Verge.

External Content
Source RSS or Atom Feed
Feed Location http://www.theverge.com/rss/index.xml
Feed Title The Verge
Feed Link https://www.theverge.com/