Native Dubbing That Seamlessly Translates Video Content Into Various Languages Through Generative AI
Synthesia, a company focused on using artificial intelligence for good, has created ENACT, a native dubbing feature that uses generative AI to seamlessly translate video content into various languages. The program synchronizes the lip movements of an on-screen actor with those of a voice actor speaking a specific language, thereby removing language barriers and giving content creators far greater reach.
ENACT enables translation without the creative casualties of traditional dubbing or subtitling, resulting in a seamless experience for the viewer. Native dubbing is a new method of translating video content that uses AI to synchronize an actor's lip movements to a new dialogue track. "Our goal is to remove the language barrier from video and allow content from both YouTube influencers and high-end productions to reach a much larger audience than today."
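Very roughly, the process the article describes can be thought of as a three-stage pipeline: translate the original dialogue, generate a new voice track, then re-render the actor's lip movements to match it. The sketch below is a purely illustrative outline of that idea, not Synthesia's actual system; every function name and step is a hypothetical placeholder.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "native dubbing" pipeline, assuming three
# broad stages: translation, voice synthesis, and lip re-synchronization.
# These stubs only illustrate the flow; a real system would use
# machine-translation, text-to-speech, and generative video models.

@dataclass
class DubbingJob:
    source_video: str      # path to the original footage
    source_language: str
    target_language: str

def translate_dialogue(transcript: str, target_language: str) -> str:
    # Placeholder for a machine-translation step.
    return f"[{target_language} translation of: {transcript}]"

def synthesize_voice_track(translated_text: str) -> str:
    # Placeholder for recording a voice actor or running text-to-speech.
    return f"audio for '{translated_text}'"

def resync_lip_movements(video_path: str, new_audio: str) -> str:
    # Placeholder for the step the article highlights: a generative model
    # re-renders the actor's mouth region so the lips match the new track.
    return f"{video_path} with lips matched to {new_audio}"

def native_dub(job: DubbingJob, transcript: str) -> str:
    translated = translate_dialogue(transcript, job.target_language)
    new_audio = synthesize_voice_track(translated)
    return resync_lip_movements(job.source_video, new_audio)

if __name__ == "__main__":
    job = DubbingJob("interview.mp4", "en", "de")
    print(native_dub(job, "Welcome to the show."))
```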
Happy to announce that @synthesiaIO is moving out of stealth. We do exciting native movie dubbing with cutting edge deep learning technology, empowering storytellers with AI :) https://t.co/W48vxRLpHO https://t.co/KHzHfSWvO7
No more bad lip sync on TV! #deeplearning #vfx #AI pic.twitter.com/sxZPBK6OJP
- Matthias Niessner (@MattNiessner) November 15, 2018