Epic’s new motion-capture animation tech has to be seen to be believed

by Kyle Orland, Ars Technica
[Image] Would you believe that creating this performance took only minutes of video processing and no human tweaking? (credit: Ninja Theory / Epic)

SAN FRANCISCO -- Every year at the Game Developers Conference, a handful of competing companies show off their latest motion-capture technology, which transforms human performances into 3D animations that can be used on in-game models. Usually, these technical demonstrations involve a lot of specialized hardware for the performance capture and a good deal of computer processing and manual artist tweaking to get the resulting data into a game-ready state.

Epic's upcoming MetaHuman facial animation tool looks set to revolutionize that kind of labor- and time-intensive workflow. In an impressive demonstration at Wednesday's State of Unreal stage presentation, Epic showed off the new machine-learning-powered system, which needed just a few minutes to generate impressively real, uncanny-valley-leaping facial animation from a simple head-on video taken on an iPhone.
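
Epic hasn't published the internals of the system, but the general shape of video-driven facial animation is well understood: detect facial landmarks in each frame of the performance, then map their motion onto the animation controls of a rigged character. The sketch below is a minimal, hypothetical illustration of that idea using OpenCV and MediaPipe's FaceMesh (my assumptions, not tools Epic has said it uses); it extracts a single per-frame "jaw open" value, the kind of signal a far more sophisticated ML model would produce for hundreds of facial controls at once.

```python
# Hypothetical sketch: derive a simple per-frame "jaw open" animation value
# from a head-on video, in the spirit of video-driven facial capture.
# This is NOT Epic's MetaHuman pipeline; it only illustrates the general idea.

import cv2                      # pip install opencv-python
import mediapipe as mp          # pip install mediapipe

# Approximate FaceMesh landmark indices (inner lips, outer eye corners).
UPPER_LIP, LOWER_LIP = 13, 14
LEFT_EYE_OUTER, RIGHT_EYE_OUTER = 33, 263

def jaw_open_curve(video_path: str) -> list[float]:
    """Return one normalized jaw-open value per frame of the video."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False, max_num_faces=1, refine_landmarks=True)
    capture = cv2.VideoCapture(video_path)
    curve = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            curve.append(0.0)           # no face detected in this frame
            continue
        pts = result.multi_face_landmarks[0].landmark
        # Lip gap, normalized by inter-ocular distance so head scale doesn't matter.
        lip_gap = abs(pts[LOWER_LIP].y - pts[UPPER_LIP].y)
        eye_span = abs(pts[RIGHT_EYE_OUTER].x - pts[LEFT_EYE_OUTER].x)
        curve.append(min(1.0, lip_gap / (eye_span + 1e-6)))
    capture.release()
    face_mesh.close()
    return curve

# Example: the values in `curve` could drive a hypothetical "jawOpen"
# blendshape on a character rig, one keyframe per video frame.
# curve = jaw_open_curve("performance.mp4")
```

The gap between a toy landmark-tracking script like this and what Epic demonstrated, where minutes of unprocessed iPhone footage yielded production-quality animation with no manual cleanup, is exactly what made the presentation so striking.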

The potential to get quick, high-end results from that kind of basic input "has literally changed how [testers] work or the kind of work they can take on," Epic VP of Digital Humans Technology Vladimir Mastilovic said in a panel discussion Wednesday afternoon.

