Microsoft Unveils Deepfake Tech That's Too Good To Release
Arthur T Knackerbracket has processed the following story:
Microsoft this week demoed VASA-1, a framework for creating videos of people talking from a still image, audio sample, and text script, and claims, rightly, that it's too dangerous to be released to the public.
These AI-generated videos, in which people can be convincingly animated to speak scripted words in a cloned voice, are just the sort of thing the US Federal Trade Commission warned about last month, after previously proposing a rule to prevent AI technology from being used for impersonation fraud.
Microsoft's team acknowledges as much in its announcement, which explains that the technology is not being released due to ethical considerations. The researchers insist they're presenting work on generating virtual interactive characters, not on impersonating anyone. As such, there's no product or API planned.
"Our research focuses on generating visual affective skills for virtual AI avatars, aiming for positive applications," the Redmond boffins state. "It is not intended to create content that is used to mislead or deceive.
"However, like other related content generation techniques, it could still potentially be misused for impersonating humans. We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection."
Kevin Surace, Chair of Token, a biometric authentication biz, and a frequent speaker on generative AI, told The Register in an email that while there have been prior demonstrations of faces animated from a still frame and a cloned voice file, Microsoft's demonstration reflects the state of the art.
"The implications for personalizing emails and other business mass communication is fabulous," he opined. "Even animating older pictures as well. To some extent this is just fun and to another it has solid business applications we will all use in the coming months and years."
Read more of this story at SoylentNews.