
Facebook AI Launches Its Deepfake Detection Challenge

by Eliza Strickland, from IEEE Spectrum

In September, Facebook sent out a strange casting call: We need all types of people to look into a webcam or phone camera and say very mundane things. The actors stood in bedrooms, hallways, and backyards, and they talked about topics such as the perils of junk food and the importance of arts education. It was a quick and easy gig, with an odd caveat: Facebook researchers would be altering the videos, extracting each person's face and fusing it onto another person's head. In other words, the participants had to agree to become deepfake characters.

Facebook's artificial intelligence (AI) division put out this casting call so it could ethically produce deepfakes, a term that originally referred to videos that had been modified using a certain face-swapping technique but is now a catchall for manipulated video. The Facebook videos are part of a training data set that the company assembled for a global competition called the Deepfake Detection Challenge. In this competition, produced in cooperation with Amazon, Microsoft, the nonprofit Partnership on AI, and academics from eight universities, researchers around the world are vying to create automated tools that can spot fraudulent media.

The competition launched today, with an announcement at the AI conference NeurIPS, and will accept entries through March 2020. Facebook has dedicated more than US $10 million to awards and grants.

Cristian Canton Ferrer helped organize the challenge as research manager for Facebook's AI Red Team, which analyzes the threats that AI poses to the social media giant. He says deepfakes are a growing danger not just to Facebook but to democratic societies. Manipulated videos that make politicians appear to do and say outrageous things could go viral before fact-checkers have a chance to step in.

"We're thinking about what will be happening a year from now. It's a cat-and-mouse approach." -Cristian Canton Ferrer, Facebook AI

While such a full-blown synthetic scandal has yet to occur, the Italian public recently got a taste of the possibilities. In September, a satirical news show aired a deepfake video featuring a former Italian prime minister apparently lavishing insults on other politicians. Most viewers realized it was a parody, but a few did not.

The 2020 U.S. presidential election is an added incentive to get ahead of the problem, says Canton Ferrer. He believes that media manipulation will become much more common over the coming year, and that deepfakes will get much more sophisticated and believable. "We're thinking about what will be happening a year from now," he says. "It's a cat-and-mouse approach." Canton Ferrer's team aims to give the cat a head start, so it will be ready to pounce.

The growing threat of deepfakes

Just how easy is it to make deepfakes? A recent audit of online resources for altering videos found that the available open-source software still requires a good amount of technical expertise. However, the audit also turned up apps and services that are making it easier for almost anyone to get in on the action. In China, a deepfake app called Zao took the country by storm in September when it offered people a simple way to superimpose their own faces onto those of actors like Leonardo DiCaprio and Marilyn Monroe.

It may seem odd that the data set compiled for Facebook's competition is filled with unknown people doing unremarkable things. But a deepfake detector that works on those mundane videos should work equally well for videos featuring politicians. To make the Facebook challenge as realistic as possible, Canton Ferrer says his team used the most common open-source techniques to alter the videos, but he won't name the methods, to avoid tipping off contestants. "In real life, they will not be able to ask the bad actors, 'Can you tell me what method you used to make this deepfake?'" he says.

In the current competition, detectors will be scanning for signs of facial manipulation. However, the Facebook team is keeping an eye on new and emerging attack methods, such as full-body swaps that change the appearance and actions of a person from head to toe. "There are some of those out there, but they're pretty obvious now," Canton Ferrer says. "As they get better, we'll add them to the data set." Even after the detection challenge concludes in March, he says, the Facebook team will keep working on the problem of deepfakes.

As for how the winning detection methods will be used and whether they'll be integrated into Facebook's operations, Canton Ferrer says those decisions aren't up to him. The Partnership on AI's steering committee on AI and media integrity, which is overseeing the competition, will decide on the next steps, he says. Claire Leibowicz, who leads that steering committee, says the group will consider "coordinated efforts" to fight back against the global challenge of synthetic and manipulated media.

DARPA's efforts on deepfake detection

The Facebook challenge is far from the only effort to counter deepfakes. DARPA's Media Forensics program launched in 2016, a year before the first deepfake videos surfaced on Reddit. Program manager Matt Turek says that as the technology took off, the researchers working under the program developed a number of detection technologies, generally looking for "digital integrity, physical integrity, or semantic integrity."

Digital integrity is defined by the patterns in an image's pixels that are invisible to the human eye. These patterns can arise from cameras and video processing software, and any inconsistencies that appear are a tip-off that a video has been altered. Physical integrity refers to the consistency in lighting, shadows, and other physical attributes in an image. Semantic integrity considers the broader context. If a video shows an outdoor scene, for example, a deepfake detector might check the time stamp and location to look up the weather report from that time and place. The best automated detector, Turek says, would "use all those techniques to produce a single integrity score that captures everything we know about a digital asset."

DARPA's Media Forensics program created deepfake detectors that look at digital, physical, and semantic integrity.
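To make that fusion step concrete, here is a minimal sketch of how several detectors' outputs could be combined into the kind of single integrity score Turek describes. It is purely illustrative: the detector names, weights, and 0-to-1 scoring convention are assumptions made for the example, not DARPA's actual implementation.

# Minimal sketch: fuse per-detector scores (1.0 = looks authentic,
# 0.0 = looks manipulated) into one weighted integrity score.
# Detector names and weights are hypothetical.

def fuse_integrity_scores(scores, weights):
    total_weight = sum(weights[name] for name in scores)
    weighted = sum(scores[name] * weights[name] for name in scores)
    return weighted / total_weight

scores = {
    "digital_integrity": 0.35,   # pixel-level inconsistencies found
    "physical_integrity": 0.60,  # lighting and shadows mostly consistent
    "semantic_integrity": 0.90,  # weather matches the claimed time and place
}
weights = {
    "digital_integrity": 0.5,
    "physical_integrity": 0.3,
    "semantic_integrity": 0.2,
}

print(f"integrity score: {fuse_integrity_scores(scores, weights):.2f}")

A real system would learn how to weight and combine its detectors rather than hard-coding values, but the output is the same kind of single score that summarizes everything known about a digital asset.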

Turek says his team has created a prototype Web portal (restricted to its government partners) to demonstrate a sampling of the detectors developed during the program. When the user uploads a piece of media via the Web portal, more than 20 detectors employ a range of different approaches to try to determine whether an image or video has been manipulated. Turek says his team continues to add detectors to the system, which is already better than humans at spotting fakes.

A successor to the Media Forensics program will launch in mid-2020: the Semantic Forensics program. This broader effort will cover all types of media (text, images, videos, and audio) and will go beyond simply detecting manipulation. It will also seek methods to understand the importance of the manipulations, which could help organizations decide which content requires human review. "If you manipulate a vacation photo by adding a beach ball, it really doesn't matter," Turek says. "But if you manipulate an image about a protest and add an object like a flag, that could change people's understanding of who was involved."

The Semantic Forensics program will also try to develop tools to determine if a piece of media really comes from the source it claims. Eventually, Turek says, he'd like to see the tech community embrace a system of watermarking, in which a digital signature would be embedded in the media itself to help with the authentication process. One big challenge of this idea is that every software tool that interacts with the image, video, or other piece of media would have to "respect that watermark, or add its own," Turek says. "It would take a long time for the ecosystem to support that."
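As a rough illustration of that signing idea (not a description of any proposed standard), the sketch below attaches a keyed signature to a media file's bytes and verifies it later. The key handling and function names are assumptions for the example, and unlike a real watermark, this toy signature would not survive re-encoding.

import hmac
import hashlib

SIGNING_KEY = b"tool-specific-secret"  # hypothetical key held by the editing tool

def sign_media(media_bytes):
    # Signature shipped alongside (or embedded in) the media.
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).digest()

def verify_media(media_bytes, signature):
    # True only if the media still matches the signature it claims to carry.
    expected = hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

original = b"...video bytes..."
sig = sign_media(original)
print(verify_media(original, sig))            # True: untouched since signing
print(verify_media(original + b"edit", sig))  # False: altered after signing

The ecosystem problem Turek raises is visible even in this sketch: every tool that re-saves the file would need to verify the old signature and add its own, or the chain of trust breaks.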

A deepfake detection tool for consumers

In the meantime, the AI Foundation has a plan. This nonprofit is building a tool called Reality Defender that's due to launch in early 2020. "It will become your personal AI guardian who's watching out for you," says Rob Meadows, president and chief technology officer for the foundation.

Reality Defender "will become your personal AI guardian who's watching out for you." -Rob Meadows, AI Foundation

Reality Defender is a plug-in for Web browsers and an app for mobile phones. It scans everything on the screen using a suite of automatic detectors, then alerts the user about altered media. Detection alone won't make for a useful tool, since Photoshop and other editing tools are widely used in fashion, advertising, and entertainment. If Reality Defender draws attention to every altered piece of content, Meadows notes, "it will flood consumers to the point where they say, 'We don't care anymore, we have to tune it out.'"

To avoid that problem, users will be able to dial the tool's sensitivity up or down, depending on how many alerts they want. Meadows says beta testers are currently training the system, giving it feedback on which types of manipulations they care about. Once Reality Defender launches, users will be able to personalize their AI guardian by giving it a thumbs-up or thumbs-down on alerts, until it learns their preferences. "A user can say, 'For my level of paranoia, this is what works for me,'" Meadows says.
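To make the sensitivity dial concrete, here is a minimal sketch of how an alert threshold with thumbs-up/thumbs-down feedback might work. Reality Defender's actual design is not public, so the class name, default threshold, and feedback step below are assumptions.

class AlertFilter:
    # Hypothetical user-tunable filter in the spirit described above.
    def __init__(self, sensitivity=0.7):
        # Higher sensitivity means a lower manipulation score triggers an alert.
        self.sensitivity = sensitivity

    def should_alert(self, manipulation_score):
        return manipulation_score >= 1.0 - self.sensitivity

    def feedback(self, useful, step=0.05):
        # Thumbs-up makes the filter more sensitive; thumbs-down relaxes it.
        delta = step if useful else -step
        self.sensitivity = min(1.0, max(0.0, self.sensitivity + delta))

f = AlertFilter()
print(f.should_alert(0.35))  # True: 0.35 clears the initial 0.30 threshold
f.feedback(useful=False)     # user dismisses that alert
f.feedback(useful=False)
print(f.should_alert(0.35))  # False: the threshold has risen to 0.40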

He sees the software as a useful stopgap solution, but ultimately he hopes that his group's technologies will be integrated into platforms such as Facebook, YouTube, and Twitter. He notes that Biz Stone, cofounder of Twitter, is a member of the AI Foundation's board. To truly protect society from fake media, Meadows says, we need tools that prevent falsehoods from getting hosted on platforms and spread via social media. Debunking them after they've already spread is too late.

The researchers at Jigsaw, a unit of Alphabet that works on technology solutions for global challenges, would tend to agree. Technical research manager Andrew Gully says his team identified synthetic media as a societal threat some years back. To contribute to the fight, Jigsaw teamed up with sister company Google AI to produce a deepfake data set of its own in late 2018, which they contributed to the FaceForensics data set hosted by the Technical University of Munich.

Gully notes that while we haven't yet seen a political crisis triggered by a deepfake, these videos are also used for bullying and "revenge porn," in which a targeted woman's face is pasted onto the face of an actor in a pornographic video. (While pornographic deepfakes could in theory target men, a recent audit of deepfake content found that 100 percent of the pornographic videos focused on women.) What's more, Gully says, people are more likely to believe videos featuring unknown individuals than those featuring famous politicians.

But it's the threat to free and fair elections that feels most crucial in this U.S. election year. Gully says systems that detect deepfakes must take a careful approach in communicating the results to users. "We know already how difficult it is to convince people in the face of their own biases," Gully says. "Detecting a deepfake video is hard enough, but that's easy compared to how difficult it is to convince people of things they don't want to believe."
