
Scientists Prove That Deepfake Detectors Can be Duped

by
Fnord666
from SoylentNews on (#5E83J)

upstart writes in with an IRC submission:

Scientists prove that deepfake detectors can be duped:

Universities, organizations and tech giants, such as Microsoft and Facebook, have been working on tools that can detect deepfakes in an effort to prevent their use for the spread of malicious media and misinformation. Deepfake detectors, however, can still be duped, a group of computer scientists from UC San Diego has warned. At the WACV 2021 computer vision conference, held online in January, the team showed how detection tools can be fooled by inserting inputs called "adversarial examples" into every video frame.

[...] The UC San Diego scientists found that by creating adversarial examples of the face and inserting them into every video frame, they were able to fool "state-of-the-art deepfake detectors." Further, the technique they developed works even on compressed videos and even when they did not have complete access to the detector model. A bad actor who developed the same technique could then create deepfakes that evade even the best detection tools.
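As a rough illustration of the idea (not the researchers' actual method), the sketch below applies an FGSM-style perturbation to every frame of a video so that a detector's "fake" score is pushed toward "real." The DeepfakeDetector class and the perturb_frames helper are hypothetical stand-ins, and a real attack on a black-box detector would have to estimate gradients rather than read them directly, as the paper's no-full-access setting implies.

```python
# Minimal sketch: per-frame FGSM-style perturbation against a placeholder
# deepfake detector. All names here are illustrative, not from the paper.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Placeholder detector: maps one frame to a single 'fake' logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.net(x)

def perturb_frames(detector, frames, eps=2 / 255):
    """Nudge every frame so the detector's 'fake' score moves toward 'real'."""
    detector.eval()
    adv_frames = []
    for frame in frames:                       # frame: (3, H, W), values in [0, 1]
        x = frame.unsqueeze(0).clone().requires_grad_(True)
        logit = detector(x)
        # Loss toward the target label "real" (0); its gradient tells us
        # which pixel changes lower the fake score.
        loss = nn.functional.binary_cross_entropy_with_logits(
            logit, torch.zeros_like(logit))
        loss.backward()
        adv = (x - eps * x.grad.sign()).clamp(0, 1).detach()
        adv_frames.append(adv.squeeze(0))
    return torch.stack(adv_frames)

if __name__ == "__main__":
    detector = DeepfakeDetector()
    video = torch.rand(16, 3, 64, 64)          # 16 toy frames
    adv_video = perturb_frames(detector, video)
    print(adv_video.shape)                      # torch.Size([16, 3, 64, 64])
```

Because the perturbation is bounded by eps, the altered frames remain visually indistinguishable from the originals even though the detector's output changes.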

So, how can developers create detectors that can't be duped? The scientists recommend adversarial training, in which an adaptive adversary keeps generating deepfakes that can bypass the detector while it is being trained, so that the detector keeps improving at spotting inauthentic images.
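The sketch below illustrates that adversarial-training loop under the same assumptions, reusing the hypothetical perturb_frames helper and DeepfakeDetector placeholder from the previous example; it is a toy loop on random tensors, not the scientists' actual training procedure. Each step, the adversary re-perturbs the fake frames against the current detector, and the detector is then updated to label both clean and perturbed fakes as fake.

```python
# Minimal adversarial-training sketch, assuming DeepfakeDetector and
# perturb_frames from the previous snippet. Data is mocked with random tensors.
import torch
import torch.nn as nn

def adversarial_training_step(detector, optimizer, real_frames, fake_frames,
                              eps=2 / 255):
    # 1. Adversary's move: perturb the fakes so the current detector is
    #    more likely to call them real.
    adv_fake = perturb_frames(detector, fake_frames, eps=eps)

    # 2. Detector's move: learn to label real frames 0 and both the clean
    #    and the perturbed fake frames 1.
    detector.train()
    x = torch.cat([real_frames, fake_frames, adv_fake])
    y = torch.cat([
        torch.zeros(len(real_frames), 1),
        torch.ones(len(fake_frames), 1),
        torch.ones(len(adv_fake), 1),
    ])
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(detector(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    detector = DeepfakeDetector()
    optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
    for step in range(3):                       # toy loop with random "frames"
        real = torch.rand(8, 3, 64, 64)
        fake = torch.rand(8, 3, 64, 64)
        print(step, adversarial_training_step(detector, optimizer, real, fake))
```

Because the perturbations are regenerated against the current model at every step, the detector is always training against the strongest attack the adversary can mount at that moment, which is the point of the adaptive setup the researchers describe.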

Original Submission

