Sick of AI Engines Scraping Your Pics for Facial Recognition? Here's a Way to Fawkes Them Right Up
chromas writes:
Researchers at the University of Chicago's SAND Lab have developed a technique for tweaking photos of people so that they sabotage facial-recognition systems.
The project, named Fawkes after the mask in the V for Vendetta graphic novel and film depicting Guy Fawkes, the would-be assassin behind the failed 1605 Gunpowder Plot, is described in a paper scheduled for presentation in August at the USENIX Security Symposium 2020.
[...] "Our distortion or 'cloaking' algorithm takes the user's photos and computes minimal perturbations that shift them significantly in the feature space of a facial recognition model (using real or synthetic images of a third party as a landmark)," the researchers explain in their paper. "Any facial recognition model trained using these images of the user learns an altered set of 'features' of what makes them look like them."
The boffins claim their pixel-scrambling scheme provides greater than 95 per cent protection, regardless of whether facial recognition systems get trained via transfer learning or from scratch. They also say it provides about 80 per cent protection when clean, "uncloaked" images leak and get added to the training mix alongside altered snapshots.
They claim 100 per cent success at avoiding facial recognition matches using Microsoft's Azure Face API, Amazon Rekognition, and Face++. Their tests involve cloaking a set of face photos and providing them as training data, then running uncloaked test images of the same person against the mistrained model.
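That evaluation protocol — enroll only cloaked photos, then probe with a clean one — can be shown end to end with a deliberately tiny, deterministic example. Everything here is hypothetical: the identity-function "extractor", the two-pixel images, the names, and the nearest-neighbor matcher standing in for a commercial API.

```python
import numpy as np

def features(x):
    # Identity "feature extractor" keeps the toy example deterministic;
    # a real system would use a deep face-embedding network.
    return np.asarray(x, dtype=float)

def cloak(x, landmark, eps=0.3, lr=0.5, steps=50):
    # Projected gradient descent toward the landmark's features,
    # with per-pixel changes capped at eps.
    delta = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        delta -= lr * (features(x + delta) - features(landmark))
        delta = np.clip(delta, -eps, eps)
    return x + delta

alice_clean = np.array([0.0, 0.0])   # hypothetical user photo (two "pixels")
landmark    = np.array([1.0, 1.0])   # third-party landmark image
bob         = np.array([0.2, 0.2])   # another enrolled identity

# The scraper only ever sees Alice's cloaked photo, so that is what it enrolls.
gallery = {"alice": features(cloak(alice_clean, landmark)),
           "bob":   features(bob)}

# At match time, a clean, uncloaked photo of Alice is the probe.
probe = features(alice_clean)
match = min(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))
print(match)   # prints "bob" -- the mistrained gallery no longer matches Alice
```

The cloaked enrollment sits closer to the landmark than to Alice's real appearance, so the nearest enrolled identity for her clean probe is someone else entirely.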
- Research paper (pdf)
- GitHub page (Python)
- Project page (unencrypted, heh)
Read more of this story at SoylentNews.