Article 6FV2Q A clever shield against photo fakery


by
Melissa Heikkilä
from MIT Technology Review on (#6FV2Q)

Remember that selfie you posted last week? There's currently nothing stopping someone from taking it and editing it with AI, and it might be impossible to prove that the resulting image is fake.

The good news is that a new tool created by researchers at MIT could prevent this.

The tool, called PhotoGuard, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been "immunized" by PhotoGuard, the result will look unrealistic or warped.

"Right now, anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us," says Hadi Salman, a PhD student at MIT who contributed to the research. "PhotoGuard is an attempt to solve the problem of our images being manipulated maliciously by these models," says Salman. The tool could, for example, help prevent women's selfies from being made into nonconsensual deepfake pornography.

The MIT team used two different techniques to stop images from being edited using Stable Diffusion. In the first, PhotoGuard adds imperceptible signals to the image so that the AI model interprets it as something else, such as a block of pure gray. In the second, it disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they're processed by the model, so any edited image looks like that gray block. For now, the technique works reliably only on Stable Diffusion, an open-source image generation model.
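The first technique can be sketched in code. What follows is a minimal toy illustration, not PhotoGuard itself: it assumes a stand-in linear "encoder" in place of Stable Diffusion's real latent encoder, and uses signed-gradient steps, clipped to a small pixel budget, to nudge an image so the encoder maps it toward the embedding of a flat gray block. The names `toy_encoder` and `immunize` are invented for this sketch.

```python
import numpy as np

def toy_encoder(img, W):
    # Stand-in for a diffusion model's latent encoder (assumption:
    # a fixed linear map; the real encoder is a deep network).
    return W @ img.flatten()

def immunize(img, W, target, eps=0.05, step=0.01, iters=100):
    """PGD-style perturbation: within an imperceptible L-infinity
    budget `eps`, push the image so the encoder maps it toward
    `target` (here, the embedding of a pure-gray block)."""
    x = img.copy()
    for _ in range(iters):
        z = toy_encoder(x, W)
        # Gradient of 0.5 * ||z - target||^2 w.r.t. the image pixels
        # (closed form because the toy encoder is linear).
        grad = (W.T @ (z - target)).reshape(img.shape)
        x = x - step * np.sign(grad)          # signed descent step
        x = np.clip(x, img - eps, img + eps)  # stay imperceptibly close
        x = np.clip(x, 0.0, 1.0)              # keep valid pixel values
    return x

rng = np.random.default_rng(0)
img = rng.random((8, 8))                      # toy 8x8 "photo"
W = rng.standard_normal((4, 64)) / 8.0        # toy encoder weights
gray = np.full((8, 8), 0.5)
target = toy_encoder(gray, W)                 # embedding of a gray block

shielded = immunize(img, W, target)
before = np.linalg.norm(toy_encoder(img, W) - target)
after = np.linalg.norm(toy_encoder(shielded, W) - target)
```

After immunization, the shielded image's latent sits closer to the gray block's embedding than the original's did, even though no pixel has moved by more than `eps`; a real attack would backpropagate the same loss through Stable Diffusion's encoder instead.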

In theory, people could apply this protective shield to their images before they upload them online, says Aleksander Madry, SM '09, PhD '11, a professor of electrical engineering and computer science who contributed to the research. But a more effective approach, he adds, would be for tech companies to add it automatically to images that people upload to their platforms, though it's an arms race, because new AI models that might be able to override any new protections are always coming out.
