Researchers Say Current AI Watermarks Are Trivial To Remove
Researchers from the University of Maryland (UMD) were able to easily evade current methods of AI watermarking during testing, and found it even easier to add fake watermarks to images that weren't generated by AI. "But beyond testing how easy it is to evade watermarks, one UMD team notably developed a watermark that is near impossible to remove from content without completely compromising the intellectual property," reports Engadget. "This application makes it possible to detect when products are stolen." From the report:

In a similar collaborative research effort (PDF) between the University of California, Santa Barbara and Carnegie Mellon University, researchers found through simulated attacks that watermarks were easily removable. The paper identifies two distinct approaches for eliminating watermarks in these attacks: destructive and constructive. In a destructive attack, the bad actor treats the watermark as if it were part of the image: tweaking the brightness or contrast, applying JPEG compression, or even simply rotating the image can remove the watermark. The catch is that while these methods do get rid of the watermark, they also degrade the image quality noticeably. A constructive attack is a bit more subtle, relying on techniques like the good old Gaussian blur (sketched below). Although watermarking of AI-generated content needs to improve before it can survive simulated attacks like those featured in these studies, it's easy to envision a scenario where digital watermarking becomes a competitive race against hackers. Until a new standard is developed, we can only hope for the best from new tools like Google's SynthID, an identification tool for generative art, which will continue to be workshopped by developers until it hits the mainstream.

Further reading: Researchers Tested AI Watermarks -- and Broke All of Them
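For readers who want a concrete picture of the manipulations described above, here is a minimal, hypothetical sketch in Python using the Pillow imaging library. The transformations (brightness and contrast tweaks, a slight rotation, JPEG re-compression, and a Gaussian blur) mirror the destructive and constructive attacks the report names, but the specific parameter values, file names, and function names are illustrative assumptions, not taken from either research paper.

```python
# Illustrative sketch only (not from either paper): the kinds of image
# manipulations the report describes. All parameter values are arbitrary.
from io import BytesIO

from PIL import Image, ImageEnhance, ImageFilter


def destructive_attack(img: Image.Image) -> Image.Image:
    """Degrade the whole image (brightness, contrast, rotation, JPEG
    re-compression) so an embedded watermark is disrupted along with it."""
    out = ImageEnhance.Brightness(img).enhance(1.2)   # brighten ~20%
    out = ImageEnhance.Contrast(out).enhance(0.85)    # lower contrast slightly
    out = out.rotate(2, expand=True)                  # small rotation
    buf = BytesIO()
    out.convert("RGB").save(buf, format="JPEG", quality=60)  # lossy re-compression
    buf.seek(0)
    result = Image.open(buf)
    result.load()
    return result


def constructive_attack(img: Image.Image) -> Image.Image:
    """Smooth the image, treating the watermark signal as noise to blur away."""
    return img.filter(ImageFilter.GaussianBlur(radius=2))


if __name__ == "__main__":
    source = Image.open("watermarked.png")            # hypothetical input file
    destructive_attack(source).save("destructive.jpg")
    constructive_attack(source).save("constructive.png")
```

Whether any of these simple edits actually strips a given watermark depends entirely on the scheme being attacked; the point of the sketch is only to show how ordinary the tooling involved can be.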