Paper: Stable Diffusion “Memorizes” Some Images, Sparking Privacy Concerns
Freeman writes:
On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. The results challenge the view that image synthesis models do not memorize their training data and that training data might remain private if it is not disclosed. Even so, out of 300,000 high-probability images tested, the researchers found only a 0.03% memorization rate.
Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.
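The story doesn't spell out how the attack works, but the intuition behind detecting memorization is that when a diffusion model has memorized a training image, repeated generations from the same prompt tend to collapse onto near-identical outputs. The sketch below is a hypothetical toy illustration of that near-duplicate check, not the paper's actual procedure; the distance measure, threshold, and random "images" are assumptions made purely for demonstration.

```python
import numpy as np


def pairwise_near_duplicates(images, threshold=0.1):
    """Flag generations that are near-duplicates of one another.

    A cluster of almost-identical outputs for the same prompt is one
    signal that the model may be reproducing a memorized training image.
    `images` is an array of shape (n, d): flattened pixels in [0, 1].
    """
    n = len(images)
    flagged = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Mean squared distance between two generations.
            dist = np.mean((images[i] - images[j]) ** 2)
            if dist < threshold:
                flagged.update((i, j))
    return flagged


# Toy demo with synthetic data: two near-copies of the same sample are
# flagged, while an unrelated random image is not.
rng = np.random.default_rng(0)
base = rng.random(64 * 64 * 3)
samples = np.stack([
    base + rng.normal(0, 0.01, base.shape),  # near-copy
    base + rng.normal(0, 0.01, base.shape),  # near-copy
    rng.random(base.shape),                  # unrelated image
])
print(pairwise_near_duplicates(samples))  # {0, 1}
```

In practice the researchers worked at a much larger scale and with more careful duplicate criteria, which is how a rate as small as 0.03% of 300,000 candidates could still be measured reliably.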
Related:
Getty Images Targets AI Firm For 'Copying' Photos
Read more of this story at SoylentNews.