
Researchers Develop Tools to Prevent Unauthorized AI Intrusion

by Krishi Chowdhary, from Techreport (#6DVC8)

Glaze, a tool developed by scientists at the University of Chicago, brings some relief to artists reeling from the impact of AI. Modern generative AI can produce art within seconds from only a few prompts.

For example, you can ask an AI to generate a Van Gogh-style painting, and it'll produce one within minutes. This puts the work of thousands of artists at risk of being stolen and used without consent.

The Glaze prototype was first released in March and has since been downloaded more than a million times.

Glaze protects original artwork by wrapping it in an invisible layer of protection generated by machine learning algorithms. For example, if an artist runs an oil painting through Glaze, AI models won't recognize the work in its original form.

AI would perceive it as something entirely different, say a charcoal drawing. This prevents AI models from replicating the original artwork.
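The general idea behind such style cloaks comes from adversarial machine learning: change each pixel so little that a human viewer notices nothing, while the accumulated change is enough to shift what a model "sees." The sketch below illustrates only the imperceptibility budget with random noise; Glaze's actual algorithm optimizes the perturbation against a feature extractor, which this toy does not do.

```python
import numpy as np

def cloak(image, epsilon=4.0):
    """Add a small perturbation, clipped so no pixel moves more
    than `epsilon` (out of 255) -- invisible to a human viewer.
    Real cloaking tools optimize this perturbation to shift the
    image's model-level features; random noise here merely
    illustrates the per-pixel bound."""
    noise = np.random.uniform(-epsilon, epsilon, image.shape)
    return np.clip(image + noise, 0, 255)

image = np.random.randint(0, 256, (64, 64, 3)).astype(float)
cloaked = cloak(image)
# Per-pixel change stays within the imperceptibility budget:
assert np.abs(cloaked - image).max() <= 4.0
```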

Jon Lam, a California-based artist, has been using the tool since its release. Lam says that he, like many artists, has proudly uploaded his original artwork to various social media platforms.

None of them anticipated that those works would be copied by AI and used to put them out of work. Recently, Pope Francis also stressed the threat of AI eating into millions of jobs.

"Entire, multiple, human creative industries are under threat to be replaced by automated machines." - Jon Lam

Eveline Frohlich is another artist relieved by the introduction of Glaze. She described how helpless she felt when AI used her original artwork without any prior consent.

"It was just like, this is mine now. It's on the internet; I'm going to get to use it, which is ridiculous." - Eveline Frohlich

Photoguard - Another Ray of Hope

A similar tool named Photoguard was released by Hadi Salman, a researcher at MIT, and his team. It works much like Glaze, adding an invisible immunization layer over photos that AI models can't read.

The tool adjusts an image's pixels in a way that's unnoticeable to the human eye, but this small change is enough to prevent AI models from editing the picture.

Thus, if anyone tries to edit those images with an AI model, the results come out unusable. Photoguard aims to curb the spread of deepfake AI images, including deepfake porn.

Photoguard is still in development, and there may be ways to circumvent it, but it brings hope to the fight against the growing misuse of AI.

With these tools being developed, authors and artists hope that they'll finally be able to do something about AI's unchecked intrusions.

