
Why Big Tech’s watermarking plans are some welcome good news

by
Melissa Heikkilä
from MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This week I am happy to bring you some encouraging news from the world of AI. Following the depressing Taylor Swift deepfake porn scandal and the proliferation of political deepfakes, such as AI-generated robocalls of President Biden asking voters to stay home, tech companies are stepping up and putting measures in place to better detect AI-generated content.

On February 6, Meta said it was going to label AI-generated images on Facebook, Instagram, and Threads. When someone uses Meta's AI tools to create images, the company will add visible markers to the image, as well as invisible watermarks and metadata in the image file. The company says its standards are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.

Big Tech is also throwing its weight behind a promising technical standard that could add a "nutrition label" to images, video, and audio. Called C2PA, it's an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as "provenance" information. The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who (or what) created it. Read more about it here.
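To make the provenance idea concrete, here is a minimal sketch in Python of the general pattern: hash the content, wrap the hash in a manifest describing who or what created it, and sign the manifest so any tampering is detectable. This is only an illustration of the approach, with made-up field names and a symmetric demo key; it is not the real C2PA specification, which uses certificate-based signatures and a standardized manifest format.

    # Illustrative sketch of provenance signing in the spirit of C2PA.
    # NOT the real C2PA format; the field names here are hypothetical.
    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key"  # real systems use asymmetric keys and certificates

    def make_manifest(content: bytes, generator: str) -> dict:
        """Bind a claim about the content's origin to a hash of the content."""
        claim = {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,  # who or what created it
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    def verify_manifest(content: bytes, manifest: dict) -> bool:
        """Recompute the hash and signature; editing the content breaks both."""
        claim = {k: v for k, v in manifest.items() if k != "signature"}
        if claim.get("content_sha256") != hashlib.sha256(content).hexdigest():
            return False
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, manifest["signature"])

    image_bytes = b"...raw image data..."
    manifest = make_manifest(image_bytes, "example-image-model")
    print(verify_manifest(image_bytes, manifest))            # True
    print(verify_manifest(image_bytes + b"edit", manifest))  # False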

On February 8, Google announced it is joining other tech giants such as Microsoft and Adobe in the steering committee of C2PA and will include its watermark SynthID in all AI-generated images in its new Gemini tools. Meta says it is also participating in C2PA. Having an industry-wide standard makes it easier for companies to detect AI-generated content, no matter which system it was created with.

OpenAI, too, announced new content provenance measures last week. It says it will add watermarks to the metadata of images generated with ChatGPT and DALL-E 3, its image-making AI. OpenAI says it will now include a visible label in images to signal they have been created with AI.
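For a sense of what a metadata watermark amounts to in practice, the sketch below uses the Pillow imaging library to write an AI-generation notice into a PNG's metadata and read it back. The key names, label text, and file names are hypothetical; OpenAI's actual labels follow the C2PA metadata standard rather than ad hoc fields like these.

    # Minimal sketch: stamping an AI-provenance note into PNG metadata with Pillow.
    # The keys and values below are hypothetical, not any company's real schema.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.open("generated.png")   # hypothetical AI-generated image
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-image-model")
    img.save("generated_labeled.png", pnginfo=meta)

    # Reading the label back is easy, but a screenshot carries none of this.
    labeled = Image.open("generated_labeled.png")
    print(labeled.text)  # {'ai_generated': 'true', 'generator': ...}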

These methods are a promising start, but they're not foolproof. Watermarks in metadata are easy to circumvent by taking a screenshot of the image and simply using that, while visible labels can be cropped or edited out. There is perhaps more hope for invisible watermarks like Google's SynthID, which subtly changes the pixels of an image so that computer programs can detect the watermark but the human eye cannot. These are harder to tamper with. What's more, there aren't reliable ways to label and detect AI-generated video, audio, or even text.
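To see how an invisible watermark can live in the pixels themselves, consider this toy least-significant-bit scheme: it flips the lowest bit of each pixel value to encode a repeating pattern, a change far too small for the eye to notice but trivial for a program to check. This is a classroom illustration only; SynthID's actual method is not public, and unlike this sketch it is engineered to survive cropping, compression, and other edits.

    # Toy invisible watermark: hide a repeating bit pattern in pixel LSBs.
    # Illustrative only. NOT SynthID, and not robust to cropping or resaving.
    import numpy as np

    PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

    def embed(pixels: np.ndarray) -> np.ndarray:
        """Overwrite each pixel's least significant bit with the pattern."""
        flat = pixels.flatten()
        bits = np.resize(PATTERN, flat.shape)  # tile the pattern to image size
        return ((flat & 0xFE) | bits).reshape(pixels.shape)

    def detect(pixels: np.ndarray) -> bool:
        """Check whether the pixel LSBs match the expected pattern."""
        flat = pixels.flatten()
        bits = np.resize(PATTERN, flat.shape)
        return bool(np.array_equal(flat & 1, bits))

    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    marked = embed(image)
    assert detect(marked)       # the detector finds the mark
    assert not detect(image)    # an unmarked image fails the check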

But there is still value in creating these provenance tools. As Henry Ajder, a generative-AI expert, told me a couple of weeks ago when I interviewed him about how to prevent deepfake porn, the point is to create "a perverse customer journey." In other words, add barriers and friction to the deepfake pipeline in order to slow down the creation and sharing of harmful content as much as possible. A determined person will likely still be able to override these protections, but every little bit helps.

There are also many nontechnical fixes tech companies could introduce to prevent problems such as deepfake porn. Major cloud service providers and app stores, such as Google, Amazon, Microsoft, and Apple, could move to ban services that can be used to create nonconsensual deepfake nudes. And watermarks should be included in all AI-generated content across the board, even by smaller startups developing the technology.

What gives me hope is that alongside these voluntary measures we're starting to see binding regulations, such as the EU's AI Act and the Digital Services Act, which require tech companies to disclose AI-generated content and take down harmful content faster. There's also renewed interest among US lawmakers in passing some binding rules on deepfakes. And following AI-generated robocalls of President Biden telling voters not to vote, the US Federal Communications Commission announced last week that it was banning the use of AI in these calls.

In general I'm pretty skeptical about voluntary guidelines and rules, because there's no real accountability mechanism and companies can choose to change these rules whenever they want. The tech sector has a really bad track record for regulating itself. In the cutthroat, growth-driven tech world, things like responsible AI are often the first to face cuts.

But despite that, these announcements are extremely welcome. They're also much better than the status quo, which is next to nothing.

Deeper Learning

Google's Gemini is now in everything. Here's how you can try it out.

In the biggest mass-market AI launch yet, Google is rolling out Gemini, its family of large language models, across almost all its products, from Android to the iOS Google app to Gmail to Docs and more. You can now get your hands on Gemini Ultra, the most powerful version of the model, for the first time.

Bard is dead; long live Gemini: Google is also sunsetting Bard, its ChatGPT rival. Bard, which has been powered by a version of Gemini since December, will now be known as Gemini too. By baking Gemini into its ubiquitous tools, Google is hoping to make up lost ground and even overtake its rival OpenAI. Read more from Will Douglas Heaven.

Bits and Bytes

A chatbot helped more people access mental-health services
An AI chatbot from a startup called Limbic helped increase the number of patients referred for mental-health services through England's National Health Service (particularly among members of underrepresented groups, who are less likely to seek help), new research has found. (MIT Technology Review)

This robot can tidy a room without any help
A new system called OK-Robot could train robots to pick up and move objects in settings they haven't encountered before. It's an approach that might be able to plug the gap between rapidly improving AI models and actual robot capabilities, because it doesn't require any additional costly, complex training. (MIT Technology Review)

Inside OpenAI's plan to make AI "more democratic"
This feature looks at how computer scientists at OpenAI are trying to address the technical problem of how to align their AIs to human values. But a bigger question remains unanswered: Exactly whose values should AI reflect? And who should get to decide? (Time)

OpenAI's Sam Altman wants trillions to build chips for AI
The CEO has often complained that the company does not have enough computing power to train and run its powerful AI models. Altman is reportedly talking with investors, including the United Arab Emirates government, to raise up to $7 trillion to boost the world's chip-building capacity. (The Wall Street Journal)

A new app to "dignify" women
Ugh. In contrast to apps that sexualize images of women, some 4Chan users are using generative AI to add clothes, erase their tattoos and piercings, and make them look more modest. How about ... we just leave women alone. (404 Media)
