We need to bring consent to AI

by Melissa Heikkilä
from MIT Technology Review

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This week's big news is that Geoffrey Hinton, a VP and Engineering Fellow at Google, and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years.

But first, we need to talk about consent in AI.

Last week, OpenAI announced it is launching an "incognito" mode that does not save users' conversation history or use it to improve its AI language model, ChatGPT. The new feature lets users switch off chat history and training, and allows them to export their data. It's a welcome move toward giving people more control over how a technology company uses their data.

OpenAI's decision to let people opt out comes as the firm is under increasing pressure from European data protection regulators over how it collects and uses data. OpenAI had until yesterday, April 30, to accede to Italy's demands that it comply with the GDPR, the EU's strict data protection regime. Italy restored access to ChatGPT in the country after OpenAI introduced a user opt-out form and the ability to object to having one's personal data used in ChatGPT. The regulator had argued that OpenAI hoovered up people's personal data without their consent and hadn't given them any control over how it is used.

In an interview last week with my colleague Will Douglas Heaven, OpenAI's chief technology officer, Mira Murati, said the incognito mode was something that the company had been "taking steps toward iteratively" for a couple of months and had been requested by ChatGPT users. OpenAI told Reuters its new privacy features were not related to the EU's GDPR investigations.

"We want to put the users in the driver's seat when it comes to how their data is used," says Murati. OpenAI says it will still store user data for 30 days to monitor for misuse and abuse.

But despite what OpenAI says, Daniel Leufer, a senior policy analyst at the digital rights group Access Now, reckons that the GDPR, and the EU's pressure, has played a role in forcing the firm to comply with the law. In the process, it has made the product better for everyone around the world.

"Good data protection practices make products safer [and] better [and] give users real agency over their data," he said on Twitter.

A lot of people dunk on the GDPR as an innovation-stifling bore. But as Leufer points out, the law shows companies how they can do things better when they are forced to do so. It's also the only tool we have right now that gives people some control over their digital existence in an increasingly automated world.

Other experiments in giving AI users more control show that there is clear demand for such features.

Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION data set that has been used to train the image-generating AI model Stable Diffusion.

Since December, around 5,000 people and several large online art and image platforms, such as ArtStation and Shutterstock, have asked to have over 80 million images removed from the data set, says Mat Dryhurst, who cofounded an organization called Spawning that is developing the opt-out feature. This means those images will not be used to train the next version of Stable Diffusion.

Dryhurst thinks people should have the right to know whether or not their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.

"Our ultimate goal is to build a consent layer for AI, because it just doesn't exist," he says.

Deeper Learning

Geoffrey Hinton tells us why he's now scared of the tech he helped build

Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. MIT Technology Review's senior AI editor Will Douglas Heaven met Hinton at his house in north London just four days before the bombshell announcement that he is quitting Google.

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.

And oh boy, did he have a lot to say. "I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future," he told Will. "How do we survive that?" Read more from Will Douglas Heaven here.

Even Deeper Learning

A chatbot that asks questions could help you spot when it makes no sense

AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as facts and have inconsistent logic that can be hard to spot. One way around this problem, a new study suggests, is to change the way the AI presents information.

Virtual Socrates: A team of researchers from MIT and Columbia University found that getting a chatbot to ask users questions, instead of presenting information as statements, helped people notice when the AI's logic didn't add up. A system that asked questions also made people feel more in charge of decisions made with AI, and the researchers say it could reduce the risk of overreliance on AI-generated information. Read more from me here.
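To make the idea concrete, here's a minimal sketch of how a question-asking chatbot could be wired up using OpenAI's Python SDK. The prompt wording, model name, and helper function are illustrative assumptions of mine, not details from the MIT and Columbia study.

    # Hypothetical sketch of a "Socratic" chatbot: rather than answering outright,
    # the assistant is instructed to respond with questions that push the user to
    # examine the logic themselves. Prompt and model choice are assumptions; the
    # study's actual setup may differ.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SOCRATIC_PROMPT = (
        "You are a Socratic assistant. Do not state conclusions outright. "
        "Respond with one or two short questions that lead the user to examine "
        "the logic and evidence behind their request."
    )

    def socratic_reply(user_message: str) -> str:
        # Standard chat-completion call; only the system prompt changes the behavior.
        response = client.chat.completions.create(
            model="gpt-4",  # assumed model; any chat-capable model works
            messages=[
                {"role": "system", "content": SOCRATIC_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(socratic_reply("Mixing bleach and ammonia makes a stronger cleaner, right?"))

The point of the design is that a question ("What gas forms when those two chemicals react?") invites the user to check the reasoning, where a confident statement invites them to accept it.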

Bits and Bytes

Palantir wants militaries to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models frequently make stuff up, and they are ridiculously easy to hack. Rolling these technologies out in one of the highest-stakes sectors is a disaster waiting to happen. (Vice)

Hugging Face launched an open-source alternative to ChatGPT
HuggingChat works in much the same way as ChatGPT, but it is free to use, and people can build their own products on it. Open-source versions of popular AI models are on a roll: earlier this month Stability.AI, creator of the image generator Stable Diffusion, also launched an open-source AI chatbot, StableLM.

How Microsoft's Bing chatbot came to be and where it's going next
Here's a nice behind-the-scenes look at Bing's birth. I found it interesting that, to generate answers, Bing does not always use OpenAI's GPT-4 language model but sometimes Microsoft's own models, which are cheaper to run. (Wired)

AI Drake just set an impossible legal trap for Google
My social media feeds have been flooded with AI-generated songs copying the styles of popular artists such as Drake. But as this piece points out, this is only the start of a thorny copyright battle over AI-generated music, scraping data off the internet, and what constitutes fair use. (The Verge)
