The AI Act is done. Here’s what will (and won’t) change
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
It's official. After three years, the AI Act, the EU's sweeping new AI law, jumped through its final bureaucratic hoop last week when the European Parliament voted to approve it. (You can catch up on the five main things you need to know about the AI Act with this story I wrote last year.)
This also feels like the end of an era for me personally: I was the first reporter to get the scoop on an early draft of the AI Act in 2021, and have followed the ensuing lobbying circus closely ever since.
But the reality is that the hard work starts now. The law will enter into force in May, and people living in the EU will start seeing changes by the end of the year. Regulators will need to get set up in order to enforce the law properly, and companies will have up to three years to comply with the law.
Here's what will (and won't) change:
1. Some AI uses will get banned later this year
The Act places restrictions on AI use cases that pose a high risk to people's fundamental rights, such as in healthcare, education, and policing. These will be outlawed by the end of the year.
It also bans some uses that are deemed to pose an "unacceptable risk." They include some pretty out-there and ambiguous use cases, such as AI systems that deploy "subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making," or exploit vulnerable people. The AI Act also bans systems that infer sensitive characteristics such as someone's political opinions or sexual orientation, and the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping the internet, à la Clearview AI, will also be outlawed.
There are some pretty huge caveats, however. Law enforcement agencies are still allowed to use sensitive biometric data, as well as facial recognition software in public places, to fight serious crime, such as terrorism or kidnappings. Some civil rights groups, such as the digital rights organization Access Now, have called the AI Act "a failure for human rights" because it did not ban controversial AI use cases such as facial recognition outright. And while companies and schools are not allowed to use software that claims to recognize people's emotions, they can if it's for medical or safety reasons.
2. It will be more obvious when you're interacting with an AI system
Tech companies will be required to label deepfakes and AI-generated content and notify people when they are interacting with a chatbot or other AI system. The AI Act will also require companies to develop AI-generated media in a way that makes it possible to detect. This is promising news in the fight against misinformation, and will give research around watermarking and content provenance a big boost.
However, this is all easier said than done, and research lags far behind what the regulation requires. Watermarks are still an experimental technology and easy to tamper with. It is still difficult to reliably detect AI-generated content. Some efforts show promise, such as the C2PA, an open-source internet protocol, but far more work is needed to make provenance techniques reliable, and to build an industry-wide standard.
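To give a sense of why this is so brittle, here is a toy sketch of the "greenlist" style of statistical text watermark that researchers have proposed (loosely in the spirit of academic work such as Kirchenbauer et al.). It is my illustration only, not how any real model or vendor watermarks its output: a generator quietly favors words from a secret list, and a detector checks whether a suspicious share of words came from that list.

```python
# Toy sketch of a "greenlist" statistical text watermark. Illustration only:
# real proposals hash token context and bias a model's sampling distribution,
# not whole words as done here.
import hashlib

def green_fraction(text: str) -> float:
    """Return the fraction of words whose hash lands in the 'green' half.

    A watermarking generator would nudge its word choices toward green words;
    a detector then flags text where this fraction is suspiciously high."""
    words = text.lower().split()
    if not words:
        return 0.0
    green = sum(
        1 for word in words
        if int(hashlib.sha256(word.encode()).hexdigest(), 16) % 2 == 0
    )
    return green / len(words)

# Ordinary human text hovers around 0.5; heavily watermarked text scores higher.
# Light paraphrasing swaps words in and out of the green list, which is one
# reason these signals are easy to weaken.
print(green_fraction("The quick brown fox jumps over the lazy dog"))
```

Even this cartoon version hints at the trade-off: the stronger the statistical bias, the easier the text is to flag, but the more it constrains what the model can write.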
3. Citizens can complain if they have been harmed by an AI
The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement (and they are hiring). Thanks to the AI Act, citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one, and can receive explanations of why the AI systems made the decisions they did. It's an important first step toward giving people more agency in an increasingly automated world. However, this will require citizens to have a decent level of AI literacy, and to be aware of how algorithmic harms happen. For most people, these are still very foreign and abstract concepts.
4. AI companies will need to be more transparent
Most AI uses will not require compliance with the AI Act. It's only AI companies developing technologies in "high risk" sectors, such as critical infrastructure or healthcare, that will have new obligations when the Act fully comes into force in three years. These include better data governance, ensuring human oversight, and assessing how these systems will affect people's rights.
AI companies that are developing "general-purpose AI models," such as language models, will also need to create and keep technical documentation showing how they built the model and how they respect copyright law, and to publish a publicly available summary of the training data that went into the model.
This is a big change from the status quo, where tech companies are secretive about the data that went into their models, and it will require an overhaul of the AI sector's messy data management practices.
The companies with the most powerful AI models, such as GPT-4 and Gemini, will face more onerous requirements, such as having to perform model evaluations, assess and mitigate risks, ensure cybersecurity protection, and report any incidents where the AI system failed. Companies that fail to comply will face huge fines, or their products could be banned from the EU.
It's also worth noting that free open-source AI models that share every detail of how the model was built, including the model's architecture, parameters, and weights, are exempt from many of the obligations of the AI Act.
Now read the rest of The Algorithm
Deeper Learning
Africa's push to regulate AI starts now
The projected benefit of AI adoption for Africa's economy is tantalizing. Estimates suggest that Nigeria, Ghana, Kenya, and South Africa alone could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools. Now the African Union, made up of 55 member nations, is trying to work out how to develop and regulate this emerging technology.
It's not going to be easy: if African countries don't develop their own regulatory frameworks to protect citizens from the technology's misuse, some experts worry that Africans will be hurt in the process. But if these countries don't also find a way to harness AI's benefits, others fear their economies could be left behind. (Read more from Abdullahi Tsanni.)
Bits and Bytes
An AI that can play Goat Simulator is a step toward more useful machines
A new AI agent from Google DeepMind can play different games, including ones it has never seen before, such as Goat Simulator 3, a fun action game with exaggerated physics. It's a step toward more generalized AI that can transfer skills across multiple environments. (MIT Technology Review)
This self-driving startup is using generative AI to predict traffic
Waabi says its new model can anticipate how pedestrians, trucks, and bicyclists move using lidar data. If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future. (MIT Technology Review)
LLMs become more covertly racist with human intervention
It's long been clear that large language models like ChatGPT absorb racist views from the millions of pages of the internet they are trained on. Developers have responded by trying to make them less toxic. But new research suggests that those efforts, especially as models get larger, are only curbing racist views that are overt, while letting more covert stereotypes grow stronger and better hidden. (MIT Technology Review)
Let's not make the same mistakes with AI that we made with social media
Social media's unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies, argue Nathan E. Sanders and Bruce Schneier. (MIT Technology Review)
OpenAI's CTO Mira Murati fumbled when asked about training data for Sora
In this interview with the Wall Street Journal, the journalist asks Murati whether OpenAI's new video-generation AI system, Sora, was trained on videos from YouTube. Murati says she is not sure, which is an embarrassing answer from someone who should really know. OpenAI has been hit with copyright lawsuits over the data used to train its other AI models, and I would not be surprised if video was its next legal headache. (Wall Street Journal)
Among the AI doomsayers
I really enjoyed this piece. Writer Andrew Marantz spent time with people who fear that AI poses an existential risk to humanity, and tried to get under their skin. The details in this story are both hilarious and juicy, and they raise questions about who we should be listening to when it comes to AI's harms. (The New Yorker)