Five big takeaways from Europe’s AI Act

by Tate Ryan-Mosley, MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

It was a big week in tech policy in Europe, with the European Parliament's vote to approve its draft rules for the AI Act on the same day EU lawmakers filed a new antitrust lawsuit against Google.

The AI Act vote passed with an overwhelming majority, and has been heralded as one of the world's most important developments in AI regulation. The European Parliament's president, Roberta Metsola, described it as legislation that "will no doubt be setting the global standard for years to come."

Don't hold your breath for any immediate clarity, though. The European system is a bit complicated. Next, members of the European Parliament will have to thrash out details with the Council of the European Union and the EU's executive arm, the European Commission, before the draft rules become legislation. The final legislation will be a compromise between three different drafts from the three institutions, which vary a lot. It will likely take around two years before the laws are actually implemented.

What Wednesday's vote accomplished was to approve the European Parliament's position in the upcoming final negotiations. Structured similarly to the EU's Digital Services Act, a legal framework for online platforms, the AI Act takes a "risk-based approach" by introducing restrictions based on how dangerous lawmakers predict an AI application could be. Businesses will also have to submit their own risk assessments about their use of AI.

Some applications of AI will be banned entirely if lawmakers consider the risk "unacceptable," while technologies deemed "high risk" will have new limitations on their use and requirements around transparency.

Here are some of the major implications:

  1. Ban on emotion-recognition AI. The European Parliament's draft text bans the use of AI that attempts to recognize people's emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI is able to determine when a student is not understanding certain material, or when the driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but it has not been banned in the draft text from the other two institutions, suggesting there's a political fight to come.
  2. Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition.
  3. Ban on social scoring. Social scoring by public agencies, or the practice of using data about people's social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn't really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising.
  4. New restrictions for gen AI. This draft is the first to propose ways to regulate generative AI, and to ban the use of any copyrighted material in the training sets of large language models like OpenAI's GPT-4. OpenAI has already come under the scrutiny of European lawmakers over concerns about data privacy and copyright. The draft bill also requires that AI-generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.
  5. New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a "high risk" category, an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.

The risks of AI as described by Margrethe Vestager, executive vice president of the EU Commission, are widespread. She has emphasized concerns about the future of trust in information, vulnerability to social manipulation by bad actors, and mass surveillance.

"If we end up in a situation where we believe nothing, then we have undermined our society completely," Vestager told reporters on Wednesday.

What I am reading this week
  • A Russian soldier surrendered to a Ukrainian assault drone, according to video footage published by the Wall Street Journal. The surrender took place back in May in the eastern city of Bakhmut, Ukraine. Upon seeing the soldier's plea via video, the drone operator decided to spare his life, in accordance with international law. Drones have been critical in the war, and the surrender is a fascinating look at the future of warfare.
  • Many Redditors are protesting changes to the site's API that would eliminate or reduce the functionality of third-party apps and tools many communities use. In protest, those communities have gone "private," which means their pages are no longer publicly accessible. Reddit is known for the power it gives to its user base, but the company may now be regretting that, according to Casey Newton's sharp assessment.
  • Contract workers who trained Google's large language model, Bard, say they were fired after raising concerns about their working conditions and safety issues with the AI itself. The contractors say they were forced to meet unreasonable deadlines, which led to concerns about accuracy. Google says the responsibility lies with Appen, the contract agency employing the workers. If history tells us anything, there will be a human cost in the race to dominate generative AI.
What I learned this week

This week, Human Rights Watch released an in-depth report about an algorithm used to dole out welfare benefits in Jordan. The organization found some major issues with the algorithm, which was funded by the World Bank, and says the system was based on incorrect and oversimplified assumptions about poverty. The report's authors also called out the lack of transparency and cautioned against similar projects run by the World Bank. I wrote a short story about the findings.

Meanwhile, the trend toward using algorithms in government services is growing. Elizabeth Renieris, author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse, wrote to me about the report and emphasized the impact these sorts of systems will have going forward: "As the process to access benefits becomes digital by default, these benefits become even less likely to reach those who need them the most and only deepen the digital divide. This is a prime example of how expansive automation can directly and negatively impact people, and is the AI risk conversation that we should be focused on now."
