This has just become a big week for AI regulation
It's a bumper week for government pushback on the misuse of artificial intelligence.
Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide-ranging, with restrictions on mass surveillance and the use of AI to manipulate people.
But a statement of intent from the US Federal Trade Commission, outlined in a short blog post by staff lawyer Elisa Jillson on April 19, may have more teeth in the immediate future. According to the post, the FTC plans to go after companies using and selling biased algorithms.
"A number of companies will be running scared right now," says Ryan Calo, a professor at the University of Washington, who works on technology and law. "It's not really just this one blog post," he says. "This one blog post is a very stark example of what looks to be a sea change."
Woah, woah, WOAH. An official @FTC blog post by a staff attorney noting that "The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of - for example - racially biased algorithms." https://t.co/kM4u3gOEC5
- Ryan Calo (@rcalo) April 19, 2021
The EU is known for its hard line against Big Tech, but the FTC has taken a softer approach, at least in recent years. The agency is meant to police unfair and dishonest trade practices. Its remit is narrow: it does not have jurisdiction over government agencies, banks, or nonprofits. But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms, or healthcare tools are not biased may now be in the line of fire. "Where they do have power, they have enormous power," says Calo.
Taking action

The FTC has not always been willing to wield that power. Following criticism in the 1980s and '90s that it was being too aggressive, it backed off and picked fewer fights, especially against technology companies. This looks to be changing.
In the blog post, the FTC warns vendors that claims about AI must be "truthful, non-deceptive, and backed up by evidence."
"For example, let's say an AI developer tells clients that its product will provide '100% unbiased hiring decisions,' but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination, and an FTC law enforcement action."
The FTC action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could be doing and what they needed to do it. "There's wind behind the sails," says Calo.
Over the last decade, the FTC has shown it lacks the will to meaningfully hold large firms like @Google accountable when they repeatedly violate the law. At today's Senate hearing, I'll argue that we must turn the page on the FTC's perceived powerlessness. https://t.co/APX8BSjATZ
- Rohit Chopra (@chopraftc) April 20, 2021
Meanwhile, though they draw a clear line in the sand, the EU's AI regulations are guidelines only. As with the GDPR rules introduced in 2018, it will be up to individual EU member states to decide how to implement them. Some of the language is also vague and open to interpretation. Take one provision against "subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour" in a way that could cause psychological harm. Does that apply to social media news feeds and targeted advertising? "We can expect many lobbyists to attempt to explicitly exclude advertising or recommender systems," says Michael Veale, a faculty member at University College London who studies law and technology.
It will take years of legal challenges in the courts to thrash out the details and definitions. "That will only be after an extremely long process of investigation, complaint, fine, appeal, counter-appeal, and referral to the European Court of Justice," says Veale. "At which point the cycle will start again." But the FTC, despite its narrow remit, has the autonomy to act now.
One big limitation common to both the FTC and European Commission is the inability to rein in governments' use of harmful AI tech. The EU's regulations include carve-outs for state use of surveillance, for example. And the FTC is only authorized to go after companies. It could intervene by stopping private vendors from selling biased software to law enforcement agencies. But implementing this will be hard, given the secrecy around such sales and the lack of rules about what government agencies have to declare when procuring technology.
Yet this week's announcements reflect an enormous worldwide shift toward serious regulation of AI, a technology that has been developed and deployed with little oversight so far. Ethics watchdogs have been calling for restrictions on unfair and harmful AI practices for years.
Artificial Intelligence is a fantastic opportunity for Europe.

And citizens deserve technologies they can trust.

Today we present new rules for trustworthy AI. They set high standards based on the different levels of risk. pic.twitter.com/EuzaIUBW9i

- Ursula von der Leyen (@vonderleyen) April 21, 2021
The EU sees its regulations bringing AI under existing protections for human liberties. "Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people's rights," said Ursula von der Leyen, president of the European Commission, in a speech ahead of the release.
Regulation will also help AI with its image problem. As von der Leyen also said: "We want to encourage our citizens to feel confident to use it."