
The Dutch Tax Authority Was Felled by AI—What Comes Next?

by
Rahul Rao
from IEEE Spectrum

Until recently, it wasn't possible to say that AI had a hand in forcing a government to resign. But that's precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority's workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.
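
The algorithm's internals were never made public, but the triage workflow described here (score each claim, auto-clear the low-risk ones, route the rest to a human) can be illustrated with a minimal, hypothetical Python sketch. The claim fields, scores, and threshold below are all assumptions for illustration, not details of the real system:

    # Hypothetical sketch of the flag-then-review triage described above.
    # Fields, scores, and the cutoff are invented; the real system's
    # internals were never published.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        claim_id: int
        risk_score: float  # assumed output of the self-learning model

    HIGH_RISK_THRESHOLD = 0.8  # assumed cutoff

    def triage(claims):
        """Auto-clear low-risk claims; send high-risk ones to human review."""
        flagged = [c for c in claims if c.risk_score >= HIGH_RISK_THRESHOLD]
        cleared = [c for c in claims if c.risk_score < HIGH_RISK_THRESHOLD]
        return cleared, flagged

    cleared, flagged = triage([Claim(1, 0.12), Claim(2, 0.91)])
    print(f"{len(flagged)} claim(s) routed to human review")

Note that the failure mode described next sits outside any such code: once reviewers rubber-stamp every flag, the model's threshold effectively becomes the final decision.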

In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

"When there is disparate impact, there needs to be societal discussion around this, whether this is fair. We need to define what 'fair' is," says Yong Suk Lee, a professor of technology, economy, and global affairs at the University of Notre Dame, in the United States. "But that process did not exist."

Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

"The performance of the model, of the algorithm, needs to be transparent or published by different groups," says Lee. That includes details such as the model's accuracy rate, he adds.
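
One concrete form such transparency could take is per-group performance reporting: publishing, for each demographic group, how often claims are flagged and how many of those flags later prove false. A minimal Python sketch, with fabricated records and invented group labels:

    # Hypothetical per-group transparency report of the kind Lee calls for.
    # Each record is (group, was_flagged, was_actual_fraud); all fabricated.
    from collections import defaultdict

    records = [
        ("group_a", True, True), ("group_a", False, False), ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]

    stats = defaultdict(lambda: {"claims": 0, "flags": 0, "false_flags": 0})
    for group, flagged, fraud in records:
        stats[group]["claims"] += 1
        stats[group]["flags"] += int(flagged)
        stats[group]["false_flags"] += int(flagged and not fraud)

    for group, s in sorted(stats.items()):
        flag_rate = s["flags"] / s["claims"]
        false_share = s["false_flags"] / s["flags"] if s["flags"] else 0.0
        print(f"{group}: flagged {flag_rate:.0%} of claims; "
              f"{false_share:.0%} of flags were false")

A wide gap in flag rates or false-flag shares between groups is precisely the disparate impact that, in Lee's view, should trigger societal discussion.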

The tax authority's algorithm evaded such scrutiny; it was an opaque black box, with no transparency into its inner workings. For those affected, it could be nigh impossible to tell exactly why they had been flagged. And they lacked any sort of due process or recourse to fall back upon.

"The government had more faith in its flawed algorithm than in its own citizens, and the civil servants working on the files simply divested themselves of moral and legal responsibility by pointing to the algorithm," says Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium.

As the dust settles, it's clear that the affair will do little to halt the spread of AI in governments: 60 countries already have national AI initiatives. Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm, deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions, serves as a warning.

"If even within these favorable circumstances, such a dangerously erroneous system can be deployed over such a long time frame, one has to worry about what the situation is like in other, less regulated jurisdictions," says Lewin Schmitt, a predoctoral policy researcher at the Institut Barcelona d'Estudis Internacionals, in Spain.

So, what might stop future wayward AI implementations from causing harm?

In the Netherlands, the same four parties that were in government prior to the resignation have now returned to power. Their solution is to bring all public-facing AI, both in government and in the private sector, under the eye of a regulator in the country's data authority, which a government minister says would ensure that humans are kept in the loop.

On a larger scale, some policy wonks place their hope in the European Parliament's AI Act, which puts public-sector AI under tighter scrutiny. In its current form, the AI Act would ban some applications, such as government social-credit systems and law enforcement use of face recognition, outright.

Something like the tax authority's algorithm would still be permitted, but because of its public-facing role in government functions, the AI Act would have classified it as a high-risk system. That means a broad set of regulations would apply, including a risk-management system, human oversight, and a mandate to remove bias from the data involved.


"If the AI Act had been put in place five years ago, I think we would have spotted [the tax algorithm] back then," says Nicolas Moes, an AI policy researcher in Brussels for the Future Society think tank.

Moes believes that the AI Act provides a more concrete scheme for enforcement than its overseas counterparts, such as the one that recently took effect in China (which focuses less on public-sector use and more on reining in private companies' use of customers' data) and the proposed U.S. regulations that are currently floating in the legislative ether.

"The E.U. AI Act is really kind of policing the entire space, while others are still kind of tackling just one facet of the issue, very softly dealing with just one issue," says Moes.

Lobbyists and legislators are still busy hammering the AI Act into its final form, but not everyone believes that the act, even if it's tightened, will go far enough.

"We see that even the [General Data Protection Regulation], which came into force in 2018, is still not properly being implemented," says Smuha. "The law can only take you so far. To make public-sector AI work, we also need education."

That, she says, will need to come through properly informing civil servants of an AI implementation's capabilities, limitations, and societal impacts. In particular, she believes that civil servants must be able to question its output, regardless of whatever temporal or organizational pressures they might face.

"It's not just about making sure the AI system is ethical, legal, and robust; it's also about making sure that the public service in which the AI system [operates] is organized in a way that allows for critical reflection," she says.
