UK commits to redesign visa streaming algorithm after challenge to ‘racist’ tool
The UK government is suspending use of an algorithm for streaming visa applications after concerns were raised that the technology bakes in unconscious bias and racism.
The tool had been the target of a legal challenge. The Joint Council for the Welfare of Immigrants (JCWI) and campaigning law firm Foxglove had asked a court to declare the visa application streaming algorithm unlawful and order a halt to its use, pending a judicial review.
The legal action had not run its full course but appears to have forced the Home Office's hand as it has committed to a redesign of the system.
A Home Office spokesperson confirmed to us that from August 7 the algorithm's use will be suspended, sending us this statement via email: "We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure."
The government has not accepted the allegations of bias, however, writing in a letter to the law firm: "The fact of the redesign does not mean that the [Secretary of State] accepts the allegations in your claim form [i.e. around unconscious bias and the use of nationality as a criterion in the streaming process]."
The Home Office letter also claims the department had already moved away from use of the streaming tool "in many application types". But it adds that it will approach the redesign with "an open mind in considering the concerns you have raised".
The redesign is slated to be completed by the autumn, and the Home Office says an interim process will be put in place in the meantime, excluding the use of nationality as a sorting criterion.
HUGE news. From this Friday, the Home Office's racist visa algorithm is no more! Thanks to our lawsuit (with @JCWI_UK) against this shadowy, computer-driven system for sifting visa applications, the Home Office have agreed to "discontinue the use of the Streaming Tool".
- Foxglove (@Foxglovelegal) August 4, 2020
The JCWI has claimed a win against what it describes as a "shadowy, computer-driven" people-sifting system - writing on its website: "Today's win represents the UK's first successful court challenge to an algorithmic decision system. We had asked the Court to declare the streaming algorithm unlawful, and to order a halt to its use to assess visa applications, pending a review. The Home Office's decision effectively concedes the claim."
The department did not respond to a number of questions we put to it regarding the algorithm and its design processes - including whether or not it sought legal advice ahead of implementing the technology in order to determine whether it complied with the UK's Equality Act.
"We do not accept the allegations Joint Council for the Welfare of Immigrants made in their Judicial Review claim and whilst litigation is still on-going it would not be appropriate for the Department to comment any further," the Home Office statement added.
The JCWI's complaint centered on the use, since 2015, of an algorithm with a "traffic-light system" to grade every entry visa application to the UK.
The tool, which the Home Office described as a 'digital streaming tool', assigns a Red, Amber or Green risk rating to applicants. "Once assigned by the algorithm, this rating plays a major role in determining the outcome of the visa application," it writes, dubbing the technology "racist" and discriminatory by design, given its treatment of certain nationalities.
"The visa algorithm discriminated on the basis of nationality - by design. Applications made by people holding 'suspect' nationalities received a higher risk score. Their applications received intensive scrutiny by Home Office officials, were approached with more scepticism, took longer to determine, and were much more likely to be refused.
"We argued this was racial discrimination and breached the Equality Act 2010," it adds. "The streaming tool was opaque. Aside from admitting the existence of a secret list of suspect nationalities, the Home Office refused to provide meaningful information about the algorithm. It remains unclear what other factors were used to grade applications."
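The Home Office has never published the tool's internals, but the JCWI's description implies decision logic along these lines. A purely illustrative Python sketch - every name, weight and threshold here is invented for this article - of how a traffic-light rule that scores nationality directly produces the "discrimination by design" described above:

```python
# Purely hypothetical sketch - not the Home Office's actual (unpublished) code.
# It illustrates how a "traffic-light" rule that scores nationality directly
# discriminates by design: two otherwise identical applications diverge.

SUSPECT_NATIONALITIES = {"CountryA", "CountryB"}  # stand-in for the secret list

def stream_application(nationality: str, base_risk: int) -> str:
    """Assign a Red/Amber/Green rating; nationality alone can raise the tier."""
    risk = base_risk
    if nationality in SUSPECT_NATIONALITIES:
        risk += 2  # a 'suspect' nationality inflates the score regardless of merit
    if risk >= 3:
        return "Red"    # intensive scrutiny, more scepticism, more refusals
    elif risk >= 1:
        return "Amber"
    return "Green"

# Two applications identical in every respect except nationality:
print(stream_application("CountryA", base_risk=1))  # Red
print(stream_application("CountryC", base_risk=1))  # Amber
```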
Since 2012 the Home Office has openly operated an immigration policy known as the 'hostile environment' - applying administrative and legislative processes that are intended to make it as hard as possible for people to stay in the UK.
The policy has led to a number of human rights scandals. (We also covered the impact on the local tech sector by telling the story of one UK startup's visa nightmare last year.) So applying automation atop an already highly problematic policy does look like a formula for being taken to court.
The JCWI's concern around the streaming tool was exactly that it was being used to automate the racism and discrimination many argue underpin the Home Office's 'hostile environment' policy. In other words, if the policy itself is racist, any algorithm is going to pick up and reflect that.
"The Home Office's own independent review of the Windrush scandal found that it was oblivious to the racist assumptions and systems it operates," said Chai Patel, legal policy director of the JCWI, in a statement. "This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out."
"We're delighted the Home Office has seen sense and scrapped the streaming tool. Racist feedback loops meant that what should have been a fair migration process was, in practice, just 'speedy boarding for white people'. What we need is democracy, not government by algorithm," added Cori Crider, founder and director of Foxglove. "Before any further systems get rolled out, let's ask experts and the public whether automation is appropriate at all, and how historic biases can be spotted and dug out at the roots."
In its letter to Foxglove, the government has committed to undertaking Equality Impact Assessments and Data Protection Impact Assessments for the interim process it will switch to from August 7 - when it writes that it will use "person-centric attributes (such as evidence of previous travel)" to help sift some visa applications, further committing that "nationality will not be used".
Some types of applications will be removed from the sifting process altogether during this period.
"The intent is that the redesign will be completed as quickly as possible and at the latest by October 30, 2020," it adds.
Asked for thoughts on what a legally acceptable visa streaming algorithm might look like, Internet law expert Lilian Edwards told TechCrunch: "It's a tough one... I am not enough of an immigration lawyer to know if the original criteria applied re suspect nationalities would have been illegal by judicial review standard anyway even if not implemented in a sorting algorithm. If yes then clearly a next generation algorithm should aspire only to discriminate on legally acceptable grounds.
"The problem as we all know is that machine learning can reconstruct illegal criteria - though there are now well known techniques for evading that."
"You could say the algorithmic system did us a favour by confronting illegal criteria being used which could have remained buried at individual immigration officer informal level. And indeed one argument for such systems used to be consistency and 'non-arbitrary' nature. It's a tough one," she added.
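Edwards' point about reconstruction is easy to demonstrate. A minimal sketch with synthetic, invented data: even if nationality is dropped as an input, a proxy feature shaped by past enforcement - here, a hypothetical prior-refusal flag - still sorts applicants along the excluded attribute:

```python
# Minimal sketch with synthetic, invented data: dropping a protected
# attribute does not remove its influence when a correlated proxy remains.
import random

random.seed(0)

# Each applicant: (flagged_nationality, prior_refusal). The proxy correlates
# with the excluded attribute because of historic enforcement patterns.
applicants = []
for _ in range(100_000):
    flagged = random.random() < 0.3
    prior_refusal = random.random() < (0.6 if flagged else 0.1)
    applicants.append((flagged, prior_refusal))

# A rule built only on the proxy still splits people by the excluded attribute:
with_proxy = [f for f, p in applicants if p]
without_proxy = [f for f, p in applicants if not p]
print("P(flagged | prior refusal):   ", round(sum(with_proxy) / len(with_proxy), 2))        # ~0.72
print("P(flagged | no prior refusal):", round(sum(without_proxy) / len(without_proxy), 2))  # ~0.16

# A model trained on prior_refusal alone thus "reconstructs" the nationality
# signal - the racist feedback loop the JCWI describes.
```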
Earlier this year the Dutch government was ordered to halt use of an algorithmic risk scoring system for predicting the likelihood social security claimants would commit benefits or tax fraud - after a local court found it breached human rights law.
In another interesting case, a group of UK Uber drivers are challenging the legality of the gig platform's algorithmic management of them under Europe's data protection framework - which bakes in data access rights, including provisions attached to legally significant automated decisions.