London Metropolitan Police's Facial Recognition System Is Now Only Misidentifying People 81% Of The Time

The London Metropolitan Police's spectacular run of failure continues. Sky News reports the latest data shows the Met's facial recognition tech is still better at fucking up than doing what it says on the tin.
Researchers found that the controversial system is 81% inaccurate - meaning that, in the vast majority of cases, it flagged up faces to police when they were not on a wanted list.
Needless to say, this has raised "significant concerns" among the sort of people most likely to be concerned about false positives. Needless to say, this does not include the London Metropolitan Police, which continues to deploy this tech despite its only marginally-improved failure rate.
In 2018, it was reported the Metropolitan Police's tech was misidentifying people at an astounding 100% rate. False positives were apparently the only thing the system was capable of. Things had improved by May 2019, bringing the Met's false positive rate down to 96%. The sample size was still pretty small, though, so the improvement did little to lower the odds of the Metropolitan Police rounding up the unusual suspects the system claimed were the usual suspects.
Perhaps this should be viewed as a positive development, but when a system has only managed to work its way up to being wrong 81% of the time, we should probably hold our applause until the end of the presentation.
As it stands now, the tech is better at being wrong than identifying criminals. But what's just as concerning is the Met's unshaken faith in its failing tech. It defends its facial recognition software with stats that are literally unbelievable.
The Met police insists its technology makes an error in only one in 1,000 instances, but it hasn't shared its methodology for arriving at that statistic.
This much lower error rate springs from the Metropolitan Police's generous accounting of its facial recognition program. Its method compares successful and unsuccessful matches against the total number of faces processed. That's how it arrives at a failure rate that sounds much, much better than a system that is far more often wrong than right.
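To see how the same deployment can produce both an 81% failure rate and a one-in-1,000 error rate, here's a minimal sketch of the two calculations. The actual trial counts aren't in the excerpts above, so every figure below is made up purely for illustration:

```python
# Hypothetical figures for illustration only; not the Met's real trial data.
alerts = 100              # times the system flagged someone as a watchlist match
correct_alerts = 19       # flags later verified as genuine matches
faces_processed = 81_000  # every face the cameras scanned, matched or not

# The researchers' metric: of the people the system told officers to stop,
# what fraction weren't actually on the watchlist?
false_positives = alerts - correct_alerts
print(f"Wrong when it flags someone: {false_positives / alerts:.0%}")  # 81%

# The Met's metric: the same mistakes divided by everyone scanned, so the
# tens of thousands of faces the system (correctly) ignored pad the denominator.
print(f"The Met's preferred figure: {false_positives / faces_processed:.1%}")  # 0.1%
```

Both lines count the same bogus alerts; only the denominator changes.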
No matter which math is used, it's not acceptable to deploy tech that's wrong so often when the public is routinely stripped of its agency by secret discussions and quiet rollouts. Here in the US, two cities have banned this tech, citing its unreliability and the potential harms caused by its deployment. Out in London, law enforcement has never been told "No." A city covered by cameras is witnessing surveillance mission creep utilizing notoriously unreliable tech.
The tech is being challenged in court by Big Brother Watch, which points out that every new report of the tech's utter failure only strengthens its case. Government officials, however, aren't so sure. And by "not so sure," I mean, "mired in deep denial."
The Home Office defended the Met, telling Sky News: "We support the police as they trial new technologies to protect the public, including facial recognition, which can help them identify criminals."
But it clearly does not do that.
It misidentifies people as criminals, which isn't even remotely close to "identifying criminals." It's the exact opposite and it's going to harm London residents. And the government offers nothing but shrugs and empty assurances of public safety.