Facial Recognition Still Struggles To Recognize Faces As More People Are Misidentified As Criminals

by Tim Cushing, from Techdirt on (#6N51B)

We've long been critics of facial recognition tech here at Techdirt. Even though the steady march of technology inevitably means the tech will get faster and better, the problem is the first part: it's getting faster, and more widely deployed, well before it gets better.

The tech has proven to be very fallible. And it has made things even worse for the sort of people most often targeted by cops: minorities. Pretty much every option offered by facial recognition tech purveyors performs at its worst when dealing with anyone who isn't white and male.

So, the people who have spent their entire history being the target of biased policing efforts have seen nothing improve. Instead, tech advancements have, for the most part, simply automated bigotry and provided cop shops with plausible deniability for their own innate racism. "The machine made me do it."

The UK - especially London and the area overseen by the Metropolitan Police - was an early adopter of this tech. The government had already blanketed the city with cameras, putting it on par with China and India in terms of always-on surveillance of its residents.

The private sector was ahead of the curve on facial recognition tech adoption. Early concerns were raised about rights violations, but most of those issues simply didn't apply to business owners and their cameras. The influx of cameras and add-on facial recognition AI has only increased the opportunity to falsely accuse people of crimes and/or violate their rights (if the government is involved).

And so it goes here in this recent report from the BBC, which details a few more instances where people have been converted to criminals by software screwups.

Sara needed some chocolate - she had had one of those days - so wandered into a Home Bargains store.

"Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'."

Sara - who wants to remain anonymous - was wrongly accused after being flagged by a facial-recognition system called Facewatch.

She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology.

While this may seem somewhat innocuous when compared to false arrests and bogus criminal charges, it's far from harmless. Sara may still have other shopping options, but this false flagging may have prevented her from using her favorite - or most convenient - one.

That's not nothing. That's a private company making a decision based on flawed tech that can heavily alter the way a person lives and moves around. And since it's often not immediately clear which multi-national conglomerate owns which retail store, people dealing with this sort of ban can unintentionally violate it just by heading to Option B. And repeat violations can likely bring law enforcement into play, even if the violations were entirely unintentional.

But the UK's grand experiment is still harming people the old way, with additional harassment, duress, and invasive searches predicated on little more than who some tech product thought a person walking by a camera resembled.

Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge in February.

Within a few seconds, though, he was approached by police and told he was a wanted man.

"That's when I got a nudge on the shoulder, saying at that time I'm wanted".

He was asked to give fingerprints and held for 20 minutes. He says he was let go only after handing over a copy of his passport.

But it was a case of mistaken identity.

Sure, these might be anomalies, given the sheer number of facial recognition systems being deployed by the UK government and any number of private companies that call that country home. But the sheer breadth of that deployment means these experiences are bound to be far more common than they would be in areas where deployments are more limited or subject to better regulation.

Live facial recognition - the tech responsible for this blown call - remains a relative rarity in London. The Met Police used it only nine times between 2020 and 2022. But in 2024, the force had already used it 67 times, which makes it clear the plan is to steadily increase use. And that number only covers deployments. It says nothing about how long people were subjected to live facial recognition, nor how many faces were scanned by the tech.

The Met Police claim any concern about false positives is misplaced. According to the force, the false positive rate is one per 33,000 people who come within range of its cameras.

But that's not a good excuse for subjecting people to flawed tech. First, it says nothing about false negatives, which would be every time the tech fails to flag someone who should be flagged as a suspected criminal.

Furthermore, the false positive rate skyrockets when it's measured against the people the live AI system actually flags:

One in 40 alerts so far this year has been a false positive.

That's an insanely terrible error rate. These are the "hits" that matter - the ones that can result in detainment, arrest, questioning, searches, and other applications of government force against someone a computer decided officers should subject to any number of indignities.
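For what it's worth, the two numbers aren't even contradictory - they measure different things, and the 1-in-40 figure is the one a flagged person actually experiences. Here's a rough back-of-the-envelope sketch in Python; the scan volume is an assumption chosen purely for illustration, and only the two quoted rates come from the reporting:

```python
# Reconciling "1 false positive per 33,000 scans" with "1 in 40 alerts false".
# The scan volume below is an illustrative assumption, not a Met figure.

scans = 330_000                  # assume 330,000 faces pass the cameras
fp_per_scan = 1 / 33_000         # the Met's claimed per-scan false positive rate

false_alerts = scans * fp_per_scan    # -> 10 people wrongly flagged
# If 1 in 40 alerts is false, total alerts are about 40x the false ones:
total_alerts = false_alerts * 40      # -> ~400 alerts overall

print(f"{false_alerts:.0f} false alerts out of {total_alerts:.0f} total "
      f"({false_alerts / total_alerts:.1%} of alerts are wrong)")

# The per-scan rate looks tiny because it's diluted by everyone who was
# never flagged at all; the people actually stopped face the 1-in-40 rate.
# (False negatives - wanted people the system misses - appear in neither number.)
```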

For now, a lot of live facial recognition is being deployed by easily identified mobile police units, usually via "unmarked" white vans. Criminals who fear being spotted may simply choose to avoid areas where these vans are found or steer clear of camera range. If that's how it's being handled, it's highly unlikely the public safety gains outweigh the collateral damage of a 1-in-40 error rate.

Worse, the Met Police may realize its surveillance tech is no longer useful when it's being carried around in easily recognizable vehicles. At that point, it may start angling to add this tech to the thousands of cameras the government has installed all over London and other areas of the UK. And when live facial recognition becomes standard operating procedure for thousands of cameras, the error rate may stay the same, but the number of false positives will scale right along with the number of faces scanned. Once that happens, the anomalies will be so numerous it will be difficult for the government to pretend there isn't a problem. But by that point, the tech will already be in place and that much more difficult to curtail, much less root out entirely, if the systemic failures prove to be too much for the public to accept.
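To put rough numbers on that: hold the per-scan rate constant and every additional camera adds its proportional share of false flags. A minimal sketch, with purely hypothetical daily scan volumes:

```python
# Same per-scan false positive rate applied at different deployment scales.
# The scan volumes are hypothetical, chosen only to show the linear growth.
fp_per_scan = 1 / 33_000

for scans_per_day in (10_000, 500_000, 10_000_000):
    false_flags = scans_per_day * fp_per_scan
    print(f"{scans_per_day:>10,} scans/day -> ~{false_flags:,.1f} false flags/day")
```

At those hypothetical volumes, a handful of van deployments produces a trickle of mistakes, while a city-wide, always-on network produces hundreds of them every day - each one a potential stop, search, or arrest.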
