Facial Recognition Tech Is Encouraging Cops To Ignore The Best Suspects In Favor Of The *Easiest* Suspects

by Tim Cushing, from Techdirt on (#6GPN1)

Facial recognition tech has slowly gone mainstream over the past half-decade. Not just in acceptance, but also in opposition. Kashmir Hill exposed perhaps the worst purveyor of this tech - Clearview - with a series of articles detailing the company's tactics as well as its far-right backers.

Clearview has managed to become a pariah in a tech field mostly populated by would-be pariahs. Facial recognition tech only works for those who believe it works. For everyone else, it's an existential threat to their freedom. For a few people (most of them located in the Detroit, Michigan area) the threat is very real.

Facial recognition tech does its best work when it's trained properly. And most training, unfortunately, involves the people least likely to find themselves harassed by law enforcement: white males. None other than the National Institute of Standards and Technology (NIST) recognized this fact back in 2019.

This is from NIST's study of 189 facial recognition algorithms:

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Not white or male? Good luck. You're screwed. But for those who've benefited the most from this nation's predilection for catering to/electing certain people, the status remains quo.

Middle-aged white men generally benefited from the highest accuracy rates.

And that's just with algorithms and databases compiled in a somewhat ethical fashion. Clearview, however (the subject of Kashmir Hill's latest book), chose to go a different route. It scrapes from the internet any data that isn't locked down and sells search access to government agencies and private customers anywhere it can get away with it. Clearview's market reach continues to be trimmed by litigation and eviction notices from European governments, but the company remains a large part of the facial recognition scene.

Mainstream attention isn't helping these tech purveyors. The latest major journalistic outlet to call bullshit on the tech is the New Yorker, which has published a withering examination by Eyal Press.

The article opens with the description of a wrongful arrest by Maryland Transit Administration Police - one triggered by facial recognition tech that led the MTAP to believe a black man in his mid-fifties, Alonzo Sawyer, was involved in the physical assault of a female cab driver. This assumption was backed by a records check - one that showed nothing more than a handful of traffic violations. Sawyer was roughed up by law enforcement, arrested, denied bail and... ultimately cleared of all charges.

The problem isn't necessarily the use of facial recognition tech to identify suspects, although - given the tech's known issues with accuracy when it doesn't involve middle-aged white males - that remains a problem. The bigger problem is that cops are treating this tech as the beginning, middle, and end of investigations, even though facial recognition tech suppliers always caution their law enforcement customers that matches should be considered the starting point for an investigation, not something equivalent to probable cause for an arrest.

The reality of day-to-day facial recognition tech use undermines law enforcement's argument that it's nothing more than one part of an extensive network of investigative tools - one of the many excuses agencies use to keep documents out of the hands of public records requesters.

Law-enforcement officials argue that they aren't obligated to disclose such information because, in theory at least, facial-recognition searches are being used only to generate leads for a fuller investigation, and do not alone serve as probable cause for making an arrest. Yet, in a striking number of the wrongful arrests that have been documented, the searches represented virtually the entire investigation. No other evidence seemed to link Randal Reid, who lives in Georgia, to the thefts in Louisiana, a state he had never even visited. No investigator from the Detroit police checked the location data on Robert Williams's phone to verify whether he had been in the store on the day that he allegedly robbed it. The police did consult a security contractor, who reviewed surveillance video of the shoplifting incident and then chose Williams from a photo lineup of six people. But the security contractor had not been in the store when the incident occurred and had never seen Williams in person.

Advocates of this tech - a group mainly composed of facial recognition tech purveyors and their law enforcement customers - claim the problem isn't as bad as it looks. Most proponents claim AI matches are backstopped by human beings, reducing the risk that someone - especially a person of color - will be misidentified and subjected to the sort of treatment described above.

But it's clear the human backstops aren't always following the guidelines laid down by tech providers, which strongly caution against treating matches like probable cause. And for agencies that do bother to backstop matches with human beings, their confidence that those humans can catch match errors is misplaced. Reviewers are often no better than the software and hardware they're asked to oversee, not just because they overestimate their own innate ability to recognize faces, but because they're, for the most part, given little to no training before being asked to vet AI judgment calls.

If comparing and identifying unfamiliar faces were tasks that human beings could easily master, the lack of training might not be cause for concern. But, in one study in which participants were asked to identify someone from a photographic pool of suspects, the error rate was as high as thirty per cent. And the study used mainly high-quality images of people in straightforward poses - a luxury that law-enforcement agents examining images extracted from grainy surveillance video usually don't have. Studies using low-quality images have resulted in even higher error rates. You might assume that professionals with experience performing forensic face examinations would be less likely to misidentify someone, but this isn't the case. A study comparing passport officials with college students found that the officers performed as poorly as the students.

This is how errors get compounded. Most people tend to believe they're smarter than computers, even if they have no rational reason for believing this. And other people assume tech is infallible, simply because they believe the people who created the tech are smarter than they are. In both cases, the subjective estimation of skill is off. Fallible tech doesn't get better when it's backstopped by fallible humans, especially humans most government agencies feel don't need any specific training prior to operating facial recognition systems.
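As a rough, back-of-envelope sketch (assuming, purely for the sake of argument, that the algorithm's mistakes and the reviewer's mistakes are independent, and borrowing the roughly thirty percent reviewer error rate from the study quoted above - the algorithm's error rate here is a made-up placeholder), the math of a fallible backstop looks something like this:

```python
# Back-of-envelope sketch only. The independence assumption and the exact
# numbers are illustrative: the ~30% reviewer error rate echoes the study
# quoted above; the algorithm's error rate is a hypothetical placeholder.

p_algorithm_wrong = 0.10   # hypothetical: top candidate is the wrong person
p_reviewer_misses = 0.30   # untrained reviewers misjudge unfamiliar faces

# Chance a search produces a wrong match that the human backstop endorses:
p_bad_lead_endorsed = p_algorithm_wrong * p_reviewer_misses
print(f"{p_bad_lead_endorsed:.0%} of searches yield a wrong lead the reviewer signs off on")
```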

Those incorrect assumptions result in things like this:

The photograph of Robert Williams that led to his arrest for robbing the Detroit store came from an old driver's license. The analyst at the Michigan State Police who conducted the search didn't bother to check whether the license had expired - it had. Nor did she appear to consider why more recent pictures of Williams, which were also in the database, didn't turn up as candidates. The dated picture of Williams was only the ninth most likely match for the probe photograph, which was obtained from surveillance video of the incident. But the analyst who ran the search did a morphological assessment of Williams's face, including the shape of his nostrils, and found that his was the most similar to the suspect's. Two other algorithms were then run. In one of them, which returned two hundred and forty-three results, Williams wasn't even on the candidate list. In the other - of an F.B.I. database - the probe photograph generated no results at all.

A whole lot of AI-generated information strongly suggested Robert Williams wasn't the thief. And all of that was ignored by the human backstop, who decided the initial match was all that mattered.
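To make that concrete, here's a minimal, purely hypothetical sketch (in Python, not any vendor's actual code) of the kind of cross-check a reviewer could apply before treating a match as a lead - flagging a candidate who ranks ninth in one algorithm's list and doesn't appear at all in the others:

```python
# Hypothetical sketch only: cross-checking one suspect against the ranked
# candidate lists returned by several face recognition algorithms. The
# function, data structures, and threshold are invented for illustration.

def cross_check(candidate_lists: list[list[str]], suspect_id: str,
                max_rank: int = 5) -> dict:
    """Summarize how strongly multiple algorithms agree on one suspect."""
    ranks = []
    for ranked in candidate_lists:
        # Record the suspect's 1-based rank, or None if the algorithm
        # never returned this person at all.
        ranks.append(ranked.index(suspect_id) + 1 if suspect_id in ranked else None)
    strong_hits = sum(1 for r in ranks if r is not None and r <= max_rank)
    return {
        "ranks_per_algorithm": ranks,
        "strong_hits": strong_hits,
        "total_algorithms": len(candidate_lists),
        # A lead this weak should trigger more investigation, not an arrest.
        "weak_lead": strong_hits < len(candidate_lists),
    }

# Roughly the situation described above: ninth place in one search,
# absent from a 243-candidate list, absent from an FBI database search.
result = cross_check(
    [
        [f"candidate_{i}" for i in range(8)] + ["williams"],  # ranked ninth
        [f"other_{i}" for i in range(243)],                   # not listed
        [],                                                   # no results
    ],
    suspect_id="williams",
)
print(result)  # -> weak_lead: True; one rank of 9, two misses
```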

This is not just a law enforcement problem. Some of the blame lies with legislators, who - outside of a few pockets of codified resistance - have been unwilling to step in to regulate the tech tools used by law enforcement. A few bans and moratoriums have been enacted in a handful of US cities, but for most of the nation, facial recognition tech use by government agencies is still the Wild West.

In addition, the tech firms providing cops with facial recognition AI aren't content to simply provide face-matching software. In the case of the AI used to "identify" Alonzo Sawyer, the provider - DataWorks Plus - allows its cop customers to make the description fit someone who doesn't necessarily fit the description.

DataWorks Plus notes on its Web site that probe images fed into its software can be edited using "pose correction, light normalization, rotation, cropping."
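For readers unfamiliar with what those edits entail, here's a minimal, hypothetical sketch of what "light normalization, rotation, cropping" can look like in code, using the Pillow imaging library. It is not DataWorks Plus's software, and real "pose correction" involves face-landmark alignment that this sketch omits - but it shows how much human judgment goes into shaping the probe image before the algorithm ever sees it:

```python
# Illustrative sketch only, using the Pillow library. This is NOT DataWorks
# Plus code; every function name and parameter here is a generic stand-in
# for the kinds of edits described ("light normalization, rotation, cropping").

from PIL import Image, ImageOps

def touch_up_probe(path: str, angle: float, box: tuple[int, int, int, int]) -> Image.Image:
    img = Image.open(path).convert("L")   # grayscale probe image
    img = ImageOps.autocontrast(img)      # crude "light normalization"
    img = img.rotate(angle, expand=True)  # rotate the face upright
    return img.crop(box)                  # crop down to the face region

# Hypothetical usage - the angle and crop box are human judgment calls,
# which is exactly why edited probe images are contested as evidence:
# edited = touch_up_probe("surveillance_frame.png", angle=12.0, box=(40, 30, 240, 260))
# edited.save("probe_edited.png")
```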

There's not a lot of positive news to draw from this coverage, other than the fact that when large news outlets start asking questions, it becomes much more difficult for government officials to pretend there's nothing wrong with facial recognition tech. But that's hardly heartening. For most tech providers, the fact that they have paying customers is enough to let them ignore the long-term societal effects of pushing algorithms tainted by bias. For law enforcement agencies, the existence of arrests and successful prosecutions initiated by facial recognition tech is all the reason they need to keep using it, no matter how often it results in wrongful arrests or future civil rights lawsuits.

For the rest of us, it just means we're at the mercy of more than just the government. Our freedom is in the hands of unproven, often erroneous tech. And until someone with actual power cares enough about that fact, we'll remain the government's lab rats, expected to suffer the consequences until the bugs in the system can be worked out.
