NIST Study Of 189 Facial Recognition Algorithms Finds Minorities Are Misidentified Almost 100 Times More Often Than White Men

The development and deployment of facial recognition tech continues steadily, but the algorithms involved don't seem to be getting much better at recognizing faces. Recognizing faces is pretty much the only thing the tech is expected to do, and it still can't do the job well enough to be trusted with decisions like whether or not a person is going to be detained or arrested.
That critical failure hasn't slowed down deployment by government agencies. There are a handful of facial recognition tech bans in place around the nation, but for the most part, questions about the tech are being ignored in favor of the potential benefits touted by government contractors.
Last year, members of Congress started demanding answers from Amazon after its "Rekognition" tech said 28 lawmakers were criminals. Amazon's response was: you're using the software wrong. That didn't really answer the questions raised by the experiment -- especially questions about the tech's disproportionate flagging of minorities as potential perps.
This has been a problem with facial recognition tech for years now. Biases introduced into the system by developers become amplified when the systems attempt to match faces to stored photos. A recent study by the National Institute of Standards and Technology (NIST) found that multiple facial recognition programs all suffer from the same issue: an inordinate number of false positives targeting people of color.
Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.
As usual, the group most likely to be accurately assessed by facial recognition tech is also the group that most often promotes and deploys facial recognition systems.
Middle-aged white men generally benefited from the highest accuracy rates.
That this study was performed by NIST makes it a lot tougher to ignore. While other studies could be brushed off as anomalies or biased themselves (when performed by civil rights activist groups), a federal study of 189 different facial recognition algorithms submitted by 99 companies isn't as easy to wave away as unsound.
One of the more adamant defenders of facial recognition tech is Amazon. It's the company that told the ACLU it was using the system wrong after the rights group took the software for a spin last year and netted 28 false positives using photos of members of Congress. Amazon had a chance to prove its system is far more accurate than the ACLU's test suggested, but it chose to sit out the NIST trials.
The problems found by NIST exist in both "one-to-one" and "one-to-many" matching. A false positive in "one-to-one" matching allows unauthorized access to devices, systems, or areas secured with biometric scanners. "One-to-many" mismatches are even more problematic, as they can result in detentions, arrests, and other infringements on people's freedoms.
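For readers unfamiliar with the distinction, here's a rough sketch of how both modes reduce to the same underlying operation: comparing face embedding vectors against a similarity threshold. The function names, embedding values, and threshold below are invented for illustration and don't reflect any particular vendor's system.

```python
# Minimal sketch (not any vendor's actual API): face matching reduced to
# comparing embedding vectors with a cosine-similarity threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """One-to-one: does the probe face match the single enrolled face?
    A false positive here unlocks a device or door for the wrong person."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> list[str]:
    """One-to-many: compare the probe against an entire gallery (e.g. a
    mugshot database) and return every identity that clears the threshold.
    Any wrong identity returned here is a false positive that can put an
    innocent person on an investigator's suspect list."""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy example: the probe is closest to "person_a", but a loose threshold
# also sweeps in "person_b" -- a one-to-many false positive.
probe = np.array([0.9, 0.1, 0.4])
gallery = {
    "person_a": np.array([0.88, 0.12, 0.41]),
    "person_b": np.array([0.70, 0.30, 0.55]),
    "person_c": np.array([-0.2, 0.9, 0.1]),
}
print(identify(probe, gallery, threshold=0.8))   # ['person_a', 'person_b']
print(verify(probe, gallery["person_c"]))        # False
```

The point of the sketch is that nothing in the pipeline "knows" who anyone is; it only knows which comparisons cleared a numeric threshold, and the NIST results show that threshold behaves very differently depending on whose face is being compared.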
The number of participants shows this problem can't be solved simply by switching service providers. It's everywhere, and it doesn't appear to be improving. The DHS wants to subject all travelers in international airports to this tech as soon as possible, which means we'll be seeing the collateral damage soon enough. A few lawmakers want to slow down deployment, but they remain a minority, surrounded by far too many legislators who feel US citizens should beta test facial recognition tech in a live environment with real-world consequences.