
FTC Hits Pharmacy Retail Chain Rite Aid With A Five-Year Facial Recognition Tech Ban

by Tim Cushing
from Techdirt on (#6HHSW)

Facial recognition tech works best on white, male faces. White males have historically been the immediate beneficiaries of public policy, as well as the policies put in place by private companies. I say "historically," but this advantageous situation has mostly proven resistant to disruption by tech advances.

Facial recognition tech has taken an existing problem and made it worse by providing those utilizing the tech with a thin layer of plausible deniability. Fortunately, some public entities have taken steps to mitigate the tech-washing of bigotry by doing about the only thing they can: banning the use of facial recognition tech by government agencies.

It's a bit trickier when it comes to private companies and their use of the tech. But if they're careless enough, they, too, can find themselves targeted by regulators. That's what has happened here: the Federal Trade Commission is seeking to ban retail pharmacy chain Rite Aid from using facial recognition tech for the next half-decade.

Rite Aid will be prohibited from using facial recognition technology for surveillance purposes for five years to settle Federal Trade Commission charges that the retailer failed to implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores.

"Rite Aid's reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers' sensitive information at risk," said Samuel Levine, Director of the FTC's Bureau of Consumer Protection. "Today's groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices."

The FTC's press release states this as though it's a foregone conclusion. But the proposed settlement [PDF] hasn't been approved yet.

But given what's detailed in the FTC's complaint [PDF], Rite Aid would be wise to settle, rather than angle for something a little more lenient. The preexisting problems in facial recognition tech (i.e., its inability to identify minorities and women as accurately as it identifies white men) were made worse by the actions of Rite Aid's employees, supervisors, and executives.

In whole or in part due to facial recognition match alerts, Rite Aid employees took action against the individuals who had triggered the supposed matches, including subjecting them to increased surveillance; banning them from entering or making purchases at the Rite Aid stores; publicly and audibly accusing them of past criminal activity in front of friends, family, acquaintances, and strangers; detaining them or subjecting them to searches; and calling the police to report that they had engaged in criminal activity. In numerous instances, the match alerts that led to these actions were false positives (i.e., instances in which the technology incorrectly identified a person who had entered a store as someone in Rite Aid's database).

The complaint digs into Rite Aid's reliance on facial recognition tech to do all the things listed above over a period of eight years (2012-2020). The pharmacy chain obtained the tech from two third-party vendors and deployed it in several major cities. It did not, however, feel any obligation to inform Rite Aid customers that this technology was being used. In fact, it forbade employees from revealing the use of this tech to customers or members of the press.

Internally, the company was doing much uglier things with the faulty, unproven tech. It liked what it had and pushed employees to flag as many customers as possible as potentially suspicious.

In connection with its use of facial recognition technology, Rite Aid created, or directed its facial recognition vendors to create, an enrollment database of images of individuals whom Rite Aid considered "persons of interest," including because Rite Aid believed the individuals had engaged in actual or attempted criminal activity at a Rite Aid physical retail location or because Rite Aid had obtained law enforcement "BOLO" ("Be On the Look Out") information about the individuals. Individual entries in this database are referred to herein as "enrollments."

[...]

Rite Aid trained store-level security employees to push for "as many enrollments as possible." Rite Aid enrolled at least tens of thousands of individuals in its database.

Since there was a concerted push to fill its private stash with pictures of alleged thieves, Rite Aid employees used whatever they could get their hands on to satisfy the higher-ups' desire to (at least internally) give the impression Rite Aid stores were overrun with criminals.

Rite Aid regularly used low-quality enrollment images in its database. Rite Aid obtained enrollment images by, among other methods, excerpting images captured via Rite Aid's closed-circuit television ("CCTV") cameras, saving photographs taken by the facial recognition cameras, and by taking photographs of individuals using mobile phone cameras. On a few occasions, Rite Aid obtained enrollment images from law enforcement or from media reports. In some instances, Rite Aid employees enrolled photographs of individuals' driver's licenses or other government identification cards or photographs of images displayed on video monitors.

These images were apparently retained indefinitely. And, since customers were not being told their images were being used to populate an extra-large suspect list, there was no way to challenge being "enrolled" by a Rite Aid employee.

The facial recognition system was "live." As customers were captured on camera, "real-time matches" were made using the tens of thousands of (often low-quality) images in Rite Aid's database. Alerts were sent to managers and employees on their Rite Aid-issued phones. Supposedly, false positives were limited by a "confidence score" that expressed the tech's degree of confidence in the match. A higher score meant a better match. Again, supposedly.

But this part of the system was never rolled out to the people who acted on these matches. Store employees generally did not have access to the "confidence scores" generated by the tech, which often resulted in every match - no matter how questionable - being treated by front-line workers as a definitive indication of guilt.
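To illustrate the concept, here's a minimal, hypothetical sketch of what confidence-score gating looks like in practice - the threshold value, data shapes, and function names are assumptions made for illustration, not Rite Aid's or either vendor's actual implementation:

```python
# Hypothetical sketch of confidence-score gating for match alerts.
# Field names, the 0.9 threshold, and the data shapes are illustrative
# assumptions, not anything described in the FTC's complaint.

from dataclasses import dataclass

@dataclass
class MatchAlert:
    enrollment_id: str   # entry in the "persons of interest" database
    store_id: str        # store where the camera captured the face
    confidence: float    # 0.0 to 1.0; higher means a closer match

def should_notify_staff(alert: MatchAlert, min_confidence: float = 0.9) -> bool:
    """Suppress low-confidence matches instead of pushing them to store phones."""
    return alert.confidence >= min_confidence

# Per the complaint, employees never saw scores at all, so in practice every
# alert - however weak - was treated as a confirmed identification.
weak_alert = MatchAlert("enroll-123", "store-456", confidence=0.41)
print(should_notify_staff(weak_alert))  # False under a 0.9 threshold
```

The point of such a gate is that a weak match never reaches a human who might act on it; withholding the scores from employees removed that safeguard entirely.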

Exacerbating the problem of the bloated "enrollment" database employees were pressured to create was the software's general inability to deliver competent matches, much less "confident" ones.

In numerous instances, Rite Aid's facial recognition technology generated match alerts that were likely false positives because they occurred in stores that were geographically distant from the store that created the relevant enrollment. For example, between December 2019 and July 2020, Rite Aid's facial recognition technology generated over 5,000 match alerts in stores that were more than 100 miles from the store that created the relevant enrollment.

[...]

Some enrollments generated high numbers of match alerts in locations throughout the United States. For instance, during a five-day period, Rite Aid's facial recognition technology generated over 900 match alerts for a single enrollment. The match alerts occurred in over 130 different Rite Aid stores.

[...]

Between December 2019 and July 2020, Rite Aid's facial recognition technology generated over 2,000 match alerts that occurred within a short time of one or more other match alerts to the same enrollment in geographically distant locations within a short period of time, such that it was impossible or implausible that the same individual could have caused the alerts in the different locations. For example, for a particular enrollment image that was originally captured at a Los Angeles store, Rite Aid's facial recognition technology generated over 30 match alerts in New York City and Philadelphia between February 2020 and July 2020.

The matches" in New York and Philadelphia all occurred within 24 hours of the alert generated in Los Angeles - a time/place impossibility that strongly suggests every one of those matches was a false positive.

So, it's not against the law to purchase and deploy faulty tech. The problem here is how those faults (some endemic to facial recognition tech in general) resulted in Rite Aid and its employees engaging in discriminatory behavior.

[R]ite Aid failed to modify its policies to address increased risks to consumers based on race and gender even after its facial recognition technology generated egregious results. For example, Rite Aid conducted an internal investigation into an incident in which Rite Aid's facial recognition technology generated an alert indicating that a consumer - specifically a Black woman - was a match for an enrollment image that Rite Aid employees described as depicting "a white lady with blonde hair." In response to the alert, Rite Aid employees called the police and asked the woman to leave the store before realizing the alert was a false positive.

As a result of Rite Aid's failures, Black, Asian, Latino, and women consumers were especially likely to be harmed by Rite Aid's use of facial recognition technology.

The FTC points out Rite Aid did perform a cursory review of the tech and its upsides/downsides before deploying it. But that review surfaced only one problem Rite Aid actually cared about:

An internal presentation advocating expansion of Rite Aid's facial recognition program following Rite Aid's pilot deployment of facial recognition technology identified only a single risk associated with the program: "[m]edia attention and customer acceptance."

Oh, the irony. It's going to be a bit trickier to slide this past customers now that it's receiving plenty of media attention. In fact, it will be impossible. When the ban is lifted, Rite Aid will be required to conspicuously post notification of its use of facial recognition tech in its stores, as well as notify customers who have been "enrolled" by Rite Aid employees.

And it's going to continue to be closely watched by regulators because this isn't its first run-in with the FTC.

[T]he FTC also says Rite Aid violated its 2010 data security order with the Commission by failing to adequately implement a comprehensive information security program. Among other things, the 2010 order required Rite Aid to ensure its third-party service providers had appropriate safeguards to protect consumers' personal data.

Given all of this, it would seem wise to steer clear of the tech once the ban is lifted. Falsely accusing people of theft via faulty tech and a "collect them all" attitude didn't keep Rite Aid from filing for bankruptcy earlier this year. And while it's not the only entity in the retail pharmacy sector to struggle with staying afloat following an unexpected worldwide pandemic, it's the only one to be sued by the FTC for operating an AI-enabled anti-theft program that failed to keep it one step ahead of its creditors, much less its competitors.
