
Uber under pressure over facial recognition checks for drivers

by
Natasha Lomas
from Crunch Hype on (#5FJ96)

Uber's use of facial recognition technology for a driver identity system is being challenged in the U.K., where the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE) have called for Microsoft to suspend the ride-hailing giant's use of B2B facial recognition after finding multiple cases where drivers were misidentified and went on to have their licence to operate revoked by Transport for London (TfL).

The union said it has identified seven cases of "failed facial recognition and other identity checks" leading to drivers losing their jobs and licence revocation action by TfL.

When Uber launched the "Real Time ID Check" system in the U.K. in April 2020, it said it would verify that driver accounts aren't being used by anyone other than the licensed individuals who have undergone an "Enhanced DBS check". It said then that drivers could choose whether their selfie is "verified by photo-comparison software or by our human reviewers".

In one misidentification case the ADCU said the driver was dismissed from employment by Uber and his licence was revoked by TfL. The union adds that it was able to assist the member to establish his identity correctly, forcing Uber and TfL to reverse their decisions. But it highlights concerns over the accuracy of the Microsoft facial recognition technology - pointing out that the company suspended the sale of the system to U.S. police forces in the wake of the Black Lives Matter protests of last summer.

Research has shown that facial recognition systems can have an especially high error rate when used to identify people of color - and the ADCU cites a 2018 MIT study that found Microsoft's system can have an error rate as high as 20% (accuracy was lowest for dark-skinned women).

The union said it has written to the mayor of London to demand that all TfL private-hire driver licence revocations based on Uber reports using evidence from its Hybrid Real Time Identification systems be immediately reviewed.

Microsoft has been contacted for comment on the call for it to suspend Uber's licence for its facial recognition tech.

The ADCU said Uber rushed to implement a workforce electronic surveillance and identification system as part of a package of measures implemented to regain its licence to operate in the U.K. capital.

Back in 2017, TfL made the shocking decision not to grant Uber a licence renewal - ratcheting up regulatory pressure on its processes and maintaining this hold in 2019 when it again deemed Uber not "fit and proper" to hold a private hire vehicle licence.

Uber wins latest London licence appeal - but renewal is only for 18 months

Safety and security failures were a key reason cited by TfL for withholding Uber's licence renewal.

Uber has challenged TfL's decision in court and it won another appeal against the licence suspension last year - but the renewal granted was for only 18 months (not the full five years). It also came with a laundry list of conditions - so Uber remains under acute pressure to meet TfL's quality bar.

Now, though, labor activists are piling pressure on Uber from the other direction too - pointing out that no regulatory standard has been set around the workplace surveillance technology that the ADCU says TfL encouraged Uber to implement. No equalities impact assessment has even been carried out by TfL, it adds.

WIE confirmed to TechCrunch that it's filing a discrimination claim in the case of one driver, called Imran Raja, who was dismissed after Uber's Real ID check - and had his licence revoked by TfL.

His licence was subsequently restored - but only after the union challenged the action.

A number of other Uber drivers who were also misidentified by Uber's facial recognition checks will be appealing TfL's revocation of their licences via the U.K. courts, per WIE.

A spokeswoman for TfL told us it is not a condition of Uber's licence renewal that it must implement facial recognition technology - only that Uber must have adequate safety systems in place.

The relevant condition of its provisional licence on "driver identity" states:

ULL shall maintain appropriate systems, processes and procedures to confirm that a driver using the app is an individual licensed by TfL and permitted by ULL to use the app.

We've also asked TfL and the U.K.'s Information Commissioner's Office for a copy of the data protection impact assessment Uber says was carried out before the Real-Time ID Check was launched - and will update this report if we get it.

Uber, meanwhile, disputes the union's assertion that its use of facial recognition technology for driver identity checks risks automating discrimination because it says it has a system of manual (human) review in place that's intended to prevent failures.

It accepts, though, that the system clearly failed in the case of Raja - who only got his Uber account back (and an apology) after the union's intervention.

Uber said its Real-Time ID system involves an automated "picture matching" check on a selfie that the driver must provide at the point of log-in, with the system comparing that selfie with a (single) photo of them held on file.

If there's no machine match, the system sends the query to a three-person human review panel to conduct a manual check. Uber said checks will be sent to a second human panel if the first can't agree.
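Described as code, the escalation flow Uber outlines would look something like the minimal Python sketch below. To be clear, this is our illustration of the process as described, not Uber's implementation: the helper names, the matching threshold and the majority-vote logic are all assumptions.

    # Illustrative sketch of the decision flow described above - not Uber's code.
    # Helper names, the threshold and the vote-counting are assumptions.
    from dataclasses import dataclass

    @dataclass
    class PanelResult:
        unanimous: bool  # did all three reviewers agree?
        match: bool      # the panel's majority decision

    def machine_match(similarity: float, threshold: float = 0.9) -> bool:
        # Stand-in for the automated "picture matching" step; in reality a
        # call to a face-verification service comparing selfie and file photo.
        return similarity >= threshold

    def panel_review(votes: list) -> PanelResult:
        # Stand-in for a three-person manual review (votes are True/False).
        return PanelResult(unanimous=len(set(votes)) == 1, match=sum(votes) >= 2)

    def real_time_id_check(similarity: float, first_votes: list, second_votes: list) -> bool:
        if machine_match(similarity):
            return True                          # machine match: driver verified
        first = panel_review(first_votes)        # no machine match: first human panel
        if first.unanimous:
            return first.match
        return panel_review(second_votes).match  # panel split: second panel decides

Even in this toy form the union's question is visible: every non-match lands with human reviewers who are deciding in the shadow of the machine's initial rejection.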

In a statement the tech giant told us:

"Our Real-Time ID Check is designed to protect the safety and security of everyone who uses the app by ensuring the correct driver or courier is using their account. The two situations raised do not reflect flawed technology - in fact one of the situations was a confirmed violation of our anti-fraud policies and the other was a human error.

"While no tech or process is perfect and there is always room for improvement, we believe the technology, combined with the thorough process in place to ensure a minimum of two manual human reviews prior to any decision to remove a driver, is fair and important for the safety of our platform."

Uber addressed two of the cases referred to by the ADCU. In one instance, it said, the driver had shown a photo during the Real-Time ID Check instead of taking a selfie as required for the live check - hence it argues it was not wrong for the ID check to have failed, as the driver was not following the correct protocol.

In the other instance Uber blamed human error on the part of its manual review team(s) who (twice) made an erroneous decision. It said the driver's appearance had changed and its staff were unable to recognize the face of the (now bearded) man who sent the selfie as the same person in the clean-shaven photo Uber held on file.

Uber was unable to provide details of what happened in the other five identity check failures referred to by the union.

It also declined to specify the ethnicities of the seven drivers the union says were misidentified by its checks.

Asked what measures it's taking to prevent human errors leading to more misidentifications in the future, Uber declined to provide a response.

Uber said it has a duty to notify TfL when a driver fails an ID check - a step that can lead to the regulator suspending the license, as happened in Raja's case. So any biases in its identity check process clearly risk having disproportionate impacts on affected individuals' ability to work.
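As a purely hypothetical illustration of that risk (the figures below are ours, not drawn from Uber, Microsoft or the MIT study): if a verification system wrongly rejected 2% of selfies from one group of drivers and 8% from another, then across 1,000 log-in checks per group it would wrongly flag around 20 drivers in the first group and 80 in the second - a fourfold difference in exposure to suspension, manual review and reporting to TfL, before any reviewer error is layered on top.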

WIE told us it knows of three TfL licence revocations that relate solely to facial recognition checks.

"We know of more [UberEats] couriers who have been deactivated but no further action since they are not licensed by TfL," it noted.

TechCrunch also asked Uber how many driver deactivations have been carried out and reported to TfL in which it cited facial recognition in its testimony to the regulator - but again the tech giant declined to answer our questions.

WIE told us it has evidence that facial recognition checks are incorporated into geo-location-based deactivations Uber carries out.

It said that in one case a driver who had their account revoked was given an explanation by Uber relating solely to location but TfL accidentally sent WIE Uber's witness statement - which it said included "facial recognition evidence".

That suggests a wider role for facial recognition technology in Uber's identity checks versus the one the ride-hailing giant gave us when explaining how its Real-Time ID system works. (Again, Uber declined to answer follow-up questions about this or provide any other information beyond its on-the-record statement and related background points.)

But even just focusing on Uber's Real-Time ID system there's the question of how much say Uber's human review staff actually have in the face of machine suggestions combined with the weight of wider business imperatives (like an acute need to demonstrate regulatory compliance on the issue of safety).

James Farrer, the founder of WIE, queries the quality of the human checks Uber has put in place as a backstop for facial recognition technology, which has a known discrimination problem.

"Is Uber just confecting legal plausible deniability of automated decision making or is there meaningful human intervention?" he told TechCrunch. "In all of these cases, the drivers were suspended and told the specialist team would be in touch with them. A week or so typically would go by and they would be permanently deactivated without ever speaking to anyone."

"There is research out there to show when facial recognition systems flag a mismatch humans have bias to confirm the machine. It takes a brave human being to override the machine. To do so would mean they would need to understand the machine, how it works, its limitations and have the confidence and management support to overrule the machine," Farrer added. "Uber employees have the risk of Uber's license to operate in London to consider on one hand and what... on the other? Drivers have no rights and they are in excess so expendable."

He also pointed out that Uber has previously said in court that it errs on the side of customer complaints rather than give the driver the benefit of the doubt. "With that in mind can we really trust Uber to make a balanced decision with facial recognition?" he asked.

Farrer further questioned why Uber and TfL don't show drivers the evidence that's being relied upon to deactivate their accounts - to give them a chance to challenge it via an appeal on the actual substance of the decision.

"IMHO this all comes down to tech governance," he added. "I don't doubt that Microsoft facial recognition is a powerful and mostly accurate tool. But the governance of this tech must be intelligent and responsible. Microsoft are smart enough themselves to acknowledge this as a limitation.

"The prospect of Uber pressured into surveillance tech as a price of keeping their licence... and a 94% BAME workforce with no worker rights protection from unfair dismissal is a recipe for disaster!"

The latest pressure on Uber's business processes follows hard on the heels of a major win for Farrer and other former Uber drivers and labor rights activists after years of litigation over the company's bogus claim that drivers are "self employed", rather than workers under U.K. law.

On Tuesday Uber responded to last month's Supreme Court quashing of its appeal, saying it would now treat drivers as workers in the market - expanding the benefits it provides.

However, the litigants immediately pointed out that Uber's "deal" ignored the Supreme Court's assertion that working time should be calculated from when a driver logs onto the Uber app. Instead Uber said it would calculate working time entitlements from when a driver accepts a job - meaning it's still trying to avoid paying drivers for time spent waiting for a fare.

The ADCU therefore estimates that Uber's "offer" underpays drivers by between 40% and 50% of what they are legally entitled to - and has said it will continue its legal fight to get a fair deal for Uber drivers.
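As a rough illustration of how that gap opens up (the hours here are hypothetical, not taken from the ruling): a driver logged into the app for 10 hours who spends four of them waiting between fares would, on Uber's approach, accrue working-time entitlements on only the six engaged hours - 40% less counted time than if the clock ran from log-on, as the Supreme Court indicated it should.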

At an EU level, where regional lawmakers are looking at how to improve conditions for gig workers, the tech giant is now pushing for an employment law carve out for platform work - and has been accused of trying to lower legal standards for workers.

Understanding Europe's big push to rewrite the digital rulebook

In other Uber-related news this month, a court in the Netherlands ordered the company to hand over more of the data it holds on drivers, following another ADCU+WIE challenge - although the court rejected the majority of the drivers' requests for more data. Notably, though, it did not object to drivers seeking to use data rights established under EU law to obtain information collectively in order to further their ability to collectively bargain against a platform - paving the way for more (and more carefully worded) challenges as Farrer spins up his data trust for workers.

The applicants also sought to probe Uber's use of algorithms for fraud-based driver terminations under an article of EU data protection law that provides for a right not to be subject to solely automated decisions in instances where there is a legal or significant effect. In that case the court accepted Uber's explanation at face value that fraud-related terminations had been investigated by a human team - and that the decisions to terminate involved meaningful human decisions.

But the issue of meaningful human intervention/oversight of platforms' algorithmic suggestions/decisions is shaping up to be a key battleground in the fight to regulate the human impacts of, and societal imbalances flowing from, powerful platforms which have both a god-like view of users' data and an allergy to complete transparency.

The latest challenge to Uber's use of facial recognition-linked terminations shows that interrogation of the limits and legality of its automated decisions is far from over - really, this work is just getting started.

Uber's use of geolocation for driver suspensions is also facing legal challenge.

Pan-EU legislation now being negotiated by the bloc's institutions also aims to increase platform transparency requirements - with the prospect of added layers of regulatory oversight and even algorithmic audits coming down the pipe for platforms in the near future.

Last week the same Amsterdam court that ruled on the Uber cases also ordered India-based ride-hailing company Ola to disclose data about its facial-recognition-based "Guardian" system - aka its equivalent to Uber's Real-Time ID system. The court said Ola must provide applicants with a wider range of data than it currently does - including disclosing a "fraud probability profile" it maintains on drivers and data within a "Guardian" surveillance system it operates.

Farrer says he's thus confident that workers will get transparency - "one way or another". And after years fighting Uber through U.K. courts over its treatment of workers, his tenacity in pursuit of rebalancing platform power cannot be in doubt.

Dutch court rejects Uber drivers' 'robo-firing' charge but tells Ola to explain algo-deductions

Uber says it will treat UK drivers as workers in wake of Supreme Court ruling

Uber loses gig workers rights challenge in UK Supreme Court

External Content
Source RSS or Atom Feed
Feed Location http://feeds.feedburner.com/TechCrunch/
Feed Title Crunch Hype
Feed Link https://techncruncher.blogspot.com/