
Transport For London Adds AI To Its Cameras To Bust Fare Jumpers, Bike Riders

by Tim Cushing, from Techdirt

London is covered with cameras. They're everywhere people are. That includes the London Underground, the city's massive subway system.

But these days, it's not enough to have thousands of unblinking, passive eyes watching Londoners go about their days. AI is the special sauce. Facial recognition is pretty much a given in London. Added to the mix in the Underground is another layer of AI, this one trained to search for weapons and flag certain commuter behavior.

Transport for London is trying to pick a winner in its AI race. As Matt Burgess reports for Wired, public records show the Tube operator tested 11 different algorithms on people using the Willesden Green Tube station. The initial test ran a little less than a year (October 2022 to September 2023) but generated plenty of hits.

The proof of concept trial is the first time the transport body has combined AI and live video footage to generate alerts that are sent to frontline staff. More than 44,000 alerts were issued during the test, with 19,000 being delivered to station staff in real time.

That's a pretty big number: more than 125 alerts a day over the roughly 350-day trial. And it means Transport for London is likely going to need to pick up some more AI just to sift through the stuff generated by its other in-camera, real-time AI.

Or maybe it will just need to tweak the filters. The tested algorithms behaved like digital shotguns loaded with bird shot, hitting everything in sight but rarely making a significant impact.

Three documents provided to WIRED detail how AI models were used to detect wheelchairs, prams, vaping, people accessing unauthorized areas, or putting themselves in danger by getting close to the edge of the train platforms.

Given these parameters, which appear to allow more than they restrict, it's unsurprising the AI trial run rang up some false positives.

The documents, which are partially redacted, also show how the AI made errors during the trial, such as flagging children who were following their parents through ticket barriers as potential fare dodgers; or not being able to tell the difference between a folding bike and a non-folding bike.

Flagging non-criminals as criminals is always a possibility. And always a problem. Misidentifying a bike? Not so much. According to the documents, the only reason to identify bikes is to keep non-folding bikes off trains, where they're against Transport for London policy.

What's a bit more worrying is something partially redacted in the documents obtained by Wired. It appears Transport for London is also interested in detecting something as vague as "aggression" via AI. AI still struggles to reliably detect gunshots, so it's a bit much to expect it to reliably detect the sort of behavior that may result in gunshots (or other acts of violence).

What wasn't redacted in the documents shows a mixture of common sense and wishful thinking.

The TfL report on the trial says it wanted to include "acts of aggression" but found it was "unable to successfully detect" them. It adds that there was a lack of training data; other reasons for not including acts of aggression were blacked out. Instead, the system issued an alert when someone raised their arms, described as a "common behaviour linked to acts of aggression" in the documents.
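To see how blunt that proxy is, here's a minimal sketch of the kind of check it implies, assuming generic pose-estimation output. Every name, field, and threshold below is invented for illustration; the documents don't describe TfL's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Keypoint:
    x: float
    y: float          # image coordinates: y increases downward
    confidence: float

def arms_raised(pose: dict[str, Keypoint], min_conf: float = 0.5) -> bool:
    """Flag a person whose wrists are detected above their shoulders."""
    needed = ("left_wrist", "right_wrist", "left_shoulder", "right_shoulder")
    if any(k not in pose or pose[k].confidence < min_conf for k in needed):
        return False  # not enough reliable keypoints to make a call
    # "Above" in image coordinates means a smaller y value.
    return (pose["left_wrist"].y < pose["left_shoulder"].y and
            pose["right_wrist"].y < pose["right_shoulder"].y)
```

A commuter stretching or hailing a friend passes this check just as readily as someone squaring up for a fight.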

While it's good to see TfL recognizes the tech just isn't capable of reliably performing this task, it's more than a little worrying that an "alert" can be issued when someone does something that might just be an expression of frustration or an attempt to make someone else aware of their presence in a crowded Tube station.

Elsewhere, the documents show TfL engaging in mission creep as the trials went on. Originally, all faces were blurred and data was only held for two weeks. But six months into the trial, TfL decided it wanted to unblur certain faces, purely for pecuniary reasons: to identify fare-dodgers. However, that alteration soon proved overwhelming and Transport started sending these alerts (which apparently involved kids following parents) to the AI spam folder.

"However, due to the large number of daily alerts (in some days over 300) and the high accuracy in detections, we configured the system to auto-acknowledge the alerts," the documents say.
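As for what "auto-acknowledge" means in practice, here's a deliberately simplified sketch. The names and structure are invented, and the documents don't describe the actual configuration; the gist is that alerts in the overwhelmed category get marked as handled without a human ever seeing them.

```python
# Hypothetical illustration only; not TfL's system.
AUTO_ACKNOWLEDGE = {"fare_evasion"}  # assumed category, per the trial documents

def route_alert(category: str, alert: dict) -> str:
    if category in AUTO_ACKNOWLEDGE:
        return "auto-acknowledged"   # marked handled; no staff notified
    return "sent to station staff"   # real-time push, as in the trial
```

The upshot: the alerts kept being generated and counted, but nobody was expected to act on them.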

And that's the real reason for the implementation of AI in Tube stations. While officials and legislators might say lofty things about knife crime or terrorism, the real reason Transport for London wants AI assistance is to claw back some of the "millions of pounds lost" to Tube riders who've jumped the turnstile.

So, that's the point of this investment of time and public money: fare-dodging and bike-toting. Sure, the AI may eventually prove useful in detecting more serious crime, but the initial push is little more than an effort to punish the most petty of criminal acts. That hardly seems worth it, especially when it further nudges the London needle towards "all-consuming surveillance state." If these tiny crimes are worth this much attention, maybe TfL should just add some staffing and see how long it takes for everyone to realize pursuing the most insignificant of scofflaws is a ridiculous waste of time and money.
