ACLU Points Out More Problems With AI-Generated Police Reports

by Tim Cushing from Techdirt on (#6T2P2)

It often seems that when people have no good ideas or, indeed, any ideas at all, the next thing out of their mouths is "maybe some AI?" It's not that AI can't be useful. It's that so many use cases are less than ideal.

Enter Axon, formerly Taser, which has moved from selling modified cattle prods to cops to selling them body cameras. The shift makes sense. Policy makers want to believe body cameras will create more accountability in police forces that have long resisted this. Cops don't mind this push because it's far more likely body cam footage will deliver criminal convictions than it will force them to behave better when wielding the force of law.

Axon wants to keep cops hooked on body cams. It hands them out like desktop printers: cheap entry costs paired with far more expensive, long-term contractual obligations. Buy a body cam from Axon on the cheap and expect to pay fees for access and storage for years to come. Now, there's another bit of digital witchery on top of the printer cartridge-esque access fees: AI assistance for police reports.

Theoretically, it's a win. Cops will spend less time bogged down in paperwork and more time patrolling the streets. In reality, it's something else entirely: the abdication of responsibility to algorithms and a little more space separating cops from accountability.

AI can't be relied on to recap news items coherently. It has already shown it's capable of "hallucinating" narratives due to the data it relies on or has been trained on. There's no reason to believe that, at this point, AI can reliably handle a task cops have been doing for years: writing up arrest/interaction reports.

The problem here is that a bogus AI-generated report causes far more real-world pain than a news agency enduring momentary public shaming or a lawyer being chastised by a judge. People can lose their rights and their actual freedom if AI concocts a narrative that supports the actions taken by officers. Even at its best, AI should not be allowed to determine whether or not people have access to their rights or their literal freedom.

The ACLU, following up on an earlier report criticizing adoption of AI-assisted police paperwork, has released its own take on the tech proposed and pushed by companies like Axon. Unsurprisingly, it's not in favor of abdicating human rights to AI armchair quarterbacking.

"Police reports play a crucial role in our justice system," ACLU speech, privacy and technology senior policy analyst and report author Jay Stanley wrote. Concerns include the unreliability and biased nature of AI, evidentiary and memory issues when officers resort to this technology, and issues around transparency.

"In the end, we do not think police departments should use this technology," Stanley concluded.

There's more in this article from The Register than just a summary of the ACLU's comprehensive report [PDF]. It also features input from people who've actually done this sort of work at the ground level, and who align themselves with the ACLU's criticism rather than with the government agencies they worked for. This is from Brandon Vigliarolo, who wrote this op-ed for El Reg:

In my time as a Military Policeman in the US Army, I spent plenty of time on shifts writing boring, formulaic, and necessarily granular reports on incidents, and it was easily the worst part of my job. I can definitely sympathize with police in the civilian world, who deal with far worse - and more frequent - crimes than I had to address on small bases in South Korea.

That said, I've also had a chance to play with modern AI and report on many of its shortcomings, and the ACLU seems to definitely be on to something in Stanley's report. After all, if we can't even trust AI to write something as legally low-stakes as news or a bug report, how can we trust it to do decent police work?

The answer is we can't. We can't do it now. And there's a solid chance we can't do it ever.

Both Axon and the law enforcement agencies choosing to utilize this tech will claim human backstops will prevent AI from hallucinating someone into jail or manufacturing justification for civil rights violations. But that's obviously not true. And that's been confirmed by Axon itself, whose future business relies on continued uptake of its latest tech offering.

In an ideal world, Stanley added, police would be carefully reviewing AI-generated drafts, but that very well may not be the case. The report notes that Draft One includes a feature that can intentionally insert "silly sentences" into AI-produced drafts as a test to ensure officers are thoroughly reviewing and revising the drafts. However, Axon's CEO mentioned in a video about Draft One that most agencies are choosing not to enable this feature.

This leading indicator suggests cop shops are looking for a cheap way to relieve the paperwork burden on officers, presumably to free them up to do the more important work of law enforcement. The lower cost/burden seems to be the only focus, though. Even when given something as simple as a single-click option to ensure better human backstopping of AI-generated police reports, agencies are opting out because, apparently, it might mean some reports will be rejected and/or the thin veil of plausible deniability might be pierced.
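
For what it's worth, the "silly sentences" option is essentially a canary check, a long-standing trick for verifying that a human actually read a document. Here's a minimal sketch of how such a mechanism might work; the function names and canary phrases are hypothetical, and this is in no way Axon's actual implementation:

import random

# Hypothetical canary phrases; Axon's actual "silly sentences" are not public.
CANARIES = [
    "The suspect then recited the entire plot of Jurassic Park.",
    "A flock of flamingos observed the traffic stop.",
]

def insert_canary(draft: str) -> tuple[str, str]:
    """Drop one silly sentence into the AI draft so an unread
    copy-paste of the draft can be detected later."""
    canary = random.choice(CANARIES)
    paragraphs = draft.split("\n\n")
    pos = random.randrange(len(paragraphs) + 1)
    paragraphs.insert(pos, canary)
    return "\n\n".join(paragraphs), canary

def officer_reviewed(submitted: str, canary: str) -> bool:
    """If the canary survives into the submitted report, the officer
    signed off without actually reading the draft."""
    return canary not in submitted

draft, canary = insert_canary("Officer responded to a noise complaint...")
print(officer_reviewed(draft, canary))  # False: the unedited draft still contains the canary

The whole point of a check like this is that a canary surviving into the final report is proof the "review" never happened, which is exactly the safeguard most agencies are declining to turn on.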

That's part of the bargain. If a robot writes a report, officers can plausibly claim discrepancies between reports and recordings aren't their fault. But that's not even the only problem. As the ACLU report notes, there's a chance AI-generated reports will decide something "seen" or "heard" in recordings supports officers' actions, even if human review of the same footage would see clear rights violations.

The other problem is inadvertent confirmation bias. In an ideal world, after any arrest or interaction that has resulted in questionable force deployment - especially when cops kill someone - officers would need to give statements before they've had a chance to review recordings. This would help eliminate post facto narratives that erase contradictory statements and allow officers to agree upon an exonerative story. Allowing AI to craft reports from uploaded footage undercuts this necessary time-and-distance factor, giving cops' cameras the chance to tell the story before the cops have even come up with their own.

Now, it might seem that having the camera tell the story first would be better. But I can guarantee you that if the AI-generated report doesn't agree with the officer's account in disputed situations, the AI-generated report will be kicked to the curb. And it works the other way, too.

Even the early adopters of body cams found a way to make this so-called "accountability" tech work for them. When the cameras weren't being turned on or off to suit narrative needs, cops were attacking compliant arrestees while yelling things like "stop resisting" or claiming the suspect was trying to grab one of their weapons. The subjective angle, coupled with extremely subjective statements in the recordings, was leveraged to provide justification for any level of force deployed. AI is incapable of separating cop pantomime from what's captured on tape, which means all cops have to do to talk a bot into backing their play is say a bunch of stuff that sounds like probable cause while recording an arrest or search.

We already know most law enforcement agencies rarely proactively review body cam footage. And they're even less likely to review reports and question officers if things look a bit off. Most agencies don't have the personnel to handle proactive reviews, even if they have the desire to engage in better oversight. And an even larger percentage lack the desire to police their police officers, which means there will never be enough people in place to check the work (and paperwork) of law enforcers.

Adding AI won't change this equation. It will just make direct oversight that much simpler to abandon. Cops won't be held accountable because they can always blame discrepancies on the algorithm. And the tech will encourage more rights violations because it adds another layer of deniability officers and their supervisors can deploy when making statements in state courts, federal courts, or the least-effective court of all, the court of public opinion.

These are all reasons accountability-focused legislators, activists, and citizens should oppose a shift to AI-enhanced police reports. And they're the same reasons that will encourage rapid adoption of this tech by any law enforcement agency that can afford it.
