
Judge Bans Use of AI-enhanced Video as Trial Evidence

by hubie from SoylentNews on (#6KVT9)
'AI-enhanced' Video Evidence Got Rejected in a Murder Case Because That's Not Actually a Thing

upstart writes:

The AI hype cycle has dramatically distorted views of what's possible with image upscalers:

A judge in Washington state has blocked video evidence that's been "AI-enhanced" from being submitted in a triple murder trial. And that's a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.
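To see why an upscaler can't conjure hidden detail, consider what downsampling does to a picture: it averages information away, and no filter can average it back. Here's a minimal sketch in Python, using Pillow and NumPy with an invented striped test image (nothing to do with the actual case footage), that makes the point:

    # Sketch: downsampling destroys detail that no upscaler can recover.
    # The striped test pattern is an assumption for illustration only.
    import numpy as np
    from PIL import Image

    # A 64x64 image of 1-pixel-wide alternating stripes (fine detail).
    detail = np.zeros((64, 64), dtype=np.uint8)
    detail[:, ::2] = 255
    original = Image.fromarray(detail)

    # Shrink to 8x8: each output pixel averages an 8x8 block of stripes,
    # collapsing black and white into uniform gray. The detail is gone.
    small = original.resize((8, 8), Image.Resampling.BOX)

    # "Enhance" back to 64x64 with bicubic interpolation. The upscaler
    # can only smooth between the gray pixels it has; it cannot know
    # there were ever stripes, so it cannot restore them.
    restored = small.resize((64, 64), Image.Resampling.BICUBIC)

    diff = np.abs(detail.astype(int) - np.asarray(restored, dtype=int))
    print("mean per-pixel error after 'enhancement':", diff.mean())  # ~127, near-total loss

An AI upscaler differs only in that it fills the gap with statistically plausible texture learned from its training data instead of smooth interpolation. Either way, the new pixels are guesses, not recovered evidence.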

Judge Leroy McCullough in King County, Washington, wrote in a new ruling that the AI tech used "opaque methods to represent what the AI model 'thinks' should be shown," according to a new report from NBC News Tuesday. And that's a refreshing bit of clarity about what's happening with these AI tools in a world of AI hype.

"This Court finds that admission of this Al-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable-process used by the AI model," McCullough wrote.

[...] The rise of products labeled as AI has created a lot of confusion for the average person about what these tools can really accomplish. Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that's simply not what's happening under the hood. LLMs are essentially just predicting the next word they should spit out to sound like a plausible human. But because they do a pretty good job of sounding like humans, many users believe they're doing something more sophisticated than a magic trick.
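Stripped of the neural network, next-word prediction is a simple loop: score the candidate continuations, emit the likeliest one, repeat. A toy bigram model in Python (with a made-up corpus; real LLMs use enormous networks over subword tokens, but the generation loop is the same idea) shows the shape of it:

    # Sketch: next-token prediction with a toy bigram model.
    # The tiny corpus is an invented example for illustration.
    from collections import Counter, defaultdict

    corpus = ("the court blocked the video evidence because "
              "the video evidence was enhanced by an opaque model").split()

    # Count how often each word follows each word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # Generate by always emitting the most frequent continuation
    # (greedy decoding), one word at a time.
    word, output = "the", ["the"]
    for _ in range(6):
        if not following[word]:
            break
        word = following[word].most_common(1)[0][0]
        output.append(word)

    print(" ".join(output))  # fluent-sounding, but just frequency lookups

The output sounds grammatical because it recycles frequent word pairs, not because anything in the loop understands what the words mean.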

And that seems like the reality we're going to live with as long as billions of dollars are getting poured into AI companies. Plenty of people who should know better believe there's something profound happening behind the curtain and are quick to blame "bias" or overly strict guardrails. But when you dig a little deeper, you discover these so-called hallucinations aren't some mysterious force enacted by people who are too woke, or whatever. They're simply a product of this AI tech not being very good at its job.

Read more of this story at SoylentNews.
