Due to AI Fakes, the “Deep Doubt” Era is Here
Freeman writes:
https://arstechnica.com/information-technology/2024/09/due-to-ai-fakes-the-deep-doubt-era-is-here/
Given the flood of photorealistic AI-generated images washing over social media networks like X [arstechnica.com] and Facebook [404media.co] these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back [nytimes.com] decades (and doubts about analog media go back long before [wikipedia.org] that), easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.
[...] Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend [bu.edu] years ago, coining the term "liar's dividend" in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.
Doubt has been a political weapon since ancient times [populismstudies.org]. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.
[...] In April, a panel of federal judges [arstechnica.com] highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials.
[...] Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential "cultural singularity [fastcompany.com]," a threshold where truth and fiction in media become indistinguishable.
[...] "Deep doubt" is a new term, but it's not a new idea. The erosion of trust in online information caused by synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari spoke of [theguardian.com] an upcoming "information apocalypse" due to deepfakes and asked, "When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?"
[...] Throughout recorded history, historians and journalists have had to evaluate the reliability of sources [wm.edu] based on provenance, context, and the messenger's motives. For example, imagine a 17th-century parchment that apparently provides key evidence about a royal trial. To determine whether it's reliable, historians would evaluate its chain of custody and check whether other sources report the same information. They might also examine the historical context to see whether contemporary records mention the parchment's existence. That requirement has not magically changed in the age of generative AI.
Read more of this story at SoylentNews.