Article 6J1Z6 ‘AI’ Exposes Google News Quality Control Issues, Making Our Clickbait, Plagiarism, And Propaganda Problem Worse

by
Karl Bode
from Techdirt on (#6J1Z6)

Journalists have long used Google News to track news cycles. But for years users have documented a steady decline in product quality parallel to similar complaints about the quality of Google's broader search technology. Many stories and outlets are often no longer indexed, low quality clickbait and garbage are everywhere, and customization seems broken as Google shifted its priorities elsewhere.

Now the broader problem with Google News quality control has gotten worse with the rise of generative "AI" (half-baked large language models). AI-crafted clickbait, garbage, and plagiarized articles now dominate the Google News feed, reducing the already shaky service's utility even further:

"Google News is boosting sites that rip-off other outlets by using AI to rapidly churn out content, 404 Media has found. Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content making its way onto Google News."

As we've seen in the broader field of content moderation, moderating these massive systems at scale is no easy feat. That problem is compounded by the fact that companies like Google (which feebly justified more layoffs last week despite sitting on mountains of cash) would much rather spend time and resources on things that make them more money than ensure that existing programs and systems actually work as advertised.

But the impact of Google's cheap laziness is multi-fold. For one, sloppy moderation of Google News contributes to an increasingly lopsided signal-to-noise ratio, as a dwindling number of underfunded actual journalists try to out-compete automated bullshit and well-funded propaganda mills across a broken infotainment and engagement economy. It's already not a fair fight, and when a company like Google fails to invest in functional quality control, it actively makes the problem worse.

For example, many of these automated clickbait and plagiarism mills are getting the attention and funding that should be going to real journalism operating on shoestring budgets, as the folks at 404 Media (whose quality work ironically isn't even making it into the Google News feed) explore in detail. For its part, Google reps had this to say:

"Our focus when ranking content is on the quality of the content, rather than how it was produced. Automatically-generated content produced primarily for ranking purposes is considered spam, and we take action as appropriate under our policies."

Except they're clearly not doing a good job at any part of that, because the financial incentives of the engagement economy are broadly perverse: aligned toward cranking out as much bullshit as possible to maximize impressions and end-user engagement at scale, and against spending the money and time to ensure quality control at that same scale.

It's not entirely unlike problems we saw when AT&T would actively support (or turn a blind eye to) scammers and crammers on its telecom networks. AT&T made money from the volume of traffic regardless of whether the traffic was harmful, muting any financial incentive to do anything about it.

This isn't exclusively an AI problem (LLMs could be used to improve quality control). And it certainly isn't exclusively a Google problem. But it sure would be nice if Google took a more responsible lead on the issue before what's left of U.S. journalism drowns in a sea of automated garbage and engagement bait.
