Washington Post Ingeniously Leverages ‘AI’ To Undermine History And Make Search Less Useful

by Karl Bode, from Techdirt (#6SVAF)

While "AI" (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, badly automate low-quality, ultra-low-effort, SEO-chasing clickbait, and rush undercooked solutions to nonexistent problems to market under the pretense of progress.

For example, The Washington Post has found another innovative way to leverage large language models (LLMs) to somehow make its product worse. Over at Bluesky, editor Tom Scocca noticed that the news outlet got rid of its traditional search tech, and appears to have replaced it with a new AI assistant that may or may not provide you with useful or relevant information:


So now (as of this writing) if you try to search for a subject, an LLM's sloppy interpretation of the subject is the first thing you see, followed by a list of stories you can't rank by date:


If you ask the AI assistant to sort the subject matter articles by date, it just... fails to do that. Which seems like a fairly rudimentary thing a next-generation "AI assistant" should be able to do.

Again, the environmental and financial sustainability of "AI" aside (a pretty big aside), there are numerous areas where automation could be helpful to journalism, whether it's editing, digging through court documents, advising on story structure, hunting down patterns missed by human brains, transcription, or searching vast public record archives.

But the brunchlords in charge of these outlets (in the Washington Post's case, a former Rupert Murdoch ally caught up in a phone hacking scandal who failed upward into a position of prominence) see AI as a magic way to cut corners, reducing the volume of human labor required to field a useful and insightful product. Many also genuinely (and incorrectly) seem to think AI has deep awareness akin to sentience, because they've bought into the hype being peddled by snake oil salesmen.

As a result we've seen an endless number of scandals where companies use LLMs to create entirely fake journalists and hollow journalism, usually without informing their staff or their readership. When they're caught (as we saw with CNET, Gannett, or Sports Illustrated), they usually pretend to be concerned, throw their AI partner under the bus, then get right back to doing the same thing.

Modern corporations and some partisans also have a vested interest in undermining not only informed consensus, but our collective history. You now routinely see entire debates and stories simply disappear from the internet in the blink of an eye, thanks to executives who either don't value history, or realize that an informed understanding of it might let you actually learn something from repeated experience.

That's not to say this was the Washington Post's thinking in this case, but it's certainly not absent from the logic of the kind of folks falling upward into positions of influence across sagging American establishment media.
