‘AI’ Journalism Continues To Be A Lazy, Error-Prone Mess

While recent evolutions in "AI" have netted some profoundly interesting advancements in creativity and productivity, its early implementation in journalism has been a sloppy mess thanks to some decidedly human-based problems: namely greed and laziness.
If you remember, the cheapskates over at Red Ventures implemented AI over at CNET without telling anybody. The result: articles rife with accuracy problems and plagiarism. Of the 77 articles published, more than half had significant errors. It ultimately cost them more to have human editors come in and fix the mistakes than the AI had actually saved them. After backlash, Red Ventures paused the effort.
Until last week, when another Red Ventures website, the financial news outlet Bankrate, started, once again, publishing AI-generated articles. And, once again, the articles were filled with all kinds of basic errors, like misstating the median income or median home prices of the markets it was writing about. And, once again, humans failed to adequately fact-check any of it before publication:
"With so many eyes on the company's use of AI, you would expect that these first few new AI articles - at the very least - would be thoroughly scrutinized internally before publication. Instead, a basic examination reveals that the company's AI is still making rudimentary mistakes, and that its human staff, never mind the executives pushing the use of AI, are still not catching them before they end up in front of unsuspecting readers."
When contacted, Bankrate deleted the article but defended the AI, blaming the errors on an outdated dataset (which still would have been an AI and editing error):
"Overall, it feels like one more installment in a familiar pattern: publishers push their newsrooms to post hastily AI-generated articles with no serious fact-checking, in a bid to attract readers from Google without making sure they're being provided with accurate information. Called out for easily avoidable mistakes, the company mumbles an excuse, waits for the outrage to die down, and then tries again."
Like so many problems with modern tech, the problem is often the humans, not the technology.
You've probably noticed that U.S. journalism was already a hot mess. We can clearly monetize everything from Nazis to foot fetishes, yet somehow can't figure out the kind of innovative funding models needed to keep the media industry's lights on or pay journalists a living wage. Not that we've tried very hard.
It's because VCs and slash-and-burn hedge fund bros are trying to make a quick buck on a not particularly profitable public service. Conversations about publicly funding journalism are basically a nonstarter in the U.S., where even NPR and its tiny government contributions are mindlessly demonized by rabid partisans who increasingly view truth, reality, academia, expertise, and journalism as mortal enemies.
Greedy idiots, not journalists, are running most U.S. newsrooms. And their first impulse is not to use AI to create better journalism, but to use AI to cheaply mass-produce clickbait and gibberish, and to use it as a blunt weapon against already comically underpaid labor. The end result is going to create more media distrust and an even worse signal-to-noise ratio in a country already drowning in bullshit and propaganda.
It's a shame, because the underlying chatbot and AI technology could very well be a useful tool both to create real journalism and to help people consume it. But real journalism isn't what most media owners are interested in, and this dynamic likely won't be fixed until the underlying greed and hubris are addressed. Which, if you've looked around, doesn't appear to be a top priority anytime soon.