Please don’t get your news from AI chatbots
This is your periodic reminder that AI-powered chatbots still make things up and lie with all the confidence of a GPS system telling you that the shortest way home is to drive through the lake.
My reminder comes courtesy of Nieman Lab, which ran an experiment to see if ChatGPT would provide correct links to articles from news publications it pays millions of dollars to. It turns out that ChatGPT does not. Instead, it confidently makes up entire URLs, a phenomenon that the AI industry calls "hallucinating," a term that seems more apt for a real person high on their own bullshit.
Nieman Lab's Andrew Deck asked the service to provide links to high-profile, exclusive stories published by 10 publishers that OpenAI has struck deals worth millions of dollars with. These included the Associated Press, The Wall Street Journal, the Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox, and Politico. In response, ChatGPT spat back made-up URLs that led to 404 error pages, because the pages simply did not exist. In other words, the system was working exactly as designed: predicting the most likely version of a story's URL instead of actually citing the correct one. Nieman Lab ran a similar experiment with a single publication - Business Insider - earlier this month and got the same result.
An OpenAI spokesperson told Nieman Lab that the company was still building "an experience that blends conversational capabilities with their latest news content, ensuring proper attribution and linking to source material - an enhanced experience still in development and not yet available in ChatGPT." But they declined to explain the fake URLs.
We don't know when this new experience will be available or how reliable it will be. Despite this, news publishers continue to feed years of journalism into OpenAI's gaping maw in exchange for cold, hard cash, because the journalism industry has consistently sucked at figuring out how to make money without selling its soul to tech companies. Meanwhile, AI companies are chowing down on content published by anyone who hasn't signed these Faustian bargains and using it to train their models anyway. Mustafa Suleyman, Microsoft's AI head, recently called anything published on the internet "freeware" that is fair game for training AI models. Microsoft was valued at $3.36 trillion at the time I wrote this.
There's a lesson here: If ChatGPT is making up URLs, it's also making up facts. That's how generative AI works - at its core, the technology is a fancier version of autocomplete, simply guessing the next plausible word in a sequence. It doesn't "understand" what you say, even though it acts like it does. Recently, I tried getting our leading chatbots to help me solve the New York Times Spelling Bee and watched them crash and burn.
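To see why "fancy autocomplete" produces plausible-but-fake URLs, here's a deliberately toy sketch - a simple bigram model trained on a few made-up example strings (not how ChatGPT actually works, and none of these URLs are real). It just emits whichever word most often followed the previous one in its training text, with no notion of whether the result points anywhere:

```python
from collections import Counter, defaultdict

# Toy training text with hypothetical, nonexistent URLs.
training_text = (
    "the article is at example.com/news/2024/big-story "
    "the article is at example.com/news/2024/other-story "
    "the article is at example.com/blog/2023/old-story"
)

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt_word, length=4):
    """Greedily extend a sequence with the statistically likeliest next word."""
    out = [prompt_word]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
# The model emits whatever continuation was most common in training,
# whether or not the resulting "URL" actually exists.
```

The model happily completes "the article is at" with a URL-shaped string, because URL-shaped strings are what it saw in that position. Plausibility, not truth, is the only thing it optimizes for - which is exactly the failure mode Nieman Lab observed.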
If generative AI can't even solve the Spelling Bee, you shouldn't use it to get your facts.