AI journalism is getting harder to tell from the old-fashioned, human-generated kind | Ian Tucker
I rumbled a chatbot ruse - but as the tech improves and news outlets begin to adopt it, will it be so easy to spot next time?
A couple of weeks ago I tweeted a call-out for freelance journalists to pitch me feature ideas for the science and technology section of the Observer's New Review. Unsurprisingly, given the headlines, fears and interest surrounding large language model (LLM) chatbots such as ChatGPT, many of the suggestions that flooded in focused on artificial intelligence - including a pitch about how it is being used to predict deforestation in the Amazon.
One submission, however, from an engineering student who had posted a couple of articles on Medium, seemed to be riding the artificial intelligence wave with more chutzpah. He offered three feature ideas - pitches on innovative agriculture, data storage and the therapeutic potential of VR. While coherent, the pitches had a bland authority about them, a repetitive paragraph structure and upbeat endings - which, if you've been toying with ChatGPT or reading about the latest mishaps of Google's chatbot Bard, are telltale hints of chatbot-generated content.