Reddit Moderators Brace for a ChatGPT Spam Apocalypse
Reddit moderators say they already see an increase in spam and that the future will "require a lot of human labor." From a report: In December last year, the moderators of the popular r/AskHistorians Reddit forum noticed posts popping up that appeared to carry the hallmarks of AI-generated text. "They were pretty easy to spot," said Sarah Gilbert, one of the forum's moderators and a postdoctoral associate at Cornell University. "They're not in-depth, they're not comprehensive, and they often contain false information." The team quickly realized their little corner of the internet had become a target for ChatGPT-created content.

When ChatGPT launched last year, it set off a seemingly never-ending carousel of hype. According to evangelists, the tech behind ChatGPT may eradicate hundreds of millions of jobs, exhibit "sparks" of singularity-esque artificial general intelligence, and quite possibly destroy the world, but in a way that means you must buy it right now. The less glamorous impacts, like unleashing a tidal wave of AI-produced effluvium on the internet, haven't garnered the same attention so far.

The two-million-strong AskHistorians forum allows non-expert Redditors to submit questions about history topics and receive in-depth answers from historians. Recent popular posts have probed the hive mind on whether the stress of being "on time" is a modern concept; what a medieval scribe would've done if the monastery cat left an inky paw print on their vellum; and how Genghis Khan got fiber in his diet.

Shortly after ChatGPT launched, the forum was experiencing five to 10 ChatGPT posts per day, says Gilbert, which soon ramped up as more people found out about the tool. The frequency has tapered off now, which the team believes may be a consequence of how rigorously they've dealt with AI-produced content: even if the posts aren't being deleted for being written by ChatGPT, they tend to violate the sub's standards for quality.
Read more of this story at Slashdot.