No, The Internet Hasn’t Gotten Worse: Just Your Outlook
Ah, the good old days of the internet - a utopian paradise where everyone was kind, respectful, and definitely not arguing about Hitler. Or was it? A recent study published in Nature has some surprising findings that might just shatter your rose-tinted view of a past internet that never actually existed. Brace yourself for a shocking revelation: the internet has always been a bit of a dumpster fire.
The study in question has the compelling title of "Persistent interaction patterns across social media platforms and over time." Caitlin Dewey summarizes it more simply as "actually, the internet's always been this bad."
There is a tendency to assume that everything is getting progressively worse, that everything is falling apart in a way that is uniquely new. And yet history keeps telling us that it's not true. Violent crime rates? They're hitting historic lows, despite what you may have heard. The wave of shoplifting? Probably didn't happen.
And how about the internet? Is the internet awash in hate, disinfo, and toxicity way more than in the good old days?
Well, nope.
Not according to the study. Toxicity certainly exists, but it's no worse than in the past.
The researchers went deep:
To obtain a comprehensive picture of online social media conversations, we analysed a dataset of about 500 million comments from Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat and YouTube, covering diverse topics and spanning over three decades.
Three decades, 500 million comments, eight platforms. Seems like a good place to start.
The team used Google's Perspective API to classify toxicity. Some may quibble with this, but the Perspective API has a history of being pretty reliable. Nothing is perfect, but when dealing with this much data, it seems like a reasonable approach. On top of that, the researchers spot-checked the results as well.
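For the curious, the classification step looks roughly like this. Below is a minimal Python sketch against the public Perspective API (Comment Analyzer) endpoint; the API key is a placeholder, and the 0.6 toxicity cutoff is illustrative, not a claim about the study's exact threshold.

```python
# Minimal sketch of scoring a comment with Google's Perspective API,
# the classifier the study relied on. Requires a Google Cloud API key.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the Perspective TOXICITY summary score (0.0 to 1.0)."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative cutoff: treat scores above 0.6 as "toxic".
is_toxic = toxicity_score("Have a lovely day!") > 0.6
```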
The researchers found: Godwin's Law is legit. If you'll recall, the original formulation is: "As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1." Godwin himself admits he phrased it in statistical language as a joke, to make it seem more scientific. And the researchers determined that, well, yeah, pretty much:
The toxicity of threads follows a similar pattern. To understand the association between the size and toxicity of a conversation, we start by grouping conversations according to their length to analyse their structural differences. The grouping is implemented by means of logarithmic binning (see the 'Logarithmic binning and conversation size' section of the Methods) and the evolution of the average fraction of toxic comments in threads versus the thread size intervals is reported in Fig. 2. Notably, the resulting trends are almost all increasing, showing that, independently of the platform and topic, the longer the conversation, the more toxic it tends to be.
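If "logarithmic binning" sounds opaque, here's the idea in a dozen lines of Python. This is not the researchers' code; the thread data is made up, with each thread reduced to a hypothetical (size, fraction-of-toxic-comments) pair.

```python
# Sketch of the binning step: group threads by length into log-spaced
# bins, then average the toxic fraction within each bin.
import numpy as np

# Hypothetical threads: (number of comments, fraction flagged toxic)
threads = [(12, 0.05), (40, 0.08), (130, 0.11), (420, 0.15), (1500, 0.22)]
sizes = np.array([s for s, _ in threads], dtype=float)
tox = np.array([t for _, t in threads])

# Log-spaced bin edges from the shortest to the longest thread.
edges = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), num=4)
bin_idx = np.digitize(sizes, edges[1:-1])  # bin assignment per thread

for b in range(len(edges) - 1):
    in_bin = bin_idx == b
    if in_bin.any():
        print(f"threads of size {edges[b]:.0f}-{edges[b+1]:.0f}: "
              f"mean toxic fraction {tox[in_bin].mean():.3f}")
```

With the real data, the study reports that these per-bin averages climb with thread size on nearly every platform and topic.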
That said, the research also shows that when a thread gets toxic, that doesn't necessarily stop the conversation.
The common beliefs that (1) online interactions inevitably devolve into toxic exchanges over time and (2) once a conversation reaches a certain toxicity threshold, it would naturally conclude, are not modern notions but they were also prevalent in the early days of the World Wide Web. Assumption 2 aligns with the Perspective API's definition of toxic language, suggesting that increased toxicity reduces the likelihood of continued participation in a conversation. However, this observation should be reconsidered, as it is not only the peak levels of toxicity that might influence a conversation but, for example, also a consistent rate of toxic content. To test these common assumptions, we used a method similar to that used for measuring participation; we select sufficiently long threads, divide each of them into a fixed number of equal intervals, compute the fraction of toxic comments for each of these intervals, average it over all threads and plot the toxicity trend through the unfolding of the conversations. We find that the average toxicity level remains mostly stable throughout, without showing a distinctive increase around the final part of threads.
I would suggest that seems consistent with Techdirt's experience...
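The interval-averaging method described in that quote is also simple enough to sketch. Again, this is a toy illustration rather than the study's code: each hypothetical thread is a list of 0/1 flags (1 meaning the comment scored toxic), split into ten equal chunks.

```python
# Sketch of the toxicity-trend measurement: split each long thread into
# a fixed number of equal intervals, compute the toxic fraction per
# interval, then average the trend across threads.
import numpy as np

N_INTERVALS = 10

def toxicity_trend(flags: np.ndarray) -> np.ndarray:
    """Per-interval toxic fraction for one thread's 0/1 toxicity flags."""
    chunks = np.array_split(flags, N_INTERVALS)
    return np.array([chunk.mean() for chunk in chunks])

# Hypothetical threads of random length with random 0/1 toxicity flags.
rng = np.random.default_rng(0)
threads = [rng.integers(0, 2, size=int(rng.integers(50, 200)))
           for _ in range(100)]

avg_trend = np.mean([toxicity_trend(t) for t in threads], axis=0)
print(np.round(avg_trend, 3))  # a flat line here; the study finds real
                               # threads are similarly stable end to end
```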
But the study also found no particular evidence that conversations today are more toxic than in the past, looking across this historical data. The key factor, as always, is simply the length of the conversation. Average toxicity over time remains pretty constant, but toxicity increases with the length of any given conversation (though at different rates on different platforms).
As Dewey's report notes, the approaches of different platforms can matter, but it doesn't appear as if the world is somehow getting worse. It's just that people suck. And some platforms maybe attract more of the worst people.
That finding held true across seven of the eight platforms the team researched. By and large, those platforms also exhibited similar shares of toxic comments. On Facebook, for instance, roughly 4 to 6% of the sampled comments failed Perspective API's toxicity test, depending on the community/subject matter. On YouTube, by comparison, it's 4 to 7%. On Usenet, 5 to 9%.
Even infamously lawless, undermoderated communities like Gab and Voat didn't fall so far from the norm for more mainstream platforms: About 13% of Gab's comments were toxic, the researchers found, and between 10 and 19% were toxic on Voat.
There's something deeply unfashionable and counterintuitive about all of this. The suggestion that online platforms have not single-handedly poisoned public life is entirely out of step with the very political discourse the internet is said to have polluted.
Dewey also quotes one of the study's authors, Walter Quattrociocchi, pointing out that this isn't an argument for giving up moderating.
Quattrociocchi said it would be a mistake to assume his team's findings suggest that moderation policies or other platform dynamics don't matter: "they absolutely influence the visibility and spread of toxic content," he said. But if the root behaviors driving toxicity are "more deeply ingrained in human interaction," then effective moderation might involve both removing toxic content and "implementing larger strategies to encourage positive discourse," he added.
Interventions do matter, but the internet isn't inherently making people terrible. And, I guess that's a bit of good news these days?