Once Again, Algorithms Can't Tell The Difference Between 'Bad Stuff' And 'Reporting About Bad Stuff'

We've discussed many times just how silly it is to expect internet platforms to actually do a good job of moderating their own platforms. Can they do better? Yes, absolutely. Should they put more resources towards it? For the most part, yes. But there seems to be this weird belief among many -- often people who don't like or trust the platforms -- that if only they "nerded harder" they could magically smart their way to better content moderation algorithms. And, in many cases, they're demanding such filters be put in place and threatening criminal liability for failing to magically block the "right" content.
This is all silly, because so much of this stuff involves understanding nuance and context. And algorithms still suck at context. For many years, we've pointed to the example of YouTube shutting down the account of a human rights group documenting war crimes in Syria, as part of demands to pull down "terrorist propaganda." You see, "terrorist propaganda" and "documenting war crimes" can look awfully similar. Indeed, the footage may be exactly the same. So how can you teach a computer to recognize which one is which?
There have been many similar examples over the years, and here's another good one. The Atlantic is reporting that, for a period of time, YouTube removed a video that The Atlantic had posted of white nationalist Richard Spencer addressing a crowd with "Hail, Trump." You remember the video. It made all the rounds. It doesn't need to be seen again. But it's still troubling that YouTube removed it, claiming that it was "borderline" hate speech.
And, sure, you can understand why a first-pass look at the video might have someone think that. It's someone rallying a bunch of white nationalists and giving a pretty strong wink-and-a-nod towards the Nazis. But it was being done in the context of reporting. And YouTube (whether by algorithm, human, or some combination of both) failed to comprehend that context.
Reporting on "bad stuff" is kind of indistinguishable from just promoting "bad stuff."
And sometimes, reporting on bad stuff and bad people is... kind of important. But if we keep pushing towards a world where platforms are ordered to censor at the drop of a hat if anything offensive shows up, we're going to lose out on a lot of important reporting as well. And, on top of that, we lose out on a lot of people countering that speech, and responding to it, mocking it and diminishing its power as well.
So, yes, I can understand the kneejerk reaction that "bad stuff" doesn't belong online. But we should be at least a bit cautious in demanding that it all disappear. Because it's going to remain close to impossible to easily determine the difference between bad stuff and reporting on that bad stuff. And we probably want to keep reporting on bad stuff.