Once Again: Expecting Social Media Companies To Police 'Bad' Stuff Is A Bad Idea

It's not clear how many times we're going to need to repeat this, but when people call for internet platforms to wave magic wands and get rid of the "bad" people, they may not like how things actually turn out. As you may have heard, last month Twitter rewrote its guidelines and promised that it would use those updated guidelines to kick off more "bad" people. Twitter, as a private company, can set up its service however it likes, but it was striking how many people were giddily awaiting yesterday, when the new rules were set to take effect. There was talk of how Twitter was magically about to become fun and nice again. The reality was a bit more mundane.
A few extremists, like the leaders of the nutty Britain First party in the UK, were barred, but lots of others, including "famous" white nationalists, were allowed to remain:
Prominent white nationalist Richard Spencer, for example, was not suspended. Nor was former grand wizard of the KKK David Duke - although Duke is reporting that some of his posts are hidden behind the "sensitive material" warning. Curiously enough, his "It's Ok To Be White" message appears in the header image, but is censored in his timeline behind a sensitive material warning.
But, of course, the takedowns also hit some accounts in ways many felt were clearly wrong. A key example: Egyptian journalist Wael Abbas, who has a long history of documenting human rights abuses.
Abbas is blogger-in-chief for the website Misr Digit@l, which posted about the suspension, saying it involved the deletion of "over 250,000 tweets. Dozens of thousands of pictures, videos and live streams from the middle of every crisis in Egypt with date stamp on them, reporting on people who got tortured, killed or missing. Live coverage of events as they happened in the street."
This kind of takedown is not unprecedented. Indeed, we've written multiple times about YouTube taking down the accounts of people documenting war crimes and human rights abuses, because these platforms have trouble distinguishing between promoting war crimes and documenting them.
What's really troubling about all of this, though, is that many people still insist Twitter should simply wave a magic wand to fix the problem, while simultaneously criticizing the company for leaving up some accounts and taking down others. It's easy to sit behind your laptop and insist it's "easy" to know which accounts are "good" and which are "bad," but the truth is that it's nearly impossible for a company to make such determinations without tons of false positives and false negatives. That's because there is no objective measure of "good" or "bad" to go on, and the scale of the problem is unfathomable to most users.
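To make that scale point concrete, here's a rough back-of-the-envelope sketch. The numbers below (daily post volume, violation rate, classifier accuracy) are illustrative assumptions, not Twitter's actual figures; the point is only that even a hypothetically very accurate moderation system makes an enormous number of wrong calls in absolute terms:

```python
# Back-of-the-envelope moderation math. All numbers are illustrative
# assumptions, not real platform figures.

posts_per_day = 500_000_000   # assumed daily post volume
bad_rate = 0.001              # assume 0.1% of posts actually violate the rules
accuracy = 0.99               # assume the system gets 99% of calls right

bad_posts = posts_per_day * bad_rate
good_posts = posts_per_day - bad_posts

false_negatives = bad_posts * (1 - accuracy)   # violations left up
false_positives = good_posts * (1 - accuracy)  # legitimate posts taken down

print(f"Violations missed per day:       {false_negatives:,.0f}")
print(f"Legitimate posts flagged per day: {false_positives:,.0f}")
# => roughly 5,000 missed violations and about 5 million wrongly flagged
#    posts every single day, even at 99% accuracy.
```

Under those assumed numbers, a system that is right 99% of the time still wrongly flags millions of legitimate posts daily while missing thousands of real violations, which is why "just take down the bad stuff" is not a workable instruction.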
Some think that the answer to all this is that the platforms need to "do better," but it still seems like a situation where people are expecting too much of the platforms and not understanding the difficulty of making these kinds of determinations. Yes, Twitter can manage its platform any way it wants, but people should be cautious about demanding that Twitter silence people (or, even worse, that it be legally required to do so), because you're not going to like many of the choices it makes, whether that's leaving up people you don't like or taking down people you do.
Instead, we really need to be thinking about better overall systems to encourage good behavior online, without assuming that the only possible thing that can be done is to have the platforms act as speech police. They're not good at it, and no amount of yelling at them is going to make them good at it.