The platforms suck at content moderation and demanding they do more won't make them better at it -- but there ARE concrete ways to improve moderation

by Cory Doctorow

Concentration in the tech sector has left us with just a few gigantic online platforms, and they have turned into playgrounds for some of the worst people on earth: Nazis, misogynists, grifters, ultranationalists, trolls, genocidal mobs and more. The platforms are so big, their moderation policies so screwed up, and their "engagement" algorithms so tuned to juice pageviews, that many of us are forced to choose between having a social life with the people we care about and being tormented by awful people. Even if you opt out of social media, you can't opt out of being terrorized by psychopathic trolls who have been poisoned by Alex Jones and the like.

The platforms have completely failed to deal with this problem, and it's getting worse. But the "solutions" proposed by many people I agree with on other issues are likely to make things worse, not better. Specifically, the platforms' inability to moderate bad speech will not be improved by making them do more of it -- it'll just make them more indiscriminate.

Remember, before the platforms knocked Alex Jones offline, they took down "Moroccan atheists, trans models, drag performers, indigenous women" and many others whose speech threatened and discomfited the rich and powerful, who were able to use the platforms' moderation policies to deny their adversaries access to online organizing and communications tools.

The Electronic Frontier Foundation's Jillian C York (an expert on moderation policies) and Corynne McSherry (EFF's legal director) have written the best article on content moderation I've read to date, in which they comprehensively identify the ways that current content moderation is broken, explain how proposals to "fix moderation" (especially AI-based content filters -- uuuugghghghg) will make it worse, and, finally, offer a set of proposals for genuinely improving moderation -- without sacrificing the speech and organizing capacity of marginalized and threatened people.

Advocates, companies, policymakers, and users have a choice: try to prop up and reinforce a broken system-or remake it. If we choose the latter, which we should, here are some preliminary recommendations:

  • Censorship must be rare and well-justified, particularly by tech giants. At a minimum, that means (1) Before banning a category of speech, policymakers and companies must explain what makes that category so exceptional, and the rules to define its boundaries must be clear and predictable. Any restrictions on speech should be both necessary and proportionate. Emergency takedowns, such as those that followed the recent attack in New Zealand, must be well-defined and reserved for true emergencies. And (2) when content is flagged as violating community standards, absent exigent circumstances companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. But (3) smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That's fine, as long as Internet users have a range of meaningful options with which to engage.
  • Consistency. Companies should align their policies with human rights norms. In a paper published last year, David Kaye-the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression-recommends that companies adopt policies that allow users to "develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law." We agree, and we're joined in that opinion by a growing coalition of civil liberties and human rights organizations.
  • Tools. Not everyone will be happy with every type of content, so users should be provided with more individualized tools to have control over what they see. For example, rather than banning consensual adult nudity outright, a platform could allow users to turn on or off the option to see it in their settings. Users could also have the option to share their settings with their community to apply to their own feeds.
  • Evidence-based policymaking. Policymakers should tread carefully when operating without facts, and not fall victim to political pressure. For example, while we know that disinformation spreads rapidly on social media, many of the policies created by companies in the wake of pressure appear to have had little effect. Companies should work with researchers and experts to respond more appropriately to issues.
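
The "Tools" recommendation above lends itself to a concrete illustration. Here's a minimal, hypothetical sketch (in Python) of per-user visibility settings that filter a feed and can be exported for another user to adopt; every name in it (Post, UserSettings, filter_feed, the label strings) is invented for illustration and isn't any platform's actual API.

```python
# Hypothetical sketch of user-controlled content filtering with shareable
# settings. None of these names correspond to a real platform's API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    # Labels a platform might attach to a post at ingest time,
    # e.g. "adult_nudity", "graphic_violence", "spoilers".
    labels: set = field(default_factory=set)

@dataclass
class UserSettings:
    # Labels this user has chosen to hide; empty set means "show everything".
    hidden_labels: set = field(default_factory=set)

    def export(self) -> dict:
        """Serialize settings so they can be shared with other users."""
        return {"hidden_labels": sorted(self.hidden_labels)}

    @classmethod
    def adopt(cls, shared: dict) -> "UserSettings":
        """Create settings from another user's exported preferences."""
        return cls(hidden_labels=set(shared.get("hidden_labels", [])))

def filter_feed(posts, settings: UserSettings):
    """Return only the posts whose labels the user has not hidden."""
    return [p for p in posts if not (p.labels & settings.hidden_labels)]

# Usage: one user hides adult nudity, then shares those settings.
feed = [
    Post("a", "landscape photo"),
    Post("b", "art nude", {"adult_nudity"}),
]
mine = UserSettings(hidden_labels={"adult_nudity"})
print([p.text for p in filter_feed(feed, mine)])    # ['landscape photo']
theirs = UserSettings.adopt(mine.export())          # another user adopts them
print([p.text for p in filter_feed(feed, theirs)])  # ['landscape photo']
```

The point of the sketch is that the filtering decision lives with the reader, not the platform: nothing is removed from the service, only hidden per each user's own (or adopted) preferences.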

Recognizing that something needs to be done is easy. Looking to AI to help do that thing is also easy. Actually doing content moderation well is very, very difficult, and you should be suspicious of any claim to the contrary.

Content Moderation is Broken. Let Us Count the Ways. [Jillian C York and Corynne McSherry/EFF Deeplinks]
