
Content Moderation At Scale Is Impossible To Do Well, Child Safety Edition

by
Mike Masnick
from Techdirt on (#6JDXS)

Last week, as you likely heard, the Senate held a big hearing on "child safety," where Senators grandstanded in front of a semi-random collection of tech CEOs with zero interest in actually learning about the real challenges of child safety online, what the companies had done that worked, or where they might need help. The companies, of course, insisted they were working hard on the problem, and the Senators just kept shouting "not enough," without getting into any of the details.

But, of course, the reality is that this isn't an easy problem to solve. At all. I've talked about Masnick's Impossibility Theorem over the years, the idea that content moderation is impossible to do well at scale, and that applies to child safety material as well.

Part of the problem is that much of it is a demand-side problem, not a supply-side problem. If people are demanding certain types of content, they will go to great lengths to get it, which means doing what they can to hide from the platforms trying to stop them. We've talked about this in the context of eating disorder content: multiple studies found that as sites tried to crack down on that content, it didn't work, because users demanded it and kept coming up with new ways to talk about it that the sites weren't yet blocking. So there's always the demand side of the equation to keep in mind.
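To see why that cat-and-mouse dynamic is so hard to win, here is a minimal, purely hypothetical sketch (the terms and the logic are made up, not any platform's actual filter) of an exact-match blocklist losing to users who simply rename what they're looking for:

# Hypothetical exact-match blocklist versus users who rename the content they want.
blocklist = {"badterm"}  # made-up placeholder term

def is_blocked(post: str) -> bool:
    # Flags a post only if it contains a blocklisted term verbatim.
    return any(word in blocklist for word in post.lower().split())

print(is_blocked("looking for badterm content"))      # True: caught
print(is_blocked("looking for b4dterm content"))      # False: trivial respelling slips through
print(is_blocked("looking for newcodeword content"))  # False: the community coins a new label

# The platform adds each new variant to the blocklist; demand-driven users
# invent another. The filter targets words, while the problem is demand.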

But there are also all sorts of false positives, where content is declared to violate child safety policies when it clearly doesn't. Indeed, the day after the hearing I saw two examples of social media sites blocking content they claimed was child sexual abuse material, when it was clear that neither one actually was.

The first came from Alex Macgillivray, former General Counsel at Twitter and former deputy CTO for the US government. He was using Meta's Threads app and wanted to see what people thought of a recent article in the NY Times raising concerns about AI-generated CSAM. But when he searched for the URL of the article, which contains the string "ai-child-sex-abuse," Meta warned him that he was violating its policies:

[Screenshot of the warning Threads displayed in response to the search]

In response to his search on the NY Times URL, Threads popped up a message saying:

Child sexual abuse is illegal

We think that your search might be associated with child sexual abuse. Child sexual abuse or viewing sexual imagery of children can lead to imprisonment and other severe personal consequences. This abuse causes extreme harm to children and searching and viewing such material adds to that harm. To get confidential help or learn how to report any content as inappropriate, visit our Help Center.

So, first off, this does show that Meta, obviously, is trying to prevent people from finding such material (contrary to what various Senators have claimed), but it also shows that false positives are a very real issue.
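To make that failure mode concrete, here is a minimal sketch of how a naive substring blocklist applied to search queries can flag the URL of a news article about the problem. This is purely illustrative; it is not Meta's actual system, and the URL and blocked term are invented:

# Purely illustrative: a naive substring filter over search queries.
BLOCKED_SUBSTRINGS = ["child-sex-abuse"]  # assumed example term

def query_triggers_warning(query: str) -> bool:
    # Returns True if any blocked substring appears anywhere in the query.
    q = query.lower()
    return any(term in q for term in BLOCKED_SUBSTRINGS)

# A journalist pasting a news URL about AI-generated CSAM trips the filter,
# because the filter only sees strings, not intent.
article_url = "https://news.example.com/2024/reporting-on-ai-child-sex-abuse.html"
print(query_triggers_warning(article_url))       # True: journalism flagged
print(query_triggers_warning("gardening tips"))  # False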

The second example comes from Bluesky, which is a much smaller platform and has been (misleadingly...) accused of not caring about trust and safety issues over the roughly one year since it opened up as a private beta. There, journalist Helen Kennedy said she tried to post about the ridiculous situation in which the group Moms For Liberty was apparently scandalized by Maurice Sendak's classic children's book "In the Night Kitchen," which includes some drawings of a naked child in a very non-sexual manner.

Apparently, Moms For Liberty has been drawing underpants on the protagonist of that book. Kennedy tried to post side by side images of the kid with underpants and the original drawing... and got dinged by Bluesky's content moderators.

[Screenshot of Bluesky's moderation notice on Kennedy's post]

Again, there, the moderation effort falsely claims that Kennedy was trying to post "underage nudity or sexual content, which is in violation of our Community Guidelines."

And, immediately, you might spot the issue. This is posting "underage nudity," but it is clearly not sexual in nature, nor is it sexual abuse material. This is one of those "speed run" lessons that all trust and safety teams learn eventually. Facebook dealt with the same issue when it banned the famous "Terror of War" photo, sometimes known as the "Napalm Girl" photo, taken during the Vietnam War.

Obviously, it's good that companies are taking this issue seriously and trying to stop the distribution of CSAM. But one of the reasons this is so difficult is that false positives like the two above happen all the time. And one of the problems with getting "stricter" about blocking content that your systems flag as CSAM is that you get more such false positives, which doesn't help anyone.
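As a rough illustration of that tradeoff, here is a minimal sketch with made-up classifier scores (not any real platform's data): lowering the threshold at which content gets flagged catches more real violations, but also sweeps in more benign posts, like a news URL or a children's book illustration:

# Made-up scores from a hypothetical classifier (0 = clearly fine, 1 = clearly violating).
benign_scores = [0.05, 0.10, 0.20, 0.35, 0.55]   # scores given to benign posts
violating_scores = [0.40, 0.70, 0.85, 0.95]      # scores given to actual violations

def flags_at(threshold: float) -> tuple[int, int]:
    # Count violations caught and benign posts wrongly flagged at this threshold.
    caught = sum(score >= threshold for score in violating_scores)
    false_positives = sum(score >= threshold for score in benign_scores)
    return caught, false_positives

for threshold in (0.9, 0.5, 0.3):
    caught, fp = flags_at(threshold)
    print(f"threshold {threshold}: catches {caught}/4 violations, wrongly flags {fp}/5 benign posts")

# Getting "stricter" means lowering the threshold: more real violations caught,
# but more false positives too.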

A useful and productive Senate hearing might have explored the actual challenges that the companies face in trying to stop CSAM. But we don't have a Congress that is even remotely interested in useful and productive.
