Google Says Our Article On The Difficulty Of Good Content Moderation Is... Dangerous
Back in August, I wrote a big post about the impossible choices that large internet platforms have to make concerning content moderation. A large part of the point of that post was that there is no perfect content moderation, and that, especially at scale, there will be large swaths of people who disagree with any choice (leaving content up, taking it down, demonetizing it, putting a flag on it, whatever). And expecting these platforms to magically get things right is going to end in serious disappointment for everyone.
In its own ham-fisted way, Google has now proven that point (and, no, it's not doing this on purpose). About a month after that post went up, we got a notification from Google telling us that the article violated Google's AdSense policies (we use AdSense to backfill ads when we don't have a better solution -- it pays us close to nothing) and that it was therefore restricting AdSense from appearing on that page. The only detail we received was that the page was "dangerous or derogatory."
[Screenshot: Google's AdSense notification flagging the page as "dangerous or derogatory"]
If you can't see that, it says that our link is "dangerous or derogatory" in that it:
- Threatens or advocates for harm of oneself or others;
- Harasses, intimidates or bullies an individual or group of individuals;
- Incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization.
As you can see, at the bottom there's a button to "request a review." We did that, and the next day we were told that the restriction had been lifted. This surprised us, as the "review" button usually does nothing. Fast forward to this past weekend, and we get another notice... on the same article, again saying that it is "dangerous or derogatory." No further explanation. No recognition that we had already been through this.
We asked for a review again... and got the following:
[Screenshot: Google's response to our review request]
If you can't read that, it says:
1 page was reviewed at your request and found to be non-compliant with our policies at the time of the review. Ad serving continues to be restricted or disabled on this page.
There appears to be no further information. We are told that the only thing we can do is "fix any violations." But they won't tell us what the "violations" are (because there aren't any) or how to "fix" them. And, seriously, fuck that. There's nothing to "fix," and it's our general policy -- and the policy of any good journalism site -- not to allow advertisers to dictate anything having to do with content. And we're sticking by that.
Now, to be clear: Google has every right to make whatever awful decisions it wants to make about where its ads appear. If it really wants to demonetize our article highlighting the impossible choices that Google and others have to make, well, that's quite ironic, but that's on them. I certainly don't think this was a "choice" that Google made. More likely, Google is constantly scanning all pages that use AdSense for "flag" words, and maybe something like the curses (or the mention of Alex Jones or the Holocaust?) flagged the article for review. At that point, I'm guessing it was handed off to some low-paid individual tasked with "reviewing" content policy violations, who has somewhere between 5 and 30 seconds to make a judgment call. That person maybe sees the curses and says "BAD!" When we click "review," it probably goes to another person just like that. The first time we asked for a review, we got someone who voted one way; the second time, someone who voted the other.
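To make that concrete, here's a minimal, purely hypothetical sketch of the kind of context-blind keyword scan I'm imagining. To be clear: the term list, threshold, and `naive_flag` function are all invented for illustration -- nothing here is Google's confirmed system -- but it shows why an article that merely *discusses* controversial topics trips the same wire as one advocating them:

```python
# Hypothetical sketch of a context-blind keyword flagger -- NOT Google's
# actual system. The term list and threshold are invented to illustrate
# why naive scanning misfires on reporting ABOUT controversial topics.

FLAG_TERMS = {"alex jones", "holocaust", "hate speech"}  # invented list

def naive_flag(text: str, threshold: int = 2) -> bool:
    """Flag a page if it mentions 'enough' scary terms, ignoring context."""
    lowered = text.lower()
    hits = sum(lowered.count(term) for term in FLAG_TERMS)
    return hits >= threshold

# An article reporting on moderation gets flagged just like an article
# promoting the things it discusses:
article = ("Our post examined how platforms handled Alex Jones and "
           "Holocaust denial, and why hate speech rules are hard to apply.")
print(naive_flag(article))  # True -- flagged, despite being journalism
```

A scanner like this has no notion of intent, only of matched strings, which is exactly why any page seriously covering content moderation is at risk of being "moderated" itself.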
It's a minor pain for us, though more amusing than anything else (and, hey, if you'd like to help cover our losses from having that page demonetized, go for it!). But, again, all it really serves to do is highlight the point that these decisions all come down to judgment calls, and you can't scale judgment calls to the level these platforms operate at without making a whole bunch of highly questionable ones.
We shouldn't want giant platforms with a bunch of "content moderators" determining what is acceptable content and what is "dangerous or derogatory," because they're going to do a shitty job of it (now watch this post get demonetized too...). It really is time to search for better solutions.