Incredibly, Facebook Is Still Figuring Out That Content Moderation At Scale Is Impossible To Do Well

For years now, I've talked about the impossibility of doing content moderation well at scale. I know that execs at various tech companies often point to my article on this, and that includes top executives at Meta, who have cited my work on this issue. But it still amazes me when those companies act as if it's not true, and there's some simple solution to all of the moderation troubles they face. The WSJ recently had a fascinating article about how Facebook thought that by simply silencing political opinions, they'd solve a bunch of their moderation problems. Turns out: it didn't work. At all.
Meta's leaders decided, however, that wasn't enough. In late 2021, tired of endless claims about political bias and censorship, Chief Executive Mark Zuckerberg and Meta's board pushed for the company to go beyond incremental adjustments, according to people familiar with the discussions. Presented with a range of options, Mr. Zuckerberg and the board chose the most drastic, instructing the company to demote posts on "sensitive" topics as much as possible in the newsfeed that greets users when they open the app - an initiative that hasn't previously been reported.
The plan was in line with calls from some of the company's harshest critics, who have alleged that Facebook is either politically biased or commercially motivated to amplify hate and controversy. For years, advertisers and investors have pressed the company to clean up its messy role in politics, according to people familiar with those discussions.
It became apparent, though, that the plan to mute politics would have unintended consequences, according to internal research and people familiar with the project.
This plan to "demote" posts on sensitive topics apparently... actually resulted in an increase in the flow of less trustworthy content, because oftentimes it's the more established media outlets that are covering those sensitive topics.
The result was that views of content from what Facebook deems "high quality news publishers" such as Fox News and CNN fell more than material from outlets users considered less trustworthy. User complaints about misinformation climbed, and charitable donations via the company's fundraiser product through Facebook fell in the first half of 2022. And perhaps most important, users didn't like it.
I am guessing that some in the comments here will quibble with the idea that Fox News is a "high quality news publisher" but I assure you that compared to some of the other options, it's much more along the spectrum towards quality.
The details of what Facebook did to deal with this kind of content are interesting... as were the "unintended" consequences:
The announcement played down the magnitude of the change. Facebook wasn't just de-emphasizing reshares and comments on civic topics. It was stripping those signals from its recommendation system entirely.
"We take away all weight by which we uprank a post based on our prediction that someone will comment on it or share it," a later internal document said.
That was a more aggressive version of what some Facebook researchers had been pushing for years: addressing inaccurate information and other integrity issues by making the platform less viral. Because the approach didn't involve censoring viewpoints or demoting content via imprecise artificial-intelligence systems, it was deemed "defensible," in company jargon.
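To make the mechanics a bit more concrete: what the internal document describes is not deleting or even directly down-ranking civic posts, but zeroing out the weight that certain engagement predictions get in the ranking formula. Below is a minimal, purely hypothetical sketch of that idea - every signal name, weight, and number is invented for illustration and is not Meta's actual system.

    # Purely illustrative: all signal names and weights here are invented,
    # not Meta's actual ranking features or values.
    from dataclasses import dataclass

    @dataclass
    class Post:
        is_civic: bool    # flagged as civic/"sensitive" content
        p_comment: float  # predicted probability the viewer comments
        p_share: float    # predicted probability the viewer reshares
        p_click: float    # predicted probability the viewer clicks through
        p_like: float     # predicted probability the viewer likes it

    # Hypothetical weights for a simple linear ranking score.
    WEIGHTS = {"comment": 15.0, "share": 10.0, "click": 1.0, "like": 1.0}

    def rank_score(post: Post) -> float:
        """Combine predicted-engagement signals into one ranking score.

        For civic posts the comment and share predictions contribute
        nothing, so ordering falls back on the remaining signals.
        """
        w_comment = 0.0 if post.is_civic else WEIGHTS["comment"]
        w_share = 0.0 if post.is_civic else WEIGHTS["share"]
        return (w_comment * post.p_comment
                + w_share * post.p_share
                + WEIGHTS["click"] * post.p_click
                + WEIGHTS["like"] * post.p_like)

    # The same predictions, scored with and without the civic flag:
    hot_take = Post(is_civic=True, p_comment=0.4, p_share=0.3, p_click=0.2, p_like=0.1)
    cat_pic = Post(is_civic=False, p_comment=0.4, p_share=0.3, p_click=0.2, p_like=0.1)
    print(rank_score(hot_take))  # 0.3 - only the click/like terms remain
    print(rank_score(cat_pic))   # 9.3 - engagement predictions dominate

The point of the sketch is just that the engagement predictions are still being made; they simply stop influencing how civic posts are ordered. That is why, per the WSJ's description, the approach could be framed internally as "defensible" rather than as demoting content via imprecise AI systems.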
The report notes that this did drive down usage of Facebook, but (contrary to the public narrative that Facebook only cares about greater engagement), the company apparently felt that this cost was worth it if it resulted in less anger and disinformation flowing across its platform. But a big part of the issue seems to be that the narrative that Facebook is focused on unhealthy engagement is so prevalent that nothing will get rid of it:
An internal presentation warned that while broadly suppressing civic content would reduce bad user experiences, "we're likely also targeting content users do want to see.... The majority of users want the same amount or more civic content than they see in their feeds today. The primary bad experience was corrosiveness/divisiveness and misinformation in civic content."
Even more troubling was that suppressing civic content didn't appear likely to convince users that Facebook wasn't politically toxic. According to internal research, the percentage of users who said they thought Facebook had a negative effect on politics didn't budge with the changes, staying consistently around 60% in the U.S.
To some extent, this is not surprising. The narrative always outlasts reality. I still, frequently, have people making claims to me about how social media works that haven't been true in a decade.
Indeed, it seems like perhaps this plan would have had more of an impact if Facebook had been a lot more transparent about it - which often seems to be the problem in nearly everything that the company does on these subjects. It makes vague public statements and claims without the important details, and no one can really tell what's going on.
But, the simple fact is that, again, doing content moderation at scale well is impossible. Making changes in one area will have far-reaching impacts that are equally impossible to anticipate.
There is no perfect answer here, but much more transparency from the company - combined with maybe giving end users a lot more control and say in how the platform works for them - seems like it would be more effective than this kind of thing where a bunch of decisions are made behind closed doors and then only leak out to the press way later.