Facebook Announces Its Pilot Plans To 'Deal' With Fake News -- Not With Censorship, But With More Info
We've made it clear that we think people freaking out about fake news on Facebook are overreacting when they try to "blame" Facebook for the results of the election, or even for how people feel about certain candidates. And we've also warned that the end result of much of this complaining is inevitable calls for censorship, which is dangerous. In fact, we've already seen that both China and Iran are using the hubbub over "fake news" to justify their own draconian censorship and surveillance efforts.
But that doesn't mean Facebook should do nothing about it. "Fake news" is unlikely to have influenced the election, but that doesn't mean it's not a nuisance -- though, if we're going to talk about "fake news," at the very least we should divide it into more accurate subgroupings: there's outright made-up false news, there's propaganda (those first two can overlap, admittedly), there's erroneous (but well-intentioned) reporting, and then there's actually good reporting that people dislike. Some may argue that last one isn't fake news at all, but trust me when I say that some people are using this "fake news" storm to lump in those kinds of articles too -- which does a good job of showing why the "fake news" label is at real risk of being abused for censorship.
As for Facebook's actual approach, it appears to take some of the suggestions many have made over the last month or so and to start implementing tests to see how well they work in real life. The plan is based not on censorship, but on providing individuals with more information. There's certainly still the possibility of abuse here, but the initial implementation does seem reasonable. Here are the key changes:
- Making it easier for users to "report" hoax stories. This option has existed within Facebook's system already, but it's pretty buried and apparently not widely used. It's very much targeted at purely 100% fake stories, things that were made up solely for clicks, but the real challenge will be seeing how it handles vote brigading by people who, say, want to attack a particular article from another bucket described above. In the end, it's likely that some users will be unhappy, especially if they flag a story they disagree with, but which is factually accurate, and Facebook doesn't act on their reports.
- Partnering with fact-checking organizations to help flag "disputed" stories. Facebook says it will use those user flags (along with the black box known as "other signals") to alert a number of third-party fact-checking organizations to look into stories that reach a certain threshold of flagging:
We'll use the reports from our community, along with other signals, to send stories to these organizations. If the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why. Stories that have been disputed will also appear lower in News Feed.
Again, this seems reasonable, and is similar (in a slightly different format...) to the way some subreddits use a "disputed" flag. Instead of blocking the content, more information is provided, including links to third-party sources that people can check out, and such stories are less likely to appear at the top of the algorithmic feed. This, of course, will also make some people nervous, and you can already predict the viral reaction that's going to shoot around the internet the second some stories get hit with the "disputed" flag. We've already seen lots of claims in our comments from people who don't trust fact checking organizations, and so it's likely that there will be some anger over some of these decisions. Overall, though, this seems like a good solution.
It will still be possible to share these stories, but you will see a warning that the story has been disputed as you share. Once a story is flagged, it can't be made into an ad and promoted, either.
I've also been told that eventually (perhaps not immediately) if someone goes to share a link that's been hit with the "disputed" flag, the user will get a prompt, just making sure they recognize it's disputed and making sure they really still want to share it. This seems like a reasonable approach -- one that involves providing more information, rather than jumping straight to "blocking" information.
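To make the mechanics concrete, here's a rough sketch in Python of how a "disputed" pipeline like this might fit together. To be clear, this is my own illustration, not Facebook's actual code: the threshold, the per-user report cap (one simple way to blunt vote brigading), and all of the names here are hypothetical.

```python
# Hypothetical sketch of a "disputed" flag pipeline -- not Facebook's actual
# system. Thresholds, weights, and names are invented for illustration.

from collections import defaultdict

REVIEW_THRESHOLD = 50.0      # report weight needed before fact checkers see it
DISPUTED_RANK_PENALTY = 0.5  # disputed stories appear lower in the News Feed

class Story:
    def __init__(self, url):
        self.url = url
        self.reports = defaultdict(float)  # user_id -> report weight
        self.disputed = False
        self.dispute_link = None  # fact checker's explanation, once flagged

    def report_hoax(self, user_id, weight=1.0):
        # Cap each user at one unit of weight, and overwrite rather than
        # accumulate, so a brigade of repeat reporters can't push a story
        # over the threshold on their own.
        self.reports[user_id] = min(weight, 1.0)

    def total_report_weight(self):
        return sum(self.reports.values())

def needs_fact_check(story, other_signals=0.0):
    # "Other signals" is the black box in Facebook's description; here it's
    # just an extra score added to the user reports.
    return story.total_report_weight() + other_signals >= REVIEW_THRESHOLD

def apply_fact_check(story, is_fake, explanation_url):
    # If fact checkers identify the story as fake, mark it disputed and link
    # to their explanation -- the story is labeled, not removed.
    if is_fake:
        story.disputed = True
        story.dispute_link = explanation_url

def feed_rank(story, base_score):
    # Disputed stories rank lower, but still rank.
    return base_score * DISPUTED_RANK_PENALTY if story.disputed else base_score

def try_share(story, user_confirmed):
    # Sharing stays possible; disputed stories just require confirmation
    # after the warning prompt.
    return user_confirmed if story.disputed else True
```

Note what's absent from the sketch: there's no delete step anywhere. Every stage either adds information (the dispute link, the warning prompt) or adjusts ranking, which is exactly the distinction between this approach and censorship.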
- New signals of what's hoax content: Above I mentioned that some of this is a black box, but it looks like one signal that Facebook will now be using is seeing if people actually go and read an article, and don't share it, compared to those who just share it without reading it first:

We've found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We're going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it.
It would be fascinating to see the data on this. It makes sense, but I do wonder how clear and obvious the differences in sharing rates really are for that particular signal.
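For the curious, here's one way a signal like that might be computed -- a hypothetical sketch, not Facebook's actual method. The idea is to compare the share rate among people who opened an article against the share rate among people who shared straight from the headline, then flag articles where reading sharply depresses sharing. The z-score outlier test and every name below are my own assumptions.

```python
# Hypothetical sketch of the "read it, then didn't share it" signal. The
# statistics (a z-score over rate gaps) are a guess at how an outlier test
# might look, not Facebook's method.

from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class ArticleStats:
    url: str
    reads: int                   # users who opened the article
    shares_after_reading: int    # of those, how many then shared it
    headline_views: int          # users who saw it but never opened it
    shares_without_reading: int  # of those, how many shared anyway

    def read_share_gap(self):
        # Positive gap: people who actually read it were less likely to share.
        read_rate = self.shares_after_reading / max(self.reads, 1)
        blind_rate = self.shares_without_reading / max(self.headline_views, 1)
        return blind_rate - read_rate

def misleading_outliers(articles, z_cutoff=3.0):
    # Some gap is normal for any article, so only flag the extreme outliers
    # relative to the whole corpus.
    gaps = [a.read_share_gap() for a in articles]
    mu, sigma = mean(gaps), stdev(gaps)
    return [a for a, gap in zip(articles, gaps)
            if sigma > 0 and (gap - mu) / sigma > z_cutoff]
```

Even in a toy version like this, the hard question is the one raised above: how big does the gap have to be to count as "significant," when baseline sharing behavior varies so much from topic to topic?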
- No boosting/promoting spam content: This one seems to just be a version of what Facebook already announced last month. But it looks like they're going a bit further in trying to just stop the pure 100% hoaxers-in-it-for-the-money sites from being able to totally overrun Facebook news feeds:

On the buying side we've eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary.
Again, this is a case where how this is actually done is going to matter a lot. However, Facebook does seem sufficiently concerned about being seen as putting its hands too heavily on the scales, and it seems likely that the company will target this kind of enforcement only at the pure spammers. Otherwise, it's likely to face a backlash.
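As a rough illustration of what "eliminating the ability to spoof domains" could involve on the detection side, here's a hypothetical lookalike-domain check. Facebook hasn't described its method; this just shows one common anti-spoofing technique (edit distance against known publisher domains, after normalizing easily confused characters). The publisher list and the character substitutions are made up for the example.

```python
# Hypothetical lookalike-domain detector -- one common anti-spoofing
# technique, not what Facebook actually does. The publisher list and
# character substitutions are illustrative only.

KNOWN_PUBLISHERS = {"nytimes.com", "washingtonpost.com", "usatoday.com"}

# Characters hoax sites commonly swap in to imitate a real domain.
CONFUSABLES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_spoofed(domain, max_distance=2):
    # Normalize confusable characters, then see if the result lands within a
    # couple of edits of a known publisher without actually being one.
    if domain.lower() in KNOWN_PUBLISHERS:
        return False
    normalized = domain.lower().translate(CONFUSABLES)
    return (normalized in KNOWN_PUBLISHERS or
            any(edit_distance(normalized, real) <= max_distance
                for real in KNOWN_PUBLISHERS))

# A character-level check like this catches imitations such as
# "nyt1mes.com", but suffix tricks like "nytimes.com.co" would need
# separate handling.
```

This also underlines the point about implementation mattering: a distance threshold set too loosely would sweep in legitimate domains that merely resemble a publisher's, which is exactly the hands-too-heavily-on-the-scales problem described above.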