
Well, Duh: Facebook's System To Stop 'Fake News' Isn't Working -- Because Facebook Isn't The Problem

by Mike Masnick, from Techdirt

It's not like we didn't say right away that those rushing to blame Facebook for "fake news" were missing the point, and that the problem was always the nature of confirmation bias rather than the systems people use to support their own views. But, alas, the roar of "but Facebook must be the problem, because we saw 'fake news' on Facebook," along with the related "but, come on, it must 'take responsibility'" arguments, kept getting louder and louder, to the point that Facebook agreed to start trying to warn people about fake news.

And, guess what? Just like basically every attempt to stifle speech without looking at the underlying causes of that speech... it's backfiring. The new warning labels are not stopping the spread of "fake news" and may, in fact, be helping it.

When Facebook's new fact-checking system labeled a Newport Buzz article as possible "fake news", warning users against sharing it, something unexpected happened. Traffic to the story skyrocketed, according to Christian Winthrop, editor of the local Rhode Island website.

"A bunch of conservative groups grabbed this and said, 'Hey, they are trying to silence this blog - share, share share,'" said Winthrop, who published the story that falsely claimed hundreds of thousands of Irish people were brought to the US as slaves. "With Facebook trying to throttle it and say, 'Don't share it,' it actually had the opposite effect."

Again, this isn't a surprise. Fake news was never the issue. People weren't changing their minds based on fake news. They were using it to confirm their existing views. And when you get contradictory information, cognitive dissonance kicks in, and you rationalize why your beliefs were right all along. In fact, studies have shown that when questionable beliefs are attacked with facts, it often makes the believers dig in even harder. And that seems to be what's happening here. With efforts made to call out "fake news," the people who believe it just see the label as "fake news" itself -- and an attack on what they believe is true. It's easy to write off any fake news label as just part of the grand conspiracy to suppress info "they" don't want you to see.

The article goes on to talk to a bunch of different people who operate sites that had articles dinged with the "fake news" scarlet letter from Facebook, and most of them (though not all) say they saw no real impact on traffic.

Of course, because we've seen this kind of thing play out before, it's likely that rather than recognizing Facebook isn't the issue, people who are angry about what they believe to be the scourge of "fake news" will also double down -- just like those who fall for "fake news." They'll insist that it's Facebook's fault that the fake news issue didn't just go away when Facebook put warning labels on stories. They'll ignore the fact that they were the ones demanding such things in the first place, and that they insisted such labels would work. Instead, they'll argue that Facebook should be doing even more to suppress "fake news" and never consider that maybe they're targeting the symptoms and not the actual disease.

Facebook has always been an easy target, but Facebook isn't the problem. People want to share bogus, fake, or misleading news, because it confirms their biases and beliefs and makes them feel good. That's not Facebook's fault. It's a problem in how we educate people and how we teach basic media literacy. That's not going to be fixed with warning labels.


