Wired's Big Cover Story On Facebook Gets Key Legal Point Totally Backwards, Demonstrating Why CDA 230 Is Actually Important

by Mike Masnick

If you haven't read it yet, I highly recommend reading the latest Wired cover story by Nicholas Thompson and Fred Vogelstein, detailing the past two years at Facebook and how the company has struggled to come to grips with the fact that its platform can be used by people to do great harm (such as sowing discord and influencing elections). It's a good read that is deeply reported (by two excellent reporters), and it has some great anecdotes, including the belief that an investigation into Facebook by then-Connecticut Attorney General Richard Blumenthal a decade ago was really an astroturfing campaign by MySpace:

Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook's biggest competitor, MySpace. "We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica," the executive says. "Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time."

That's a pretty amazing story, which certainly could be true. After all, just a few years later there was the famous NY Times article about how companies were courting state Attorneys General to attack their competitors (which later came up again, when the MPAA -- after reading that NY Times article -- decided to use that strategy to go after Google). And Blumenthal, as Attorney General, had a long history of grandstanding about tech companies.

But, for all the fascinating reporting in the piece, what's troubling is that Thompson and Vogelstein get some very basic facts wrong -- and, unfortunately, one of those basic facts is a core peg used to hold up the story. Specifically, the article incorrectly points to Section 230 of the Communications Decency Act as a major hindrance to Facebook improving its platform. Here's how the law is incorrectly described, in a longer paragraph explaining why Facebook "ignored" the "problem" of "fake news" (scare quotes on purpose):

And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

That's... wrong. I mean, it's not just wrong by degree, it's flat out, totally and completely wrong. It's wrong to the point that you have to wonder if Wired's fact checkers decided to just skip it, even though it's a fundamental claim in the story.

Indeed, the whole point of CDA 230 is exactly the opposite of what the article claims. As you can read yourself, if you look at the law, it specifically encourages platforms to moderate the content they host by saying that the moderation choices they make do not impact their liability. This is the very core point of CDA 230:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This is the "good samaritan clause" of the CDA 230 and it's encouraging platforms like Facebook to "take responsibility for fake news" by saying that no matter what choices it makes, it won't make Facebook liable for looking at the content. Changing CDA 230 as many people are trying to do right now is what would create incentives for Facebook to put its head in the sand.

And yet, Thompson and Vogelstein repeat this false claim:

But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity -- and it's hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

This one is half right, but half misleading. It's true -- under the Roommates case -- that if Facebook itself creates content that breaks the law, it is liable for that content. But it does not lose its immunity for editing or moderating content that users post on its platform, as that sentence implies.

Indeed, this is a big part of the problem we have with the ongoing debates around CDA 230. So many people insist that CDA 230 incentivizes platforms to "do nothing" or "look the other way" or, as Wired erroneously reports, to "put their head in the sand." But that's not true at all. CDA 230 not only enables, but encourages, platforms to be more active moderators by making it clear that the choices they make about moderating content (outside the context of copyright, which uses a whole different set of rules) don't create new liability for them. That's why so many platforms are trying so many different things (as we recently explored in our series of stories on content moderation by internet platforms).

What's really troubling about this is that people are going to use the Wired cover story as yet another argument for doing away with (or at least punching giant holes in) CDA 230. They'll argue that we need to make changes to encourage companies like Facebook not to ignore the bad behavior on their platform. But the real lesson of the story -- which should have come out if the reporting were more carefully done -- is that CDA 230 is exactly what encourages that behavior. Facebook is able, and willing, to change and experiment in response to increasing public pressure only because CDA 230 gives the company the freedom to do so. Adding liability for wrong decisions is what would actually make the problem worse, and it would encourage platforms like Facebook to do less.

It's tragic that in such a high-profile, carefully reported story, a key part -- indeed, a part on which much of the story itself hinges -- is simply, factually, wrong.


