Appeals Court Doubles Down On Dangerous Ruling: Says Website Can Be Blamed For Failing To Warn Of Rapists

by Mike Masnick, from Techdirt
Back in late 2014, we wrote about a case where the somewhat horrifying details were likely leading to a bad result that would undermine Section 230 of the CDA (the most important law on the internet). Again, the details here are appalling. The case involves two guys who would use other people's accounts on a website called "Model Mayhem" to reach out to aspiring models, lure them to their location in South Florida, drug them, and then film themselves having sex with the drugged women to offer as online porn. Yes, absolutely everything about this is horrifying and disgusting. But here's where the case went weird. A victim of this awful crime decided to sue Internet Brands, the large company that had purchased Model Mayhem, arguing that it knew about these creeps and had failed to warn users of the service. Internet Brands argued that under Section 230 it was not liable, but the appeals court said no. The case was then reheard en banc (before a larger slate of 9th Circuit judges), and the court has now, once again, said that Section 230 does not apply.

This case has been a favorite of those looking to undermine Section 230, so those folks will be thrilled by the result, but everyone who supports an open internet should be worried. The rule here is basically that sites are protected from being held liable for the actions of their users... unless those users do something really horrible. Then things change. It's further important to note that the two sick creeps who pulled off this scam, Lavont Flanders and Emerson Callum, weren't actually members of the Model Mayhem site. They would just use the accounts of others to reach out to people, so the site had even less control.

To get around the plain language and caselaw history of Section 230, the court has to parse its words quite carefully. It starts out by noting that Internet Brands clearly qualifies for the safe harbors as an internet platform. However, it bends over backwards to reinterpret a key part of CDA 230, which says you cannot treat such a platform "as a publisher or speaker" of information posted by users. Here, the court decides that a law requiring services to warn of potential danger does no such thing:
Jane Doe's claim is different, however. She does not seek to hold Internet Brands liable as a "publisher or speaker" of content someone posted on the Model Mayhem website, or for Internet Brands' failure to remove content posted on the website. Jane Doe herself posted her profile, but she does not seek to hold Internet Brands liable for its content. Nor does she allege that Flanders and Callum posted anything to the website. The Complaint alleges only that "JANE DOE was contacted by Lavont Flanders through MODELMAYHEM.COM using a fake identity." Jane Doe does not claim to have been lured by any posting that Internet Brands failed to remove. Internet Brands is also not alleged to have learned of the predators' activity from any monitoring of postings on the website, nor is its failure to monitor postings at issue.

Instead, Jane Doe attempts to hold Internet Brands liable for failing to warn her about information it obtained from an outside source about how third parties targeted and lured victims through Model Mayhem. The duty to warn allegedly imposed by California law would not require Internet Brands to remove any user content or otherwise affect how it publishes or monitors such content.
In other words, because the law only compels a form of speech -- i.e., a duty to warn people about creeps on your service -- rather than compelling the suppression of speech, Section 230 doesn't apply here. Bizarrely, the court points to the so-called "Good Samaritan" clause in CDA 230 (CDA 230(c)(2)), which says that any action a site takes to moderate content cannot be used to create liability around other content on the site, as further proof for its position:
Jane Doe's failure to warn claim has nothing to do with Internet Brands' efforts, or lack thereof, to edit, monitor, or remove user generated content. Plaintiff's theory is that Internet Brands should be held liable, based on its knowledge of the rape scheme and its "special relationship" with users like Jane Doe, for failing to generate its own warning. Thus, liability would not discourage the core policy of section 230(c), "Good Samaritan" filtering of third party content.
The court also rejects the idea that this ruling might chill free speech by leading to greater monitoring and censorship, basically just tossing it off to the side as unlikely to be a big deal:
It may be true that imposing any tort liability on Internet Brands for its role as an interactive computer service could be said to have a "chilling effect" on the internet, if only because such liability would make operating an internet business marginally more expensive. But such a broad policy argument does not persuade us that the CDA should bar the failure to warn claim. We have already held that the CDA does not declare "a general immunity from liability deriving from third-party content." Barnes, 570 F.3d at 1100. "[T]he Communications Decency Act was not meant to create a lawless no-man's-land on the Internet." Roommates.Com, 521 F.3d at 1164. Congress has not provided an all purpose get-out-of-jail-free card for businesses that publish user content on the internet, though any claims might have a marginal chilling effect on internet publishing businesses. Moreover, the argument that our holding will have a chilling effect presupposes that Jane Doe has alleged a viable failure to warn claim under California law. That question is not before us and remains to be answered.
Some will, undoubtedly, argue that this limiting of Section 230 is a good thing, either because they already dislike 230 or because they believe the behavior described above was so beyond the pale that it's fine to punish the platform for it. That's problematic. No one denies that the two individuals who committed these acts deserve to be in jail (for a long time). But blaming the platform they used for not posting a warning is extreme and confuses how Section 230 is supposed to work. The key point of the law is to place liability accurately on the parties who caused the harm. That wasn't the website, and it shouldn't be blamed.

You can now expect plenty of litigants to cite this ruling as they look for any way to get past Section 230's protections.
