A Trio Of Failed Lawsuits Trying To Sue Websites For Moderating Content

Why do people still file these lawsuits? For years now, we've seen lawsuits filed against websites over their content moderation decisions, despite Section 230 barring them (and the platforms' own 1st Amendment rights backing that up). These lawsuits always fail.
Perhaps the reason we're seeing a bunch more of these lately is that a ton of people completely misunderstood (helped along by the guy who I don't think could fairly describe anything if he really tried) what happened with Twitter and Alex Berenson. All of the 1st Amendment claims in Berenson's lawsuit were thrown out easily. The only reason the case moved forward (and then settled) was that an executive at Twitter had made statements to Berenson suggesting that he wouldn't have his account blocked, and that opened up the possibility (though it still would have been a long shot in court) that a Barnes-style "promissory estoppel" ruling would come down.
But, because of how that case has been widely misrepresented to nonsense peddlers, they seemed to think it was open season on suing platforms. Anyway, all those cases are losing. Here are three examples, all decided recently and all covered by Professor Eric Goldman. I'm playing a bit of catchup in combining all three, but honestly, none of them represents anything groundbreaking or new. They're just standard foolish lawsuits from people falsely thinking you can sue websites for moderating your content.
First up, we have well-known nonsense peddler and pretend Presidential candidate RFK Jr. He's been suing platforms for a while, and it hasn't gone well at all. In this case, RFK argued that YouTube was a "state actor" in taking down some videos, but the court isn't buying it at all, noting that the 9th Circuit has already said that such arguments are nonsense.
The Ninth Circuit held that Twitter "exercised its own independent judgment" in adopting its content moderation policies and enforcing them. Id. at 1158. Additionally, the court held that the private and state actors "were generally aligned in their missions to limit the spread of misleading election information" and that "[s]uch alignment does not transform private conduct into state action." Id. at 1156-57.
Similarly, here, under either test, Plaintiff has not shown that the government "so insinuated itself into a position of interdependence" with Google or that it "exercised coercive power or has provided such significant encouragement" to Google as to give rise to state action. Since Plaintiff's counsel, at oral argument, conceded that the evidence provided in support of his application does not show that the government coerced Google, the Court limits its inquiry to whether there is evidence suggesting that the government insinuated itself into a position of interdependence or provided significant encouragement. Regardless of which test is used, "the analysis is necessarily fact-bound ...." Lugar v. Edmondson Oil Co., 457 U.S. 922, 939 (1982).
No state actor, no 1st Amendment. This case is going nowhere.
Next up was a lawsuit against exTwitter from a pro se plaintiff, Taiming Zhang, arguing that his suspension from Twitter violated his contract with Twitter. That is... not how any of this works, as the court explained.
Zhang's case gets tossed on straightforward Section 230 grounds, as his attempt to get around 230 was to say "but the contract was breached!" and the court says... nope:
Plaintiff's argument that "CDA 230 carries no relevance" because Twitter breached their contract is unavailing. There is no exception under Section 230 for breach of contract claims. See 47 U.S.C. § 230(e). Courts routinely hold Section 230 immunizes platforms from contract claims where, as here, they seek to impose liability for protected publishing activity. See, e.g., King v. Facebook, Inc., 845 F. App'x 691, 692 (9th Cir. 2021) (affirming dismissal of pro se plaintiff's contract claim based on, among other things, Facebook's suspension of her user account, because "any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230") (quoting Roommates, 521 F.3d at 1170-71); Murphy v. Twitter, Inc., 60 Cal. App. 5th 12, 28 (2021) ("many [courts] have concluded that [contract] claims were barred [by Section 230] because the plaintiff's cause of action sought to treat the defendant as a publisher or speaker of user generated content") (collecting cases).
Finally, we have Joseph Mercola, a somewhat infamous purveyor of absolute nonsense regarding vaccines, who had his account taken down by YouTube. He sued. It didn't go well. He also argued a contract violation and, as Goldman notes, Mercola seemed to switch legal strategies midstream, going from originally suing over the content removals to arguing that he just wanted access to his content (as if he didn't already have copies?).
Either way, that's not how any of this works:
As set forth in the Statement, YouTube had no obligation to host or serve content. The main issue is that the plaintiffs want access to the content. But no provision of the Agreement provides a right to access that content under the circumstances here: termination for cause under the agreement. In a different context, there is an avenue to export content: if YouTube terminates a user's access for service changes, it gives the user sufficient time to export content, where reasonably possible. But that provision on its face does not apply here. The plaintiffs thus do not plead contract or quasi-contract claims related to denial of access to their content.
Similarly, as set forth in the Statement, YouTube had the discretion to take down content that harmed its users. The content here violated the Community Guidelines. Modifications to the Community Guidelines - such as the modification here to elaborate on YouTube's existing prohibitions on medical misinformation to add COVID-19 and vaccines - could be effective immediately, without notice. YouTube had the discretion to terminate channels without warning after a single case of severe abuse. Under the contract, this determination was discretionary: the contract said that "[i]f we reasonably believe that any Content is in breach of this agreement or may cause harm, . . . we may remove or take down that Content in our discretion."
Basically, all three of these cases boil down to the same thing: a website decided that a crackpot violated its rules and took down their content, and the crackpot feels entitled to commandeer someone else's private property to host their speech.
That's not how it works. It's not how it's ever worked. But, somehow, I doubt these lawsuits are going away any time soon.