Five Section 230 Cases That Made Online Communities Better
The House Energy and Commerce Committee is holding a hearing tomorrow on "sunsetting" Section 230.
Despite facing criticism, Section 230 has undeniably been a cornerstone of the modern web, fostering a robust market for new services and enabling a rich diversity of ideas and expression to flourish. Crucially, Section 230 empowers platforms to maintain community integrity through the moderation of harmful content.
With that in mind, it's somewhat surprising that the proposal to sunset Section 230 has garnered Democratic support, given that the law has historically empowered social media services to remove content that perpetuates racism and bigotry, protecting marginalized communities, including LGBTQ+ individuals and people of color.
As the hearing approaches, I wanted to highlight five instances where Section 230 swiftly and effectively shielded social media platforms from lawsuits demanding that they host harmful content contrary to their community standards. Without Section 230, online services would face longer and costlier legal battles to uphold their right to moderate content - a right guaranteed by the First Amendment.
Section 230 Empowered Vimeo to Remove 'Conversion Therapy' Content
Christian pastor James Domen and Church United sued Vimeo after the platform terminated their account for posting videos promoting Sexual Orientation Change Efforts (SOCE), i.e., 'conversion therapy,' which Vimeo argued violated its content policies.
Plaintiffs argued that Vimeo's actions were not in good faith and discriminated based on sexual orientation and religion. However, the court found that the plaintiffs failed to demonstrate Vimeo acted in bad faith or targeted them discriminatorily.
The District Court initially dismissed the lawsuit, ruling that Vimeo was protected under Section 230 for its content moderation decisions. On appeal, the Second Circuit upheld the lower court's dismissal, emphasizing that Vimeo's actions fell within the protections of Section 230 and particularly noting that content moderation decisions are at the platform's discretion when conducted in good faith. [Note: a third revision of the Court's opinion omitted Section 230; however, the case remains a prominent example of how Section 230 ensures the early dismissal of content removal cases.]
In upholding Vimeo's decision to remove content promoting conversion therapy, the Court reinforced that Section 230 protects platforms when they choose to enforce community standards that aim to maintain a safe and inclusive environment for all users, including individuals who identify with LGBTQ+ communities.
Notably, the case also illustrates how platforms can be safeguarded against lawsuits that may attempt to reinforce the privilege of majority groups under the guise of discrimination claims.
Case: Domen v. Vimeo, Inc., 20-616-cv (2d Cir. Sept. 24, 2021).
Section 230 Empowered Twitter to Remove Intentional Dead-Naming & Mis-Gendering
Meghan Murphy, a self-proclaimed feminist writer from Vancouver, ignited controversy with a series of tweets in January 2018 targeting Hailey Heartless, a transgender woman. Murphy's posts, which included referring to Heartless as a "white man" and labeling her a "trans-identified male/misogynist," clearly violated Twitter's guidelines at the time by using male pronouns and mis-gendering Heartless.
Twitter responded by temporarily suspending Murphy's account, citing violations of its Hateful Conduct Policy. Despite this, Murphy persisted in her discriminatory rhetoric, posting additional tweets that challenged and mocked transgender identities. This pattern of behavior led to a permanent ban in November 2018, after Murphy repeatedly engaged in what Twitter identified as hateful conduct, including dead-naming and mis-gendering other transgender individuals.
In response, Murphy sued Twitter alleging, among other claims, that Twitter had engaged in viewpoint discrimination. Both the district and appellate courts held that the actions taken by Twitter to enforce its policies against hateful conduct were consistent with Section 230.
The case of Meghan Murphy underscores the pivotal role of Section 230 in empowering platforms like Twitter to maintain safe and inclusive environments for all users, including those identifying as LGBTQ+.
Case: Murphy v. Twitter, Inc., 2021 WL 221489 (Cal. App. Ct. Jan. 22, 2021).
Section 230 Empowered Twitter to Remove Hateful & Derogatory Content
In 2018, Robert M. Cox tweeted a highly controversial statement criticizing Islam, which led to Twitter suspending his account.
"Islam is a Philosophy of Conquests wrapped in Religious Fantasy & uses Racism, Misogyny, Pedophilia, Mutilation, Torture, Authoritarianism, Homicide, Rape . . . Peaceful Muslims are Marginal Muslims who are Heretics & Hypocrites to Islam. Islam is . . ."
To regain access, Cox was required to delete the offending tweet and others similar in nature. Cox then sued Twitter, seeking reinstatement and damages, claiming that Twitter had unfairly targeted his speech. The South Carolina District Court, however, upheld the suspension, citing Section 230:
"the decision to furnish an account, or prohibit a particular user from obtaining an account, is itself publishing activity. Therefore, to the extent Plaintiff seeks to hold the Defendant liable for exercising its editorial judgment to delete or suspend his account as a publisher, his claims are barred by 230(c) of the CDA."
In other words, actions taken upon third-party content, such as content removal and account termination, are wholly within the scope of Section 230 protection.
Like the Murphy case, Cox v. Twitter emphasizes the importance of Section 230 in empowering platforms like Twitter to decisively and swiftly remove hateful content, maintaining a healthier online environment without getting bogged down in lengthy legal disputes.
Case: Cox v. Twitter, Inc., 2:18-2573-DCN-BM (D.S.C.).
Section 230 Empowered Facebook to Remove Election Disinformation
In April 2018, Facebook took action against the Federal Agency of News (FAN) by shutting down its Facebook account and page. Facebook cited violations of its community guidelines, emphasizing that the closures were part of a broader initiative against accounts controlled by the Internet Research Agency (IRA), a group accused of manipulating public discourse during the 2016 U.S. presidential election. This action was part of Facebook's ongoing efforts to strengthen its security protocols and prevent similar interference in the future.
In response, FAN filed a lawsuit against Facebook, leading to a legal battle centered on whether Facebook's actions violated FAN's First Amendment or other legal rights. The Court, however, determined that Facebook was not a state actor, nor had it engaged in any joint action with the government that would subject it to First Amendment constraints. The court also dismissed FAN's claims for damages under Section 230.
In an attempt to avoid Section 230, FAN argued that Facebook's promotion of FAN's content via Facebook's recommendation algorithms converts FAN's content into Facebook's content. The Court didn't buy it:
"Plaintiffs make a similar argument - that recommending FAN's content to Facebook users through advertisements makes Facebook a provider of that content. The Ninth Circuit, however, held that such actions do not create content in and of themselves."
The FAN case illustrates the critical role Section 230 plays in empowering platforms like Facebook to decisively address and mitigate election-related disinformation. By shielding platforms that act swiftly against entities that violate their terms of service, particularly those involved in spreading divisive or manipulative content, Section 230 ensures that social media services can remain vigilant guardians against the corruption of public discourse.
Case: Federal Agency of News LLC v. Facebook, Inc., 2020 WL 137154 (N.D. Cal. Jan. 13, 2020).
Section 230 Empowered Facebook to Ban Hateful Content
Laura Loomer, an alt-right activist, filed lawsuits against Facebook (and Twitter) after her account was permanently banned. Facebook labeled Loomer as "dangerous," a designation she argued was both wrongful and harmful to her professional and personal reputation. Facebook's classification of Loomer under this term was based on its assessment that her activities and statements online aligned with behaviors that promote or engage in violence and hate:
"To the extent she alleges Facebook called her 'dangerous' by removing her accounts pursuant to its DIO policy and describing its policy generally in the press, the law is clear that calling someone 'dangerous' - or saying that she 'promoted' or 'engaged' in 'hate' - is a protected statement of opinion. Even if it were not, Ms. Loomer cannot possibly meet her burden to prove that it would be objectively false to describe her as 'dangerous' or 'promoting or engaging in hate' given her widely reported controversial public statements. To the extent Ms. Loomer is claiming, in the guise of a claim for 'defamation by implication,' that Facebook branded her a 'terrorist' or accused her of conduct that would also violate the DIO policy, Ms. Loomer offers no basis to suggest (as she must) that Facebook ever intended or endorsed that implication."
Loomer challenged Facebook's decision on the grounds of censorship and discrimination against her political viewpoints. However, the Court ruled in favor of Facebook, citing Section 230 among other grounds, and emphasized that, as a private company, Facebook has the right to enforce its community standards and policies, including removing users it deems to have violated them.
Case: Loomer v. Zuckerberg, 2023 WL 6464133 (N.D. Cal. Sept. 30, 2023).
Jess Miers is Senior Counsel to the Chamber of Progress and a Section 230 expert. This post originally appeared on Medium and is republished here with permission.