5th Circuit Rewrites A Century Of 1st Amendment Law To Argue Internet Companies Have No Right To Moderate

by
Mike Masnick
from Techdirt

As far as I can tell, in the area where the 5th Circuit appeals court has jurisdiction, websites no longer have any 1st Amendment editorial rights. That's the result of what appears to me to be the single dumbest court ruling I've seen in a long, long time, and I know we've seen some crazy rulings of late. However, thanks to Judge Andy Oldham, internet companies no longer have 1st Amendment rights regarding their editorial decision-making.

Let's take a step back. As you'll recall, last summer, in a fit of censorial rage, the Texas legislature passed HB 20, a dangerously unconstitutional bill that would bar social media websites from moderating as they see fit. As we noted, the bill opens up large websites to a lawsuit over basically every content moderation decision they make (and that's just one of the problems). Pretty quickly, a district court judge tossed out the entire law as unconstitutional in a careful, thorough ruling that explained why every bit of the law violated websites' own 1st Amendment rights to put in place their own editorial policies.

On appeal to the 5th Circuit, the court did something bizarre: without giving any reason or explanation at all, it reinstated the law and promised a ruling at some future date. This was procedurally problematic, leading the social media companies (represented by two of their trade groups, NetChoice and CCIA) to ask the Supreme Court to slow things down a bit, which is exactly what the Supreme Court did.

Parallel to all of this, Florida had passed a similar law, and again a district court had found it obviously unconstitutional. That, too, was appealed, yet in the 11th Circuit the court rightly agreed with the lower court that the law was (mostly) unconstitutional. That teed things up for Florida to ask the Supreme Court to review the issue.

However, remember, back in May when the 5th Circuit initially reinstated the law, it said it would come out with its full ruling later. Over the last few months I've occasionally pondered (sometimes on Twitter) whether the 5th Circuit would ever get around to actually releasing an opinion. And that's what it just did. And, as 1st Amendment lawyer Ken White notes, it's "the most angrily incoherent First Amendment decision I think I've ever read."

It is difficult to state how completely disconnected from reality this ruling is, and how dangerously incoherent it is. It effectively says that companies no longer have a 1st Amendment right to their own editorial policies. Under this ruling, any state in the 5th Circuit could, in theory, mandate that news organizations must cover certain politicians or certain other content. It could, in theory, allow a state to mandate that any news organization must publish opinion pieces by politicians. It completely flies in the face of the 1st Amendment's association rights and the right to editorial discretion.

There's going to be plenty to say about this ruling, which will go down in the annals of history as a complete embarrassment to the judiciary, but let's hit the lowest points. The crux of the ruling, written by Judge Andy Oldham, is as follows:

Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.

Considering just how long Republicans (and Oldham was a Republican political operative before being appointed to the bench) have spent insisting that corporations have 1st Amendment rights, this is a major turnaround, and (as noted) an incomprehensible one. Frankly, Oldham's arguments sound much more like the arguments made by ignorant trolls in our comments than anyone with any knowledge or experience with 1st Amendment law.

I mean, it's as if Judge Oldham has never heard of the 1st Amendment's prohibition on compelled speech.

First, the primary concern of overbreadth doctrine is to avoid chilling speech. But Section 7 does not chill speech; instead, it chills censorship. So there can be no concern that declining to facially invalidate HB 20 will inhibit the marketplace of ideas or discourage commentary on matters of public concern. Perhaps as-applied challenges to speculative, now-hypothetical enforcement actions will delineate boundaries to the law. But in the meantime, HB 20's prohibitions on censorship will cultivate rather than stifle the marketplace of ideas that justifies the overbreadth doctrine in the first place.

Judge Oldham insists that concerns about forcing websites to post speech from Nazis, terrorist propaganda, and Holocaust denial are purely hypothetical. Really.

The Platforms do not directly engage with any of these concerns. Instead, their primary contention - beginning on page 1 of their brief and repeated throughout and at oral argument - is that we should declare HB 20 facially invalid because it prohibits the Platforms from censoring "pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s]." Red Br. at 1.

Far from justifying pre-enforcement facial invalidation, the Platforms' obsession with terrorists and Nazis proves the opposite. The Supreme Court has instructed that "[i]n determining whether a law is facially invalid," we should avoid "speculat[ing] about 'hypothetical' or 'imaginary' cases." Wash. State Grange, 552 U.S. at 449-50. Overbreadth doctrine has a "tendency . . . to summon forth an endless stream of fanciful hypotheticals," and this case is no exception. United States v. Williams, 553 U.S. 285, 301 (2008). But it's improper to exercise the Article III judicial power based on "hypothetical cases thus imagined." Raines, 362 U.S. at 22; cf. Sineneng-Smith, 140 S. Ct. at 1585-86 (Thomas, J., concurring) (explaining the tension between overbreadth adjudication and the constitutional limits on judicial power).

These are not hypotheticals. This is literally what these websites have to deal with on a daily basis. And which, under Texas' law, they no longer could do.

Oldham continually focuses (incorrectly and incoherently) on the idea that editorial discretion is censorship. There's a reason that we've spent the last few years explaining how the two are wholly different - and part of it was to avoid people like Oldham getting confused. Apparently it didn't work.

We reject the Platforms' efforts to reframe their censorship as speech. It is undisputed that the Platforms want to eliminate speech - not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment's protections for free speech into protections for free censoring.

That paragraph alone is scary. It basically argues that the state can now compel any speech it wants on private property, as it reinterprets the 1st Amendment to mean that the only thing it limits is the power of the state to remove speech, while leaving open the power of the state to foist speech upon private entities. That's ridiculous.

Oldham then tries to square this by... pulling in wholly unrelated issues around the few rare, limited, fact-specific cases where the courts have allowed compelled speech.

Supreme Court precedent instructs that the freedom of speech "includes the right to refrain from speaking at all." Wooley v. Maynard, 430 U.S. 705, 714 (1977); see also W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943). So the State may not force a private speaker to speak someone else's message. See Wooley, 430 U.S. at 714.

But the State can regulate conduct in a way that requires private entities to host, transmit, or otherwise facilitate speech. Were it otherwise, no government could impose nondiscrimination requirements on, say, telephone companies or shipping services. But see 47 U.S.C. § 202(a) (prohibiting telecommunications common carriers from "mak[ing] any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services"). Nor could a State create a right to distribute leaflets at local shopping malls. But see PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 88 (1980) (upholding a California law protecting the right to pamphleteer in privately owned shopping centers). So First Amendment doctrine permits regulating the conduct of an entity that hosts speech, but it generally forbids forcing the host itself to speak or interfering with the host's own message.

From there, he argues that forcing websites to host speech they disagree with is not compelled speech.

The Platforms are nothing like the newspaper in Miami Herald. Unlike newspapers, the Platforms exercise virtually no editorial control or judgment. The Platforms use algorithms to screen out certain obscene and spam-related content. And then virtually everything else is just posted to the Platform with zero editorial control or judgment.

Except that's the whole point. The websites do engage in editorial control. The difference from newspapers is that it's ex post control. If there are complaints, they will review the content afterwards to see whether it matches their editorial policies (i.e., terms of use). So, basically, Oldham is simply wrong here. They do exercise editorial control. That they use it sparingly does not mean they give up the right. Yet Oldham thinks otherwise.

From there, Oldham literally argues there is no editorial discretion under the 1st Amendment. Really.

Premise one is faulty because the Supreme Court's cases do not carve out "editorial discretion" as a special category of First-Amendment-protected expression. Instead, the Court considers editorial discretion as one relevant consideration when deciding whether a challenged regulation impermissibly compels or restricts protected speech.

To back this up, the court cites Turner v. FCC, which has recently become a misleading favorite among those who are attacking Section 230. But the Turner case really turned on some pretty specific facts about cable TV versus broadcast TV which are not at all in play here.

Oldham also states that content moderation isn't editorial discretion, even though it literally is.

Even assuming "editorial discretion" is a freestanding category of First-Amendment-protected expression, the Platforms' censorship doesn't qualify. Curiously, the Platforms never define what they mean by "editorial discretion." (Perhaps this casts further doubt on the wisdom of recognizing editorial discretion as a separate category of First-Amendment-protected expression.) Instead, they simply assert that they exercise protected editorial discretion because they censor some of the content posted to their Platforms and use sophisticated algorithms to arrange and present the rest of it. But whatever the outer bounds of any protected editorial discretion might be, the Platforms' censorship falls outside it. That's for two independent reasons.

And here it gets really stupid. The ruling argues that because of Section 230, internet websites can't claim editorial discretion. This is a ridiculously confused misreading of 230.

First, an entity that exercises "editorial discretion" accepts reputational and legal responsibility for the content it edits. In the newspaper context, for instance, the Court has explained that the role of "editors and editorial employees" generally includes "determin[ing] the news value of items received" and taking responsibility for the accuracy of the items transmitted. Associated Press v. NLRB, 301 U.S. 103, 127 (1937). And editorial discretion generally comes with concomitant legal responsibility. For example, because of a newspaper's "editorial judgments in connection with an advertisement," it may be held liable when with actual malice it publishes a "falsely defamatory" statement in an ad. Pittsburgh Press Co. v. Pittsburgh Comm'n on Human Rels., 413 U.S. 376, 386 (1973). But the Platforms strenuously disclaim any reputational or legal responsibility for the content they host. See supra Part III.C.2.a (quoting the Platforms' adamant protestations that they have no responsibility for the speech they host); infra Part III.D (discussing the Platforms' representations pertaining to 47 U.S.C. § 230)

Then, he argues that there's some sort of fundamental difference between exercising editorial discretion before or after the content is posted:

Second, editorial discretion involves "selection and presentation" of content before that content is hosted, published, or disseminated. See Ark. Educ. Television Comm'n v. Forbes, 523 U.S. 666, 674 (1998); see also Miami Herald, 418 U.S. at 258 (a newspaper exercises editorial discretion when selecting the "choice of material" to print). The Platforms do not choose or select material before transmitting it: They engage in viewpoint-based censorship with respect to a tiny fraction of the expression they have already disseminated. The Platforms offer no Supreme Court case even remotely suggesting that ex post censorship constitutes editorial discretion akin to ex ante selection. They instead baldly assert that it is "constitutionally irrelevant at what point in time platforms exercise editorial discretion." Red Br. at 25. Not only is this assertion unsupported by any authority, but it also illogically equates the Platforms' ex post censorship with the substantive, discretionary, ex ante review that typifies "editorial discretion" in every other context

So, if I read that correctly, websites can now continue to moderate only if they pre-vet all content they post. Which is also nonsense.

From there, Oldham goes back to Section 230, where he again gets the analysis exactly backwards. He argues that Section 230 alone makes HB 20's provisions constitutional, because it says that you can't treat user speech as the platform's speech:

We have no doubts that Section 7 is constitutional. But even if some were to remain, 47 U.S.C. § 230 would extinguish them. Section 230 provides that the Platforms shall [not] "be treated as the publisher or speaker" of content developed by other users. Id. § 230(c)(1). Section 230 reflects Congress's judgment that the Platforms do not operate like traditional publishers and are not "speak[ing]" when they host user-submitted content. Congress's judgment reinforces our conclusion that the Platforms' censorship is not speech under the First Amendment.

[....]

Section 230 undercuts both of the Platforms' arguments for holding that their censorship of users is protected speech. Recall that they rely on two key arguments: first, they suggest the user-submitted content they host is their speech; and second, they argue they are publishers akin to a newspaper. Section 230, however, instructs courts not to treat the Platforms as the "publisher or speaker" of the user-submitted content they host. Id. § 230(c)(1). And those are the exact two categories the Platforms invoke to support their First Amendment argument. So if § 230(c)(1) is constitutional, how can a court recognize the Platforms as First-Amendment-protected speakers or publishers of the content they host?

Oldham misrepresents the arguments of websites that support Section 230, claiming that by using 230 to defend their moderation choices they have claimed in court they are "neutral tools" and "simple conduits of speech." But that completely misrepresents what has been said and how this plays out.

It's an upside down and backwards misrepresentation of how Section 230 actually works.

Oldham also rewrites part of Section 230 to make it work the way he wants it to. Again, this reads like some of our trolls, rather than how a jurist is supposed to act:

The Platforms' only response is that in passing § 230, Congress sought to give them an unqualified right to control the content they host - including through viewpoint-based censorship. They base this argument on § 230(c)(2), which clarifies that the Platforms are immune from defamation liability even if they remove certain categories of "objectionable" content. But the Platforms' argument finds no support in § 230(c)(2)'s text or context. First, § 230(c)(2) only considers the removal of limited categories of content, like obscene, excessively violent, and similarly objectionable expression. It says nothing about viewpoint-based or geography-based censorship. Second, read in context, § 230(c)(2) neither confers nor contemplates a freestanding right to censor. Instead, it clarifies that censoring limited categories of content does not remove the immunity conferred by § 230(c)(1). So rather than helping the Platforms' case, § 230(c)(2) further undermines the Platforms' claim that they are akin to newspapers for First Amendment purposes. That's because it articulates Congress's judgment that the Platforms are not like publishers even when they engage in censorship.

Except that Section 230 does not say "similarly objectionable." It says "otherwise objectionable." By switching "otherwise objectionable" to "similarly objectionable," Oldham is insisting that courts like his own get to determine what counts as "similarly objectionable," and that alone is a clear 1st Amendment problem. The courts cannot decide what content a website finds objectionable. That is, yet again, the state intruding on the editorial discretion of a website.

Also, completely ridiculously, Oldham leaves out that (c)(2) does not just include that list of objectionable categories, but it states: "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." In other words, the law explicitly states that whether or not something falls into that list is up to the provider or user and not the state. To leave that out of his description of (c)(2) is beyond misleading.

Also notable: Oldham completely ignores the fact that Section 230 pre-empts state laws like Texas's, saying that "no liability may be imposed under any State or local law that is inconsistent with this section." I guess Oldham is arguing that Texas's law somehow is not inconsistent with 230, but it certainly is inconsistent with two and a half decades of 230 jurisprudence.

There's then a long and, again, nonsensical discussion of common carriers, basically saying that the state can magically declare social media websites common carriers. I'm not even going to give that argument the satisfaction of covering it, it is so disconnected from reality. Social media literally meets none of the classifications of traditional common carriers. The fact that Oldham claims that the Platforms are "no different than Verizon or AT&T" makes me question how anyone could take anything in this ruling seriously.

I'm also going to skip over the arguments for why the "transparency" bits are constitutional according to the 5th Circuit, other than to note that California must be happy, because under this ruling its new social media transparency laws would also be deemed constitutional even if they now conflict with Texas's (that'll be fun).

There are a few notable omissions from the ruling. It never mentions Reno v. ACLU, which seems incredibly relevant given its discussion of how the internet and the 1st Amendment work together, and is glaring in its absence. Second, it completely breezes past Justice Kavanaugh's ruling in the Halleck case, which clearly established that under the First Amendment "a private entity may thus exercise editorial discretion over the speech and speakers in the forum." The only mention of the ruling is in a single footnote, claiming that ruling only applies to "public forums" and saying it's distinct from the issue raised here. But, uh, the quote (and much of the ruling) literally says the opposite. It's talking about private forums. This is ridiculous. Third, as noted, the ruling ignores the pre-emption aspects of Section 230. Fourth, while it discusses the 11th Circuit's ruling regarding Florida's law, it tries to distinguish the two (while also highlighting where the two Circuits disagree to set up the inevitable Supreme Court battle). Finally, it never addresses the fact that the Supreme Court put its original "turn the law back on" ruling on hold. Apparently Oldham doesn't much care.

The other two judges on the panel also provided their own, much shorter opinions, with Judge Edith Jones concurring and just doubling down on Oldham's nonsense. There is an opinion from Judge Leslie Southwick that is a partial concurrence and partial dissent. It concurs on the transparency stuff, but dissents regarding the 1st Amendment.

The majority frames the case as one dealing with conduct and unfair censorship. The majority's rejection of First Amendment protections for conduct follows unremarkably. I conclude, though, that the majority is forcing the picture of what the Platforms do into a frame that is too small. The frame must be large enough to fit the wide-ranging, free-wheeling, unlimited variety of expression - ranging from the perfectly fair and reasonable to the impossibly biased and outrageous - that is the picture of the First Amendment as envisioned by those who designed the initial amendments to the Constitution. I do not celebrate the excesses, but the Constitution wisely allows for them.

The majority no doubt could create an image for the First Amendment better than what I just verbalized, but the description would have to be similar. We simply disagree about whether speech is involved in this case. Yes, almost none of what others place on the Platforms is subject to any action by the companies that own them. The First Amendment, though, is what protects the curating, moderating, or whatever else we call the Platforms' interaction with what others are trying to say. We are in a new arena, a very extensive one, for speakers and for those who would moderate their speech. None of the precedents fit seamlessly. The majority appears assured of their approach; I am hesitant. The closest match I see is caselaw establishing the right of newspapers to control what they do and do not print, and that is the law that guides me until the Supreme Court gives us more.

Judge Southwick then dismantles, bit by bit, each of Oldham's arguments regarding the 1st Amendment and basically highlights how his much younger colleague is clearly misreading a few outlier Supreme Court rulings.

It's a good read, but this post is long enough already. I'll just note this point from Southwick's dissent:

In no manner am I denying the reasonableness of the governmental interest. When these Platforms, that for the moment have gained such dominance, impose their policy choices, the effects are far more powerful and widespread than most other speakers' choices. The First Amendment, though, is not withdrawn from speech just because speakers are using their available platforms unfairly or when the speech is offensive. The asserted governmental interest supporting this statute is undeniably related to the suppression of free expression. The First Amendment bars the restraints.

This resonated with me quite a bit, and drove home the problem with Oldham's argument. It is the equivalent of one of Ken White's famed free speech tropes. Oldham pointed to the outlier cases where some compelled speech was found constitutional, and turned that automatically into "if some compelled speech is constitutional, then it's okay for this compelled speech to be constitutional."

But that's not how any of this works.

Southwick also undermines Oldham's common carrier arguments and his Section 230 arguments, noting:

Section 230 also does not affect the First Amendment right of the Platforms to exercise their own editorial discretion through content moderation. My colleague suggests that "Congress's judgment" as expressed in 47 U.S.C. § 230 "reinforces our conclusion that the Platforms' censorship is not speech under the First Amendment." Maj. Op. at 39. That opinion refers to this language: "No provider or user of an interactive computer service" - interactive computer service being a defined term encompassing a wide variety of information services, systems, and access software providers - "shall be treated as the publisher or speaker of any information provided by another content provider." 47 U.S.C. § 230(c)(1). Though I agree that Congressional fact-findings underlying enactments may be considered by courts, the question here is whether the Platforms' barred activity is an exercise of their First Amendment rights. If it is, Section 230's characterizations do not transform it into unprotected speech.

The Platforms also are criticized for what my colleague sees as an inconsistent argument: the Platforms analogize their conduct to the exercise of editorial discretion by traditional media outlets, though Section 230 by its terms exempts them from traditional publisher liability. This may be exactly how Section 230 is supposed to work, though. Contrary to the contention about inconsistency, Congress in adopting Section 230 never factually "determined that the Platforms are not 'publishers.'" Maj. Op. at 41. As one of Section 230's co-sponsors - former California Congressman Christopher Cox, one of the amici here - stated, Section 230 merely established that the platforms are not to be treated as the publishers of pieces of content when they take up the mantle of content moderation, which was precisely the problem that Section 230 set out to solve: "content moderation . . . is not only consistent with Section 230; its protection is the very raison d'etre of Section 230." In short, we should not force a false dichotomy on the Platforms. There is no reason that a platform must be classified for all purposes as either a publisher or a "mere conduit." In any case, as Congressman Cox put it, "because content moderation is a form of editorial speech, the First Amendment more fully protects it beyond the specific safeguards enumerated in § 230(c)(2)." I agree.

Anyway, that's the quick analysis of this mess. There will be more to come, and I imagine this will be an issue for the Supreme Court to sort out. I wish I had confidence that they would not contradict themselves, but I'm not sure I do.

The future of how the internet works is very much at stake with this one.
