The ‘Institute For Free Speech’ Seems Confused About Free Speech Online

There's a very strange opinion piece over at The Hill by the chair of something called The Institute for Free Speech, Bradley Smith, basically arguing that because courts are finding that websites are protected by Section 230 while moderating in ways that some people (read: him and his friends) don't like... Congress may take away Section 230, and the way to avoid that is for sites to stop moderating content that some people (again: him and his friends) don't like... even though they have a 1st Amendment right to do so.
The piece starts out by talking about the very good 11th Circuit decision calling Florida's social media bill unconstitutional, along with the Supreme Court's decision to reinstate a lower court ruling blocking Texas' similar law from going into effect. But he uses these rulings as a jumping-off point to argue that they will cause Congress to remove Section 230.
Within these victories, however, lie the seeds of disaster for the platforms - the possible repeal, or substantial alteration, of Section 230 of the Communications Decency Act of 1996.
I mean, it's possible, though I'm not sure it would be because of those two rulings. There is bipartisan hatred of Section 230, but generally for opposite reasons, so rulings in any direction these days may cause an eager Congress to try to do something. But given that the 11th Circuit decision was based around the 1st Amendment, and barely touched on Section 230, it's weird to call out Section 230 as the issue.
The key provision of Section 230, which has been dubbed the "twenty-six words that created the internet" by cybersecurity law professor Jeff Kosseff, shields companies from liability for what others post on online platforms. Traditional publishers such as newspapers, by contrast, can be sued for what they allow in their pages.
It's always weird when people cite Jeff's book when it's clear they haven't read any of it. So, at the very least, I'd recommend that Smith actually take the time to read Jeff's book, because it would debunk some other nonsense he has in his piece.
Section 230 was never meant as a gift to Big Tech, which could hardly be said to exist in 1996. Rather, it protected the nascent internet from being crushed by lawsuits or swamped with "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" speech. Congress wanted companies to be able to exercise editorial control over that sort of content without becoming liable for everything else users post on their platforms.
First off, Section 230 was passed in response to two cases: one involving CompuServe (at the time owned by H&R Block, which was a pretty big company at the time), and one involving Prodigy (at the time owned by IBM and Sears Roebuck, also pretty large companies). So this idea that it was to protect "nascent" industries has always struck me as ahistorical.
Second, that summary of what "Congress wanted" also seems to only get a part of the story, and not the full picture. As the authors of Section 230 have stated repeatedly, the point of Section 230 wasn't just to keep websites from being crushed in this manner, but rather to let them create the kinds of communities they wanted, without fear of having to face litigation over every editorial decision. That is, it is designed as a procedural booster to the 1st Amendment - a kind of anti-SLAPP law to get rid of frivolous litigation quickly.
And, of course it was never meant to be a "gift to big tech" because it was never about the tech at all. It was meant to be a gift to free speech. That's why it is focused on (1) protecting sites that host user content and (2) protecting those users as well (something that most critics of 230 ignore).
Smith then does correctly note that if websites had to carefully screen all content, it would basically be impossible and would create a mess, and notes how much 230 has helped to build the modern internet to enable people to communicate... but then the piece goes off the rails quickly. He suggests that "big tech" is somehow abusing Section 230.
The question now is: What happens when Big Tech decides it doesn't want to let everyone speak freely?
Except, no, we already answered that question with Section 230 and the 1st Amendment much earlier: nothing happens. Companies are free to set their own editorial rules and people are free to use or not use the service based on those, and if you break the rules, the services are free to respond to that rule breaking. That's exactly what 230 intended. There's no further question that really needs to be asked or answered. But Smith thinks otherwise.
The major platforms censor users for purposes that Congress never considered or intended in 1996. Section 230 identifies only speech belonging to the categories above as appropriate for removal.
Except... this is not true. Congress absolutely considered and intended this in 1996, again, according to the very authors of Section 230. The entire intent was to allow websites to determine for themselves what kind of community they wanted, what to allow, and what not to allow, without fear of having to litigate every decision. As the authors of 230 have noted, a forum discussing Republican politics shouldn't be forced to also host Democratic talking points, and vice versa.
The line about 230 identifying "only speech belonging to the categories above as appropriate for removal" is hogwash. It's a myth promoted by people who do not understand the law, or any of the jurisprudence in the last two and a half decades around the law.
More specifically, it's misreading how the two key sections of 230, (c)(1) and (c)(2), work. (c)(1) is the key part of Section 230, and it is the "26 words." It makes no mention of categories of content. It flat out says that a website cannot be held liable for 3rd party speech. Full stop. And courts have interpreted that (correctly, according to the authors of the law) to mean that there is no liability at all that can be based on 3rd party speech - including around removals of content or other moderation choices.
The categories appear in (c)(2) - which, notably, is not part of the 26 words. There are actually very few cases exploring (c)(2), because (c)(1) covers almost all of content moderation. But in the rare cases where courts actually do consider (c)(2), they make it clear that the list of items mentioned in (c)(2) should be read broadly, with great deference to the right and ability of the website itself to determine what content it wants to allow (or not allow). Otherwise, it would nuke the entire purpose of Section 230 and implicate the 1st Amendment, inviting vexatious litigation over every editorial and moderation decision.
So Smith, here, seems confused about how Section 230 works, how (c)(1) and (c)(2) work together, and how the list of content that sites can moderate is illustrative, not comprehensive - and that it needs to be to avoid running afoul of the 1st Amendment.
Smith, however, is sure that the law wasn't intended to allow websites to take down content he doesn't like. He's wrong. It was. He also seems to have been taken in by misleading stories pushed by bad-faith actors pretending that the big social media sites are biased against conservatives. He lists a bunch of out-of-context examples (I'm not going to go through them now; we've debunked them all in the past) without noting how each of those examples actually involved breaking rules the platforms set forth, and how there were examples of those same rules being applied to left-leaning content as well. All of that disproves his theory, but he's pushing an agenda, not reality.
If the law had intended to bless the removal of any speech that platforms wish to take down, it would say so. It does not.
Except it does. First, it says a platform can't be held liable for 3rd party content, and courts have correctly (according to the bill's own authors) interpreted that to cover the removal of that content as well. And even if you have to rely on (c)(2), the courts say to construe it broadly, and that includes the "otherwise objectionable" part, which courts have correctly said must be based on what the website itself deems objectionable, not some other standard. Because if it weren't based on the platform's opinion of what's objectionable, it would interfere with the website's own 1st Amendment editorial rights.
Nevertheless, the platforms now argue that they can block anything they want, at any time, for any reason, and there is nothing any person or state can do about it.
Because that's correct. Bizarrely, Smith then admits that websites do in fact have a 1st Amendment right to moderate as they see fit. This paragraph is the most confusing one in the piece:
When courts review a platform's curation of content, they claim a publisher's First Amendment rights. But when legislatures review their liability for user speech, they suddenly transform into mere conduits deserving of special immunity. However comfortable that arrangement may be for the platforms, it is likely intolerable to Washington.
Yes. Websites have a 1st Amendment right to moderate how they wish. The Section 230 liability provisions work in concert with the 1st Amendment as a procedural benefit, because fully litigating the 1st Amendment issue is long and costly. Section 230's entire main feature is to say: "this is already decided. A website cannot be liable for its moderation decisions, since that would interfere with the 1st Amendment. Therefore, the websites are not liable; kick this lawsuit out of court now."
Big Tech's arguments are so extreme as to close the door on virtually any effort to combat its influence over our politics, or to secure fairer treatment for Americans online. If the only option left for Congress is to amend or repeal Section 230, the result could be disastrous for the companies - and dangerous for free speech.
Except... you just admitted that the 1st Amendment already protects these decisions. So why are you now saying that these arguments are "extreme" and put 230 at risk? Are Republican politicians mad that the 1st Amendment allows sites to remove their propaganda and misleading content? Sure. But at the same time, Democrats are mad that websites don't remove enough of that stuff. So the entire crux of this article - "stop removing so much content or Congress may remove 230" - doesn't make any sense, because Democrats keep threatening to remove 230 precisely because sites aren't taking down enough content. Both sides are wrong, but that doesn't make Smith's argument any more coherent. He seems to live in such a deep bubble that he doesn't realize what's going on. That's kind of embarrassing for a guy who used to run the Federal Election Commission.
The debate in the courts often plays out by analogy, as the two sides argue over whether social media is more like a newspaper or phone company, parade organizer or shopping mall. The reality, of course, is that they are none of these things exactly. A middle-ground solution might be best for all in the end, but its prospects are rapidly fading. Big Tech can celebrate for now, but they may look back and rue the day.
I mean, there's a reason why those analogies are used: because people are citing back to relevant cases about newspapers, phone companies, parades, and shopping malls. It's not like it just came out of the blue. They're citing precedent.
And what exactly is the "middle ground" you're suggesting here? Because it sure sounds like you mean that the tech companies shouldn't be free to exercise their 1st Amendment editorial rights. And that seems like a dumb position for "The Institute for Free Speech" to take.
After I complained about this article on Twitter, Smith responded to me claiming I had misread the article, and presenting a further clarification via a Twitter thread. It does not help.
Sec. 230 doesn't give platforms the power to curate ("censor")-the 1st A. does that. What 230 does is allow platforms to curate without being legally classified as "publishers" of all that appears on their platforms. /1 https://t.co/UNHg3bVlOW
- Brad Smith (@CommishSmith) August 9, 2022
He's correct about the role of the 1st Amendment here, as we already noted above, but seems oblivious to the fact that this completely undermines his argument that social media sites cannot or should not moderate content that he personally thinks they should not moderate.
He complains about the "otherwise objectionable" bit, claiming that it renders the first six categories "meaningless." Except it does not. Again, he's already deep in the (c)(2) weeds here, ignoring the much more important (c)(1), but even if we accept his framing that (c)(2) is important, he leaves out an important part BEFORE the categories. I'll put it in caps here to help him:
any action voluntarily taken in good faith to restrict access to or availability of material THAT THE PROVIDER OR USER CONSIDERS TO BE obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
It's up to the provider to decide. That's it. End of story. (Just for clarity's sake, the "user" part is for moderation decisions made by end users, which 230 also protects - and it's why 230 protects retweets, for example.)
Section 230 is not saying that it only protects those things. It's saying that the provider gets to decide.
Smith concludes his clarification... with an outright fabrication: the claim that the big tech platforms claim to be "open to all." That has never been the case. They all have terms of service and they all have rules. And that's been true since the beginning.
All of the major actors claim to be open to all and to want a free, open net. They need to live up to that-if they don't, they may find that while there is a 1st A right to curate speech on one's platform, there's no right to do so w/ immunity from libel and defamation suits. /6
- Brad Smith (@CommishSmith) August 9, 2022
And that final line is bizarre. If they have a 1st Amendment right to curate speech on their own platforms (and they do), then the only way to make that right real is to get them out of lawsuits early. Which is what Section 230 does. It protects that right by making it procedurally possible to avoid having to mount a full 1st Amendment defense (which is involved and expensive).
Again, this is something you would think that the Institute for Free Speech would understand. And support.
Yet the basic argument here is that by exercising their own free speech rights in ways that some people, including Brad Smith, don't like, Congress may seek to remove their rights. That strikes me as a counterproductive position for someone heading a free speech organization to take, but these days very little makes sense any more. Indeed, arguing "if you don't make editorial choices the government may like, the government may punish you" is a kind of mafioso threat: "hey, big tech, if you don't stop taking down the content I like, my friends in Congress may decide to punish you." What a deeply cynical and ridiculous position for a "free speech" organization to adopt.
Of course, as I was finishing this piece, a friend helpfully pointed out to me that "The Institute for Free Speech" actually filed an amicus brief in support of Texas's laughably unconstitutional anti-1st Amendment content moderation bill. So, as with so many of these organizations, the name appears to be the opposite of what they actually do. They're just your garden-variety anti-free speech, anti-1st Amendment authoritarians with a misleading name to cover up their authoritarian thuggish instincts.