As Old Session Of Congress Closed, One Final Bipartisan Bill To Pressure Websites To Censor Controversial Content Is Introduced
A new Congress has begun, but in the waning days of the last one, we got one final bipartisan bill to "amend" Section 230. It officially died with the last Congress, but it sure is a sign of what to expect from this new one (introducing it at the very end of the session with no chance to go anywhere is known as a "messaging" bill, alerting others in Congress about legislation these troublemakers are interested in pushing). In this case, the bill is bipartisan, coming from Reps. David Cicilline and Ken Buck, who teamed up in their seething, unmoored-from-reality, moral-panic hatred of "tech" multiple times in the last Congress.
This bill is called the Platform Integrity Act, and the very fact that they announced the bill without releasing the actual legislative text should suggest how serious they are about it.

However, the description of the bill in Cicilline's press release is pretty clear (and, not for the first time, suggests that Cicilline and Buck both need remedial education on how the 1st Amendment actually works).
The Platform Integrity Act would:
- Offer a simple and common-sense clarification of the scope of 47 U.S.C. 230(c)(1) by removing a bar to recovery for victims who have suffered harm from acts of terrorism, hate, or extremism enabled by online platforms' content suggestions.
- Reject the judicial misinterpretation of 47 U.S.C. 230(c)(1) whereby courts have concluded, for example, that the statute bars victims of terrorist attacks from seeking relief from a social-media company for its proactive role connecting the perpetrators through friend- and content-suggestion algorithms.
- Adopt the correct interpretation of the statute reflected in the separate opinion of the late Honorable Robert Katzmann in Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), wherein he concluded that it "strains the English language" to construe 47 U.S.C. 230(c)(1) to say that in "targeting and recommending [extremist] writings to users," thereby "forging connections" and "developing new social networks," online platforms are protected from liability by the statute.
- Apply only to content that the platform actively promotes, leaving in place Section 230(c)(2)'s protection of platforms' good-faith application of terms of service and community guidelines.
We've heard suggestions like this before from others, and it shows a profound lack of understanding about how any of this actually works.
Let's go through each of the bullet points to explain what the confusion is (though, honestly, in an ideal world, I shouldn't be explaining how the 1st Amendment works to two sitting Congressional Reps.).
Offer a simple and common-sense clarification of the scope of 47 U.S.C. 230(c)(1) by removing a bar to recovery for victims who have suffered harm from acts of terrorism, hate, or extremism enabled by online platforms' content suggestions.
This won't help in the way that Cicilline and Buck think, and will only lead to serious problems. As we've discussed about other Section 230 "reform" bills, the 1st Amendment requires actual knowledge by a distributor of the illegality of the content in question (not just the existence of the content itself). As the Supreme Court noted in Smith v. California, if you start blaming distributors for issues without their direct knowledge, then massive speech suppression is the likely result:
There is no specific constitutional inhibition against making the distributors of goods the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller. By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public's access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: 'Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.' The King v. Ewart, 25 N.Z.L.R. 709, 729 (C.A.). And the bookseller's burden would become the public's burden, for by restricting him the public's access to reading matter would be restricted. If the contents of bookshops and periodical stands were restricted to material of which their proprietors had made an inspection, they might be depleted indeed. The bookseller's limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public's access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. Through it, the distribution of all books, both obscene and not obscene, would be impeded.
So, even if this bill removes Section 230(c)(1) protections for platforms sued over "harm from acts of terrorism, hate, or extremism," the platforms are not going to be held liable anyway, because they'll be protected under the 1st Amendment, unless the victims can show that they knew they were promoting content that was somehow illegal.
And that takes us to the second 1st Amendment issue with this bill: "hate" and "extremism" still remain legal under the 1st Amendment. Terrorism, somewhat obviously, is not protected speech, but this bill isn't actually about "terrorism" because you don't commit terrorism via social media. What they're talking about are a bunch of bogus ambulance-chaser nonsense lawsuits in which big internet companies were sued because terrorists used social media.
But even in those cases, the nexus between the "social media" and the "act of terrorism" is so disconnected that the claims would never survive a lawsuit anyway.
These are frivolous nuisance cases that are designed for one purpose only: to try to get the companies to cough up settlements, since settling will be cheaper for them than going through a full court case. And, of course, this bill only serves to increase the likelihood of such things, because Section 230's main benefit is to get these kinds of cases kicked out at the earliest possible moment. That is, with 230, the companies can file a motion to dismiss at the beginning of a lawsuit, and usually get it dismissed without having to go through the more expensive parts of a lawsuit.
Without Section 230, and having to rely on a full 1st Amendment argument, the cases go on much longer, and are way more costly. This means it will encourage companies to (1) just pay up or (2) suppress any content that might lead to such a lawsuit even if that content is legally protected.
So this bill serves no legitimate interests, is on very shaky constitutional grounds, and seems to only encourage nuisance lawsuits against internet companies and settlements in response to those lawsuits. It's difficult to see how that benefits anyone other than some ambulance-chasing tort lawyers.
Reject the judicial misinterpretation of 47 U.S.C. 230(c)(1) whereby courts have concluded, for example, that the statute bars victims of terrorist attacks from seeking relief from a social-media company for its proactive role connecting the perpetrators through friend- and content-suggestion algorithms.
Again, there is no underlying cause of action here. Recommending someone be a friend or recommending content is, in itself, protected 1st Amendment activity, because recommendations are opinions and opinions are protected speech. The idea that an internet company would magically know that recommended content or a recommended friend might somehow be connected to a terrorist organization is nonsense, and without that underlying knowledge, no lawsuit could possibly succeed in the first place.
But, as with the first bullet, it will encourage more frivolous nuisance lawsuits, and corresponding settlements (potentially along with greater speech suppression to avoid them).
Adopt the correct interpretation of the statute reflected in the separate opinion of the late Honorable Robert Katzmann in Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), wherein he concluded that it "strains the English language" to construe 47 U.S.C. 230(c)(1) to say that in "targeting and recommending [extremist] writings to users," thereby "forging connections" and "developing new social networks," online platforms are protected from liability by the statute.
This one kinda gives away the game. As we've noted, Force v. Facebook is one of the most obvious and extreme examples of a frivolous lawsuit designed solely to try to shake down social media for money. Section 230 helped get the case tossed out quickly rather than having to go through a huge, long, expensive process... which absolutely would have ended in a loss for Force as well, just after a lot more money and time had been wasted.
So, one needs to ask: why do Reps. Cicilline and Buck want to enable frivolous ambulance-chaser lawsuits that can't win under the 1st Amendment, but only serve to cost tech companies a lot more money in wasteful legal fees?
Apply only to content that the platform actively promotes, leaving in place Section 230(c)(2)'s protection of platforms' good-faith application of terms of service and community guidelines.
This last point is kind of silly and is there to appease some supporters of Section 230, because other bills have looked to weaken Section 230(c)(2), the part of the law regarding "good faith" efforts to moderate content, which gets lots of attention from people who don't understand it. As we've discussed, most cases around moderation actually rely on (c)(1)'s prohibition on holding a website liable for speech from its users, but people who don't follow the law closely assume (incorrectly) that (c)(2) is what enables websites to moderate (it isn't: the 1st Amendment is what actually protects moderation decisions).
All in all, this is a silly bill that doesn't help anyone other than the lawyers who file frivolous lawsuits. So why do Cicilline and Buck think this is necessary? And why did they release it as the last Congress closed out?