Before Advocating To Repeal Section 230, It Helps To First Understand How It Works
Brian Reed's "Question Everything" podcast built its reputation on careful journalism that explores moral complexity within the journalism field. It's one of my favorite podcasts. Which makes his latest pivot so infuriating: Reed has announced he's now advocating to repeal Section 230, while demonstrating he fundamentally misunderstands what the law does, how it works, and what repealing it would accomplish.
If you've read Techdirt for basically any length of time, you'll know that I feel the exact opposite on this topic. Repealing Section 230, or really almost any proposal to reform it, would be a complete disaster for free speech on the internet, including for journalists.
The problem isn't advocacy journalism; I've been doing that myself for years. The problem is Reed's approach: decide on a solution, then cherry-pick emotional anecdotes and misleading sources to support it, while ignoring the legal experts who could explain why he's wrong. It's the exact opposite of how to do good journalism, which is unfortunate for someone who holds out his (otherwise excellent!) podcast as a place to explore how to do journalism well.
Last week, he published the first episode of his "get rid of 230" series, and it has so many problems, mistakes, and nonsense that I felt I had to write about it now, in the hopes that Brian might be more careful in future pieces. (Reed has said he plans to interview critics of his position, including me, but only after the series gets going, which seems backwards for someone advocating major legal changes.)
The framing of the piece is the conspiracy theory regarding the Sandy Hook school shootings, told through someone who used to believe it. First off, this feels like a cheap journalistic device, basing a larger argument on an emotional hook that clouds the issues and the trade-offs. The Sandy Hook shooting was horrible! The fact that some jackasses pushed conspiracy theories about it is also horrific! But it primes you, in "something must be done, this is something, we must do this" fashion, to accept Reed's preferred solution: repeal 230.
But he doesn't talk to any actual experts on 230, misrepresents what Section 230 does, misleads people about how repealing it would affect that specific (highly emotional) story, and then closes on an emotionally manipulative note: convincing the person he spoke to, who used to believe the Sandy Hook conspiracy theories, that getting rid of 230 would work, despite her lack of understanding or knowledge of what would actually happen.
In listening to the piece, it struck me that Reed is doing part of what he (somewhat misleadingly) claims social media companies are doing: using manipulative misrepresentations to keep his audience hooked and to convince them something false is true. It's a shame, but it's certainly not journalism.
Let's dig into some of the many problems with the piece.
The Framing is Manipulative
I already mentioned that the decision to frame the entire piece around one extraordinary, horrific story is manipulative, but it goes beyond that. Reed compares the fact that some of the Sandy Hook victims' families successfully sued Alex Jones for defamation over the lies and conspiracy theories he spread regarding that event to the fact that they can't sue YouTube.
But in 2022, family members of 10 of the Sandy Hook victims did win a defamation case against Alex Jones's company, and the verdict was huge. Jones was ordered to pay the family members over a billion dollars in damages.
Just this week, the Supreme Court declined to hear an appeal from Jones over it. A semblance of justice for the victims, though infuriatingly, Alex Jones filed for bankruptcy and has avoided paying them so far. But also, and this is what I want to focus on, the lawsuits are a real deterrent to Alex Jones and others who will likely think twice before lying like this again.
So now I want you to think about this. Alex Jones did not spread this lie on his own. He relied on social media companies, especially YouTube, which hosts his show, to send his conspiracy theory out to the masses. One YouTube video spouting this lie shortly after the shooting got nearly 11 million views in less than 2 weeks. And by 2018, when the family sued him, Alex Jones had 1.6 billion views on his YouTube channel. The Sandy Hook lie was laced throughout that content, burrowing its way into the psyche of millions of people, including Kate and her dad.
Alex Jones made money off of each of those views. But so did YouTube. Yet, the Sandy Hook families, they cannot sue YouTube for defaming them because of section 230.
There are a ton of important details left out of this that, if actually presented, might change the understanding here. First, while the families did win that huge verdict, much of that was because Jones defaulted. He didn't really fight the defamation case, basically ignoring court orders to turn over discovery. It was only after the default that he really tried to fight things at the remedy stage. Indeed, part of the Supreme Court cert petition that was just rejected was his claim that he didn't get a fair trial due to the default.
You simply can't assume that, because the families won that very bizarre case in which Jones treated the entire affair with contempt, they would have a case against YouTube as well. That's not how this works.
This is Not How Defamation Law Works
Reed correctly notes that the bar for defamation is high, including that there has to be knowledge to qualify, but then immediately seems to forget that. Without a prior judicial determination that specific content is defamatory, no platform, with or without Section 230, is likely to meet the knowledge standard required for liability. That's kind of important!
And I won't even get into him using the dangerously misleading "fire in a crowded theater" line:
Now this is really important to keep in mind. Freedom of speech means we have the freedom to lie. We have the freedom to spew absolute utter bullshit. We have the freedom to concoct conspiracy theories and even use them to make money by selling ads or subscriptions or what have you.
Most lies are protected by the First Amendment and they should be.
But there's a small subset of lies that are not protected speech even under the First Amendment. The old "shouting fire in a crowded theater," not necessarily protected. And similarly, lies that are defamatory aren't protected.
In order for a statement to be defamatory, okay, for the most part, whoever's publishing it has to know it's untrue and it has to cause damage to the person or the institution the statement's about. Reputational damage, emotional damage, or a lie could hurt someone's business. The bar for proving defamation is high in the US. It can be hard to win those cases.
The key part here: while there's some nuance, mostly, the publisher has to know the statement is untrue. And the bar here is very high. For liability to survive under the First Amendment, the knowledge standard is important.
It's why booksellers can't be held liable for "obscene" books on their shelves. It's why publishers aren't held liable for books they publish, even if those books lead people to eat poisonous mushrooms. The knowledge standard matters.
And even though Reed mentions the knowledge point, he seems to immediately forget it. Nor does he even attempt to deal with the question of how an algorithm can have the requisite knowledge (hint: it can't). He just brushes past that rather important part.
But it's the key to why the premise of his entire argument is flawed: just making it so anyone can sue web platforms doesn't mean anyone will win. Indeed, they'll lose in most cases. Because if you get rid of 230, the First Amendment still exists. But, because of a bunch of structural reasons explained below, it will make the world of internet speech much worse for you and me (and the journalists Reed wants to help), while actually clearing the market of competitors to the Googles and Metas of the world that Reed is hoping to punish.
That's Not How Section 230 Works
Reed's summary is simply inaccurate. And not in a "well, we can differ on how we describe it" way. He makes blatant factual errors. First, he claims that only "internet companies" get 230 protections:
These companies have a special protection that only internet companies get. We need to strip that protection away.
But that's wrong. Section 230 applies to any provider of an "interactive computer service" (which is more than just "internet companies") and their users. It's right there in the law. Because of that latter part, it has protected people forwarding emails and retweeting content. It has been used repeatedly to protect journalists on that basis. It protects you and me. It is not exclusive to "internet companies." That's just factually wrong.
The law is not, and has never been, some sort of special privilege for certain kinds of companies, but a framework for protecting speech online, by making it possible for speech-distributing intermediaries to exist in the first place. Which helps journalists. And helps you and me. Without it, there would be fewer ways in which we could speak.
Reed also appears to misrepresent or conflate a bunch of things here:
Section 230, which Congress passed in 1996, it makes it so that internet companies can't be sued for what happens on their sites. Facebook, YouTube, TikTok, they bear essentially no responsibility for the content they amplify and recommend to millions, even billions of people. No matter how much it harms people, no matter how much it warps our democracy. Under Section 230, you cannot successfully sue tech companies for defamation, even if they spread lies about you. You can't sue them for pushing a terror recruitment video on someone who then goes and kills your family member. You can't sue them for bombarding your kids with videos that promote eating disorders or that share suicide methods or sexual content.
First off, much of what he describes is First Amendment-protected speech. Second, he ignores that Section 230 doesn't apply to federal criminal law, which is what would likely cover things like terrorist content (I'm guessing he's confused based on the Supreme Court cases from a few years ago, where 230 wasn't the issue; the lack of any traceability of the terrorist attacks to the websites was).
But, generally speaking, if you're advocating for legal changes, you should be specific about what you want changed and why. Putting out a big list of stuff, some of which would be protected, some of which would not, as well as some that the law covers and some it doesn't... isn't compelling. It suggests you don't understand the basics. Furthermore, lumping things like eating disorders in with defamation and terrorist content suggests an unwillingness to deal with the specifics and the complexities. Instead, it suggests a desire for a general "why can't we pass a law that says bad stuff isn't allowed online?" But that's a First Amendment issue, not a 230 issue (as we'll explain in more detail below).
Reed also, unfortunately, seems to have been influenced by the blatantly false argument that there's a platform/publisher distinction buried within Section 230. There isn't. But it doesn't stop him from saying this:
I'm going to keep reminding you what Section 230 is, as we covered on this show, because I want it to stick. Section 230, small provision in a law Congress passed in 1996, just 26 words, but words that were so influential, they're known as "the 26 words that created the internet."
Quick fact check: Section 230 is way longer than 26 words. Yes, subsection (c)(1) is 26 words. But the rest matters too. If you're advocating to repeal a law, maybe read the whole thing?
Those words make it so that internet platforms cannot be treated as publishers of the content on their platform. It's why Sandy Hook parents could sue Alex Jones for the lies he told, but they couldn't sue the platforms like YouTube that Jones used to spread those lies.
And there is a logic to this that I think made sense when Section 230 was passed in the '90s. Back then, internet companies offered chat rooms, message boards, places where other people posted, and the companies were pretty passively transmitting those posts.
Reed has this completely backwards. Section 230 was a direct response to Stratton Oakmont v. Prodigy, where a judge ruled that Prodigy's active moderation to create a "family friendly" service made it liable for all content on the platform.
The two authors of Section 230, Ron Wyden and Chris Cox, have talked about this at length for decades. They wanted platforms to be active participants, not dumb conduits passively transmitting posts. Their fear was that, without Section 230, those services would be forced to be passive transmitters, because doing anything to the content (as Prodigy did) would make them liable for all of it. And given the amount of content, reviewing it all would be impossible.
So Cox and Wyden's solution to encourage platforms to be more than passive conduits was to say "if you do regular publishing activities, such as promoting, rearranging, and removing certain content, then we won't treat you like a publisher."
The entire point was to encourage publisher-like behavior, not discourage it.
Reed has the law's purpose exactly backwards!
That's kind of shocking for someone advocating to overturn the law! It would help to understand it first! Because if the law actually did what Reed pretends it does, I might be in favor of repeal as well! The problem is, it doesn't. And it never did.
One analogy that gets thrown around for this is that the platforms, they're like your mailman. They're just delivering somebody else's letter about the Sandy Hook conspiracy. They're not writing it themselves. And sure, that might have been true for a while, but imagine now that the mailman reads the letter he's delivering, sees it's pretty tantalizing. There's a government conspiracy to take away people's guns by orchestrating a fake school shooting, hiring child actors, and staging a massacre and a whole 911 response.
The mailman thinks, "That's pretty good stuff. People are going to like this." He makes millions of copies of the letter and delivers them to millions of people. And then as all those people start writing letters to their friends and family talking about this crazy conspiracy, the mailman keeps making copies of those letters and sending them around to more people.
And he makes a ton of money off of this by selling ads that he sticks into those envelopes. Would you say in that case the mailman is just a conduit for someone else's message? Or has he transformed into a different role? A role more like a publisher who should be responsible for the statements he or she actively chooses to amplify to the world. That is essentially what YouTube and other social media platforms are doing by using algorithms to boost certain content. In fact, I think the mailman analogy is tame for what these companies are up to.
Again, the entire framing here is backwards. It's based on Reed's false assumption, one that any expert in 230 would hopefully disabuse him of, that the reason for 230 was to encourage platforms to be "passive conduits." It's the exact opposite.
Cox and Wyden were clear (and have remained clear) that the purpose of the law was exactly the opposite. It was to give platforms the ability to create different kinds of communities and to promote/demote/moderate/delete at will.
The key point was that, because of the amount of content, no website would be willing and able to do any of this if they were potentially held liable for everything.
As for the final point, that social media companies are now way different from "the mailman," both Cox and Wyden have talked about how wrong that is. In an FCC filing a few years back, debunking some myths about 230, they pointed out that this claim of "oh, sites are different" is nonsense and misunderstands the fundamentals of the law:
Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let's make sure that every internet user has the opportunity to exercise their First Amendment rights; and let's deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.
The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230's protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today's environment.
The Understanding of How Incentives Work Under the Law is Wrong
Here's where Reed's misunderstanding gets truly dangerous. He claims Section 230 removes incentives for platforms to moderate content. In reality, it's the opposite: without Section 230, websites would have less incentive to moderate, not more.
Why? Because under the First Amendment, you need to show that the intermediary had actual knowledge of the violative nature of the content. If you remove Section 230, the best way to prove that you have no knowledge is not to look, and not to moderate.
You potentially go back to a Stratton Oakmont-style world, where the incentives are to do less moderation because any moderation you do introduces more liability. The more liability you create, the less likely someone is to take on the task. Any investigation into Section 230 has to start from understanding those basic facts, so it's odd that Reed so blatantly misrepresents them and suggests that 230 means there's no incentive to moderate:
We want to make stories that are popular so we can keep audiences paying attention and sell ads-or movie tickets or streaming subscriptions-to support our businesses. But in the world that every other media company occupies, aside from social media, if we go too far and put a lie out that hurts somebody, we risk getting sued.
It doesn't mean other media outlets don't lie or exaggerate or spin stories, but there's still a meaningful guard rail there. There's a real deterrent to make sure we're not publishing or promoting lies that are so egregious, so harmful that we risk getting sued, such as lying about the deaths of kids who were killed and their devastated parents.
Social media companies have no such deterrent and they're making tons of money. We don't know how much money, in large part because the way that kind of info usually gets forced out of companies is through lawsuits, which we can't file against these tech behemoths because of section 230. So, we don't know, for instance, how much money YouTube made from content with the Sandy Hook conspiracy in it. All we know is that they can and do boost defamatory lies as much as they want, raking in cash without any risk of being sued for it.
But this gets at a fundamental flaw that shows up in these debates: the assumption that the only possible pressure on websites is the threat of being sued. That's not just wrong; it, again, gets the purpose and function of Section 230 totally backwards.
There are tons of reasons for websites to do a better job moderating: if your platform fills up with garbage, users start to go away. As do advertisers, investors, and other partners.
This is, fundamentally, the most frustrating part about every single new person who stumbles haphazardly into the Section 230 debate without bothering to understand how the law actually works. They get the incentives exactly backwards.
230 says "experiment with different approaches to making your website safe." Taking away 230 says "any experiment you try to keep your website safe opens you up to ruinous litigation." Which one do you think leads to a healthier internet?
It Misrepresents How Companies Actually Work
Reed paints tech companies as cartoon villains, relying on simplistic and misleading interpretations of leaked documents and outdated sources. This isn't just sloppy; it's the kind of manipulative framing he'd probably critique in other contexts.
He grossly misrepresents (in a truly manipulative way!) what the documents Frances Haugen released said, just as much of the media did. For example, here's how Reed characterizes some of what Haugen leaked:
Haugen's document dump showed that Facebook leadership knew about the harms their product is causing, including disinformation and hate speech, but also product designs that were hurting children, such as the algorithm's tendency to lead teen girls to posts about anorexia. Frances Haugen told lawmakers that top people at Facebook knew exactly what the company was doing and why it was doing it.
Except... that's very much out of context. Here's how misleading Reed's characterization is. The actual internal research Haugen leaked, the stuff Reed claims shows Facebook "knew about the harms," looked like this:
[Slide from Facebook's leaked internal research: survey results on whether Instagram made teen users feel better or worse, broken out by category]
The headline of that slide sure looks bad, right? But then you look at the context, which shows that in nearly every single category studied, across boys and girls, more users found Instagram made them feel better, not worse. The only category where that wasn't true was teen girls and body image, where the split was pretty equal. That's one category out of 24 studied! And this was internal research calling out that fact because the point was to convince the company to figure out ways to better deal with that one case, not to ignore it.
And what we've heard over and over again since then is that companies have moved away from doing this kind of internal exploration, because they know that if they learn about negative impacts of their own service, it will be used against them by the media.
Reed's misrepresentation creates exactly the perverse incentive he claims to oppose: companies now avoid studying potential harms because any honest internal research will be weaponized against them by journalists who don't bother to read past the headline. Reed's approach of getting rid of 230's protections would make this even worse, not better.
Because as part of any related lawsuit there would be discovery, and you can absolutely guarantee that a study like the one above that Haugen leaked would be used in court, in a misleading way, showing just that headline, without the necessary context of "we called this out to see how we could improve."
So without Section 230 and with lawsuits, companies would have much less incentive to look for ways to improve safety online, because any such investigation would be presented as "knowledge" of the problem. Better not to look at all.
There's a similar problem with the way Reed reports on the YouTube algorithm. Reed quotes Guillaume Chaslot but doesn't mention that Chaslot left YouTube in 2013, twelve years ago. That's ancient history in tech terms. I've met Chaslot and been on panels with him. He's great! And I think his insights on the dangers of the algorithm in the early days were important work that highlighted to the world the problems of bad algorithms. But it's way out of date. And not all of the algorithms are bad.
Conspiracy theories are really easy to make. You can just make your own conspiracy theory in like one hour, shoot it, and then it can get millions of views. They're addictive because people live in this filter bubble of conspiracy theories and they don't watch the classical media. So they spend more time on YouTube.
Imagine you're someone who doesn't trust the media, you're going to spend more time on YouTube. So since you spend more time on YouTube, the algorithm thinks you're better than anybody else. The definition of better for the algorithm, it's who spends more time. So it will recommend you more. So there's like this vicious circle.
It's a vicious circle, Chaslot says, where the more conspiratorial the videos, the longer users stay on the platform watching them, the more valuable that content becomes, the more YouTube's algorithm recommends the conspiratorial videos.
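To make the dynamic Chaslot describes concrete, here's a deliberately toy sketch of a recommender that ranks purely by watch time. It's a minimal illustration of the feedback loop he's talking about, not YouTube's actual system; all the video names and numbers are made up.

```python
# A toy model of the feedback loop Chaslot describes: a recommender
# that ranks videos purely by average watch time. Everything here is
# invented for illustration; this is not YouTube's actual algorithm.

videos = {
    "cooking_tutorial":     4.0,  # average minutes watched
    "news_clip":            3.0,
    "conspiracy_deep_dive": 9.0,  # keeps distrustful viewers glued
}

def recommend(catalog):
    # "The definition of better for the algorithm is who spends more time."
    return max(catalog, key=catalog.get)

for round_num in range(1, 4):
    pick = recommend(videos)
    # Each recommendation attracts more heavy watchers, raising the
    # video's average watch time, which raises its rank next round.
    videos[pick] *= 1.1
    print(f"round {round_num}: recommending {pick} ({videos[pick]:.1f} min avg)")
```

Run it and the conspiracy video wins every round, and its lead only grows. That's the vicious circle reduced to a few lines of incentive, which is also why changing the objective (as discussed next) changes the outcome.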
Since Chaslot left YouTube, there have been a series of studies that have shown that, while some of that may have been true back when Chaslot was at the company, it hasn't been true in many, many years.
A study in 2019 (looking at data from 2016 onwards) found that YouTube's algorithm actually pushed people away from radicalizing content. A further study a couple of years ago similarly found no evidence of YouTube's algorithm sending people down these rabbit holes.
It turns out that things like Chaslot's public berating of the company, as well as public and media pressure, not to mention political blowback, had helped the company re-calibrate the algorithm away from all that.
And you know what allowed them to do that? The freedom Section 230 provided, saying that they wouldn't face any litigation liability for adjusting the algorithm.
A Total Misunderstanding of What Would Happen Absent 230
Reed's fundamental error runs deeper than just misunderstanding the law: he completely misunderstands what would happen if his "solution" were implemented. He claims that the risk of lawsuits would make the companies act better:
We need to be able to sue these companies.
Imagine the Sandy Hook families had been able to sue YouTube for defaming them in addition to Alex Jones. Again, we don't know how much money YouTube made off the Sandy Hook lies. Did YouTube pull in as much cash as Alex Jones, five times as much? A hundred times? Whatever it was, what if the victims were able to sue YouTube? It wouldn't get rid of their loss or trauma, but it could offer some compensation. YouTube's owned by Google, remember, one of the most valuable companies in the world. More likely to actually pay out instead of going bankrupt like Alex Jones.
This fantasy scenario has three fatal flaws:
First, YouTube would still win these cases. As we discussed above, there's almost certainly no valid defamation suit here. Most complained-about content will still be First Amendment-protected speech, and YouTube, as the intermediary, would still have the First Amendment and the "actual knowledge" standard to fall back on.
The only way to have actual knowledge of content being defamatory is for there to be a judgment in court about the content. So, YouTube couldn't be on the hook in this scenario until after the plaintiffs had already taken the speaker to court and received a judgment that the content was defamatory. At that point, you could argue that the platform would then be on notice and could no longer promote the content. But that wouldn't stop any of the initial harms Reed thinks it would.
Second, Reed's solution would entrench Big Tech's dominance. Getting a case dismissed on Section 230 grounds costs maybe $50k to $100k. Getting the same case dismissed on First Amendment grounds? Try $2 to $5 million.
For a company like Google or Meta, with their buildings full of lawyers, this is still pocket change. They'll win those cases. But it wipes out the market for non-Meta, non-Google-sized companies, for whom a single lawsuit (or even the threat of one) can be existential.
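To see why that asymmetry clears the field, here's a back-of-envelope calculation. Only the per-case dismissal cost ranges come from the figures above; the suit counts and revenue numbers are invented purely to show the scaling.

```python
# Rough comparison of annual legal exposure with and without Section 230.
# Per-case dismissal costs use the high end of the ranges cited above;
# the revenue and suits-per-year numbers are hypothetical.

COST_230_DISMISSAL = 100_000      # ~$50k-$100k per case (230 grounds)
COST_1A_DISMISSAL = 5_000_000     # ~$2M-$5M per case (First Amendment grounds)

platforms = {
    "giant_platform": {"revenue": 100_000_000_000, "suits_per_year": 50},
    "small_forum":    {"revenue": 2_000_000,       "suits_per_year": 5},
}

for name, p in platforms.items():
    with_230 = p["suits_per_year"] * COST_230_DISMISSAL
    without_230 = p["suits_per_year"] * COST_1A_DISMISSAL
    print(f"{name}: with 230 ${with_230:,} "
          f"({with_230 / p['revenue']:.3%} of revenue); "
          f"without 230 ${without_230:,} "
          f"({without_230 / p['revenue']:.1%} of revenue)")
```

Under those assumptions, the giant's worst case is a rounding error, while the small forum's legal bill is more than ten times its entire revenue. Tweak the numbers however you like; the shape of the result doesn't change.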
The end result: Reed's solution gives more power to the giant companies he paints as evil villains.
Third, there's vanishingly little content that isn't protected by the First Amendment. Using the Alex Jones example is distorting and manipulative, because it's one of the extremely rare cases where defamation has been shown (and that was partly just because Jones didn't really fight the case).
Reed doubles down on these errors:
But on a wider scale, the risk of massive lawsuits like this, a real threat to these companies' profits, could finally force the platforms to change how they're operating. Maybe they change the algorithms to prioritize content from outlets that fact check because that's less risky. Maybe they'd get rid of fancy algorithms altogether, go back to people getting shown posts chronologically or based on their own choice of search terms. It'd be up to the companies, but however they chose to address it, they would at least have to adapt their business model so that it incorporated the risk of getting sued when they boost damaging lies.
This shows Reed still doesn't understand the incentive structure. Companies would still win these lawsuits on First Amendment grounds. And they'd increase their odds by programming algorithms and then never reviewing content: the exact opposite of what Reed suggests he wants.
And here's where Reed's pattern of using questionable sources becomes most problematic. He quotes Frances Haugen advocating for his position, without noting that Haugen has no legal expertise on these issues:
For what it's worth, this is what Facebook whistleblower Frances Haugen argued for in Congress in 2021.
I strongly encourage reforming Section 230 to exempt decisions about algorithms. They have 100% control over their algorithms and Facebook should not get a free pass on choices it makes to prioritize growth and virality and reactiveness over public safety. They shouldn't get a free pass on that because they're paying for their profits right now with our safety. So, I strongly encourage reform of 230 in that way.
But, as we noted when Haugen said that, this is (again) getting it all backwards. At the very same time that Haugen was testifying with those words, Facebook was literally running ads all over Washington DC, encouraging Congress to reform Section 230 in this way. Facebook wants to destroy 230.
Why? Because Zuckerberg knows full well what I wrote above. Getting rid of 230 means a few expensive lawsuits that his legal team can easily win, while wiping out smaller competitors who can't afford the legal bills.
Meta's usage has been declining as users migrate to smaller platforms. What better way to eliminate that competition than making platform operation legally prohibitive for anyone without Meta's legal budget?
Notably, not a single person Reed speaks to is a lawyer. He doesn't talk to anyone who lays out the details of how all this works. He only speaks to people who dislike tech companies. Which is fine, because it's perfectly understandable to hate on big tech companies. But if you're advocating for a massive legal change, shouldn't you first understand how the law actually works in practice?
For a podcast about improving journalism, this represents a spectacular failure of basic journalistic practices. Indeed, Reed admits at the end that he's still trying to figure out how to do all this:
I'm still trying to figure out how to do this whole advocacy thing. Honestly, pushing for a policy change rather than just reporting on it. It's new to me and I don't know exactly what I'm supposed to be doing. Should I be launching a petition, raising money for like a PAC? I've been talking to marketing people about slogans for a campaign. We'll document this as I stumble my way through. It's all a bit awkward for me. So, if you have ideas for how you can build this movement to be able to sue big tech, please tell me.
There it is: "I'm still trying to figure out how to do this whole advocacy thing." Reed has publicly committed to advocating for a specific legal change, one that would fundamentally reshape how the internet works, while admitting he doesn't understand advocacy, hasn't talked to experts, and is figuring it out as he goes. Generally it's a bad idea to come up with a slogan when you still don't even understand the thing you're advocating for.
This is advocacy journalism in reverse: decide your conclusion, then do the research. It's exactly the kind of shoddy approach that Reed would rightly criticize in other contexts.
I have no problem with advocacy journalism. I've been doing it for years. But effective advocacy starts with understanding the subject deeply, consulting with experts, and then forming a position based on that knowledge. Reed has it backwards.
The tragedy is that there are so many real problems with how big tech companies operate, and there are thoughtful reforms that could help. But Reed's approach of emotional manipulation, factual errors, and backwards legal analysis makes productive conversation harder, not easier.
Maybe next time, try learning about the law first, then deciding whether to advocate for its repeal.