
If You Don’t Want EU Style Censorship To Take Over The Internet, Support Section 230

by
Mike Masnick
from Techdirt on (#67J8R)

Last summer, I mocked the EU a bit for setting up a new office in Silicon Valley, and sending an official here to "liaise with Silicon Valley companies affected by EU tech regulation," noting how it felt weird to have EU internet police setting up shop in Silicon Valley. Given that, I was a bit surprised that the new office invited me to "moderate" a panel discussion last month about the Digital Services Act (DSA), a law I have regularly criticized and which I think is going to be dangerous for free speech on the internet.

There was no recording of the actual panel, but much of it involved me, as moderator, and Berkeley's Brandie Nonnecke (who was a panelist, but could have just as easily moderated the panel) quizzing the EU official sent to Silicon Valley, Gerard de Graaf, about how the DSA would actually work in practice, and specifically about our concerns regarding the potential censorial nature of the law. De Graaf insisted, repeatedly, that the DSA is "not a speech regulation," but then he kept coming back around to effectively admitting that it very much is a speech regulation. That is, he said it's not a speech regulation, it's just asking companies to have in place some practices to deal with "bad" speech. And, even as he kept insisting that it wasn't a speech regulation, he admitted that if it didn't lead to less misinformation online, EU officials would be disappointed.

Which... means it's a speech regulation.

Any time either Nonnecke or I tried to pin de Graaf down on exactly what kind of speech was problematic, he did a neat little dance, where he'd pivot from "disinformation" to content that was clearly illegal: child sexual abuse material and other criminal behavior. He would say "of course, everyone agrees that this content is illegal." But the issue is that it's not always so clear which content is illegal, and the DSA clearly targets more than just "illegal" content; it also requires companies to have a variety of processes and plans to deal with legal content that EU officials deem problematic.

My takeaway is that the EU really wants to have it both ways. It wants speech regulation, but it knows it can't call it speech regulation, so instead it's all just winks and nudges, telling companies they need to effectively disappear "unwanted" speech... or something bad might happen to them. I understand the bureaucratic impulses that got the EU to this point, but I still find it extremely dangerous. While it's certainly not the same, it's not all that different from the original version of the Chinese Great Firewall, in which the government wouldn't tell service providers specifically what to censor, but just let it be known that it would be displeased if the service providers got it wrong.

I don't think it's exactly the same here, obviously, and I do think that the EU officials honestly believe that they're the "friendly regulators" who are trying to work in a more cooperative manner with the companies. That is, after all, more of the EU approach to things. Sometimes. But we've certainly seen EU officials suddenly decide that this or that company is somehow bad. Or, sometimes, bad for Europe. And suddenly, while they can tell a sensible narrative about why they're doing what they're doing... it still boils down to "because we don't like you."

So it's great to see the EU-based Jacob Mchangama (former Techdirt podcast guest and author of a wonderful book on the history of free speech) calling out the EU's approach to regulating the internet, and warning that America should not get too enamored with the EU's plan to regulate.

The European law, by contrast, may sound like a godsend to those Americans concerned about social media's weaponization against democracy, tolerance and truth after the 2020 election and the Jan. 6 insurrection. Former Secretary of State Hillary Clinton enthusiastically supported the European clampdown on Big Tech's amplification of what she considers "disinformation and extremism." One columnist in the New Yorker hailed the Digital Services Act as a "road map" for "putting the onus on social-media companies to monitor and remove harmful content, and hit them with big fines if they don't."

But when it comes to regulating speech, good intentions do not necessarily result in desirable outcomes. In fact, there are strong reasons to believe that the law is a cure worse than the disease, likely to result in serious collateral damage to free expression across the EU and anywhere else legislators try to emulate it.

Removing illegal content sounds innocent enough. It's not. "Illegal content" is defined very differently across Europe. In France, protesters have been fined for depicting President Macron as Hitler, and illegal hate speech may encompass offensive humor. Austria and Finland criminalize blasphemy, and in Viktor Orban's Hungary, certain forms of "LGBT propaganda" are banned.

As Mchangama notes, the DSA goes well beyond "illegal" content as well (and even if it didn't, the EU's ever-shifting definition of what is illegal is problematic in its own way):

The Digital Services Act will essentially oblige Big Tech to act as a privatized censor on behalf of governments - censors who will enjoy wide discretion under vague and subjective standards. Add to this the EU's own laws banning Russian propaganda and plans to toughen EU-wide hate speech laws, and you have a wide-ranging, incoherent, multilevel censorship regime operating at scale.

The obligation to assess and mitigate risks relates not only to illegal content, though. Lawful content could also come under review if it has any "actual or foreseeable negative effect" on a number of competing interests, including "fundamental rights," "the protection of public health and minors" or "civic discourse, the electoral processes and public security."

What this laundry list actually means is unclear. What we do know is that the unelected European Commission, the EU's powerful executive arm, will act as a regulator and thus have a decisive say in whether large platforms have done enough to counter both illegal and "harmful" content. You don't have to be a psychic to predict that the commission could use such ill-defined terms to push for suppression of perfectly lawful speech that rubs it - or influential member states - the wrong way.

Mchangama notes, as we have, that Elon Musk's public embrace of the DSA seems wholly at odds with his recent attempts to (misleadingly) call out government involvement in Twitter's content moderation, which is far less heavy-handed than anything in the DSA. Yet Musk (falsely) claims that the US government is violating free speech, while the EU's approach somehow makes sense?

For instance, Thierry Breton, a powerful European commissioner responsible for implementing the Digital Services Act, has already taken aim at Twitter, now run by Elon Musk. Last month, Breton gave Musk an ultimatum: Abide by the new rules or risk getting banned from the EU.

Such moves will only lead to excessive content moderation by other social media companies. Most large platforms already remove a lot of "lawful but awful" speech. But given the legal uncertainty, and the risk of huge fines, platforms are likely to further err on the side of safety and adopt even more restrictive policies than required by the new law. In fact, Musk called the Digital Services Act "very sensible," signaling his intent to comply in response to Breton's warning. This flies in the face of Musk's techno-optimistic commitment to only remove illegal content and his condemnation of Old Twitter's untransparent dealings with politicians and government officials seeking to influence content moderation.

But all of this is why Americans - and American tech companies - really should strongly embrace Section 230. Section 230 is, in many ways, the anti-DSA. Even as a bunch of very ignorant, very foolish people insist that Section 230 was how the US government pressured internet companies to "censor," the opposite is true.

Section 230 gives companies the freedom to moderate how they want, without fear of facing liability or regulatory pressure for their decisions and non-decisions. Take that away, and suddenly lawmakers and bureaucrats - and anyone who can file a lawsuit - gain tremendous power to suppress speech. With 230, the companies get to decide, and if people disagree with those decisions, their recourse is to take their business elsewhere, not to seek legal punishment for the company.

But the DSA approach is vastly different. It starts from a stance that the government needs to be hovering over companies, with the ever-present threat of punishment for making (vaguely described) "bad" decisions. And that, by its very nature, leads to much more widespread actual censorship, because the companies feel compelled to suppress speech to avoid state enforcement and punishment.

Over the past year or so, I've heard (somewhat tragically) that the big American internet companies (beyond Twitter and whatever it is that Musk thinks he's doing), recognizing that they're going to have to live with the DSA in the EU, are becoming more open to importing that model to the US.

They should not. That's not a model the US should want, and it's a model that is dangerous for the American approach to free speech. Section 230 and the 1st Amendment, combined with normal business pressures, provide exactly the right balance: private companies moderate as they see fit, recognizing that allowing too much garbage will often drive away users and business prospects, all without the extremely heavy hand of government (and its questionable motives) telling them what to take down and what to leave up.

The US doesn't need the DSA. We already have a better, more speech-supporting approach.
