
Lessons From Making Internet Companies Liable For Users' Speech: You Get Less Speech, Less Security And Less Innovation

by
Mike Masnick
from Techdirt on (#3SH75)

Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had on the podcast a few times as well). She's just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all. That's a silly idea that no one with any real understanding of these issues takes seriously. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It's actually kind of hard to see what the benefit side is on this particular cost-benefit equation.

As the paper notes, it's quite notable how the demands people make of platforms keep changing. Some keep demanding that certain content be removed, while others freak out that too much content is being removed. And sometimes it's the same people (they want the "bad" stuff -- i.e., stuff they don't like -- removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the questions behind why certain content may get taken down are the same questions that courts spend long and complex cases working through, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so:

As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people's standards.

Keller then looked at a variety of examples involving intermediary liability to see what the evidence says would happen if we legally delegate private internet platforms into the role of speech police. It doesn't look good. Free speech will suffer greatly:

The first cost of strict platform removal obligations is to internet users' free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms' incentives to err on the side of taking things down. Germany's new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove "obviously" unlawful content within twenty-four hours' notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed.

From there, the paper explores the issue of security. Attempts to stifle terrorists' use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. And the paper looks at what the real-world impact of these programs has been in the realm of trying to "counter violent extremism."

The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts.

The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside -- preventing terrorist attacks -- is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go "faster and further" in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence -- not just online speech -- need to factor in these unintended consequences if they are to set wise policies.

Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative effects.

The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like "How Law Made Silicon Valley" point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms -- which, in the age of Yelp, Grubhub, and eBay, could be almost any business. Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors.

The entire paper weighs in at a neat 44 pages and it's chock full of useful information and analysis on this very important question. It should be required reading for anyone who thinks that there are easy answers to the question of what to do about "bad" content online. It highlights that we already have a lot of data and evidence to answer questions that many legislators are instead approaching based on how they "think" the world works, rather than how it actually works.

Current attitudes toward intermediary liability, particularly in Europe, verge on "regulate first, ask questions later." I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas -- from competition to privacy to free expression -- but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them.

Demands for platforms to get rid of all content in a particular category, such as "extremism," do not translate to meaningful policy making -- unless the policy is a shotgun approach to online speech, taking down the good with the bad. To "go further and faster" in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them.

On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability.

This is an important contribution to the discussion, and highly recommended. Go check it out.


