
Even The Most Well-Meaning Internet Regulations Can Cause Real Harm

by
Mike Masnick
from Techdirt on (#6KQ6G)

Here's how people advocating for internet regulations for "bad speech" think things will work: an enlightened group of pure-minded, thoughtful individuals will carefully outlaw dangerous speech that incites hatred or encourages bad behavior. Then, that speech will magically and cleanly disappear from the internet, and the internet will be a better place.

What could go wrong? Turns out, pretty much everything.

Here's how such things work in reality: laws are written that are broadly applicable to appease a variety of interests. Then those in power get upset about some sort of speech. And, even if they're well-meaning politicians, they realize, 'surely, what's the harm in using these laws to stop this kind of speech?' And that's how hate speech laws are used to jail French citizens for comparing their President to Hitler. Or to force a website to take down speech calling Germany's Justice Minister "an idiot." Or to jail critics of the government. Or how there's a push to make criticizing the police into hate speech. The list goes on and on.

That's not to say those creating the laws aren't well-intentioned. Some of them really are. Many people keep telling me that we need to outlaw bad speech online or to force websites to be "responsible" for the bad speech online.

And I keep trying to point out that the well-intentioned people calling for such laws are unlikely to be the people determining what is "good speech" and what is "bad speech." Over and over again, the people calling for such laws end up disliking the choices made by those who actually get to decide how those laws are used.

Laws to crack down on "bad" speech, even well-intentioned ones, are open to very serious abuse. It is important to think about this, because we also see plenty of laws that are not well-intentioned, and we criticize those all the time. But understanding how even well-intentioned laws can go wrong is equally important, to avoid enabling dangerous abuses of power.

That brings me to a recent piece in Foreign Affairs by David Kaye, former UN Special Rapporteur on freedom of expression, about the risks of internet regulation. We talked about this article a bit on last week's Ctrl-Alt-Speech podcast, but I wanted to write some more about it. Kaye, who also wrote the excellent book "Speech Police: The Global Struggle to Govern the Internet," is not some random person writing about this. He's spent years writing and thinking about these issues, and his piece in Foreign Affairs should stand as a warning to those rushing in to regulate the internet.

In response to public pressure to clean up the Internet, policymakers in Brussels, London, Washington, and beyond are following a path that, in the wrong hands, could lead to censorship and abuse. Some in Brussels, including in EU institutions and civil society, speak of an "Orban test," according to which lawmakers should ask themselves whether they would be comfortable if legislation were enforced by Hungary's authoritarian and censorial Prime Minister Viktor Orban, or someone like him. This is a smart way to look at things, particularly for those in the United States concerned about the possibility of another term for former U.S. President Donald Trump (who famously, and chillingly, referred to independent media as "enemies of the people"). Rather than expanding government control over Internet speech, policymakers should focus on the kinds of steps that could genuinely promote a better Internet.

As the opening suggests, Kaye's piece details efforts in the US, EU, and the UK to regulate the internet. It highlights how each has a very real risk of going wrong in the wrong hands, even when done "thoughtfully."

At the heart of Brussels' approach to online content is the Digital Services Act (DSA). When negotiations over the DSA concluded in April 2022, European Commission Executive Vice President Margrethe Vestager exulted that "democracy is back." For Vestager and her allies, the DSA asserts the EU's public authority over private platforms. It restates existing EU rules that require platforms to take down illegal content when they are notified of its existence. In its detailed bureaucratic way, the DSA also goes further, seeking to establish how the platforms should deal with speech that, though objectionable, is not illegal. This category includes disinformation, threats to "civic discourse and electoral processes," most content deemed harmful to children, and many forms of hate speech. The DSA disclaims specific directives to the companies. It does not require, for instance, the removal of disinformation or legal content harmful to children. Instead, it requires the largest platforms and search engines to introduce transparent due diligence and reporting. Such a step would give the Commission oversight power to evaluate whether these companies are posing systemic risks to the public.

Politicization, however, threatens the DSA's careful approach, a concern that emerged soon after Hamas's October 7 terrorist attacks on Israel. Posts glorifying Hamas or, conversely, promising a brutal Israeli vengeance immediately began circulating online. Thierry Breton, the European commissioner responsible for implementing the DSA, saw an opportunity and, three days after the attacks, sent a letter to X CEO Elon Musk and then to Meta, TikTok, and YouTube. "Following the terrorist attacks carried out by Hamas against Israel," Breton wrote to Musk, "we have indications that your platform is being used to disseminate illegal content and disinformation in the EU." He urged the platforms to ensure that they had in place mechanisms to address "manifestly false or misleading information" and requested a "prompt, accurate and complete response" to the letters within 24 hours. Breton gave the impression that he was acting in accordance with the DSA, but he went much further, taking on a bullying approach that seemed to presuppose that the platforms were enabling illegal speech. In fact, the DSA authorizes Commission action only after careful, technical review.

[....]

Breton showed that the DSA's careful bureaucratic design can be abused for political purposes. This is not an idle concern. Last July, during riots in France following the police shooting of a youth, Breton also threatened to use the DSA against social media platforms if they continued to post "hateful content." He said that the European Commission could impose a fine and even "ban the operation [of the platforms] on our territory," which are steps beyond his authority and outside the scope of the DSA.

European legal norms and judicial authorities, and the commission rank-and-file's commitment to a successful DSA, may check the potential for political abuse. But this status quo may not last. It is possible that June's European Parliament elections will tilt leadership in directions hostile to freedom of expression online. New commissioners could take lessons from Breton's political approach to DSA enforcement and issue new threats to the social media companies. Indeed, Breton's actions may have legitimized politicization in ways that could be used to limit public debate, rather than going through the careful, if technical, approaches of DSA risk assessment, researcher access, and transparency.

And, indeed, the same day that Kaye's piece came out, a piece in the Financial Times also came out, talking about how the EU is pushing out a series of "hastily crafted guidelines" for how social media must handle "election disinformation," in the run-up to June's EU elections, which will be enforced under the DSA.

While the guidelines are broadly drafted, the code is legally enforceable as part of the Digital Services Act, a core piece of legislation aimed at setting the rules on how Big Tech should police the internet.

"Social media platforms need to show that they are complying or explain what else they are doing to mitigate risks," said one EU official. "If they don't explain, we issue a fine."

Once again, this shows how the DSA is very much structured to be a tool that can be used to suppress speech. I keep pointing this out, and EU officials keep insisting it's not... even as they promulgate rules that are clearly designed to suppress speech.

Kaye's piece also talks about the UK's Online Safety Act, which we've also discussed quite a bit.

One concern is that the UK legislation defines content harmful to children so broadly that it could cause companies to block legitimate health information, such as that related to gender identity or reproductive health, that is critical to childhood development and those who study it. Moreover, the act requires companies to conduct age verification, a difficult process that may oblige a user to present some form of official identification or age assurance, perhaps by using biometric measures. This is a complicated area involving a range of approaches that will have to be the focus of Ofcom's attention since the act does not specify how companies should enforce this. But, as the French data privacy regulator has found, age verification and assurance schemes pose serious privacy concerns for all users, since they typically require personal data and enable tracking of online activity. These schemes also often fail to meet their objectives, instead posing new barriers to access to information for everyone, not just children.

The Online Safety Act gives Ofcom the authority to require a social media platform to identify and swiftly remove publicly posted terrorist or child sexual abuse content. This is not controversial, since such material should not be anywhere on the Internet; child sexual abuse content in particular is vile and illegal, and there are public tools designed to facilitate its detection, investigation, and removal. But the act also gives Ofcom the authority to order companies to apply technology to scan private, user-to-user content for child sexual abuse material. It sounds legitimate, but doing so would require monitoring private communications, at the risk of disrupting the encryption that is fundamental to Internet security generally. If required, it would open the door to the type of monitoring that would be precisely the tool authoritarians would like in order to gain access to dissident communications. The potential for such interference with digital security is so serious that the heads of Signal and WhatsApp, the world's leading encrypted messaging services, indicated that they would leave the British market if the provision were to be enforced. For them, and those who use the services, encryption is a guarantee of privacy and security, particularly in the face of criminal hacking and interference by authoritarian governments. Without encryption, all communications would be potentially subject to snooping. So far, it seems that Ofcom is steering clear of such demands. Yet the provision stands, leaving many uncertain about the future of digital security in the UK.

Nor is the US spared (though I doubt anyone would consider the US's approaches to online regulation to be carefully considered or thoughtful).

Yet at its core, KOSA regards the Internet as a threat from which young people ought to be protected. The bill does not develop a theory for how an Internet for children, with its vast access to information, can be promoted, supported, and safeguarded. As such, critics including the Electronic Frontier Foundation, the American Civil Liberties Union, and many advocates for LGBTQI communities still rightly argue that KOSA could undermine broader rights to expression, access to information, and privacy. For example, the bill would require platforms to take reasonable steps to prevent or mitigate a range of harms, pushing them to filter content that could be said to harm minors. The threat of litigation would be ever present as an incentive for companies to take down even lawful, if awful, content. This could be mitigated if enforcement were in the hands of a trustworthy, neutral body that, like Ofcom, is independent. But KOSA places enforcement not only in the hands of the Federal Trade Commission but also, for some provisions, of state attorneys general, elected officials who have become increasingly partisan in national political debates in recent years. Thus, it will be politicians in each state who could wield power over KOSA's enforcement. When [Senator Marsha] Blackburn said that her bill pursued the goal of protecting minor children from "the transgender in this culture," she was not reassuring those fearing politicized implementation.

There's much more in the piece as well, but you get the idea. Each of these laws has very real risks of serious harm. And that's true even when the law is more carefully thought out. Last week, Tim Cushing wrote here about some of the problems of Canada's similar attempt to regulate the internet, and we're seeing ever greater concern there. The Globe and Mail recently ran a column by Andrew Coyne also calling out problems with the bill, even after noting how this was supposed to be "the 'good' bill" that carefully addressed what seemed like real problems.

And yet:

It soon became clear, however, that there was much more to the bill than just that. And the more closely it was examined, the worse it appeared.

Most obviously out of bounds are a suite of amendments to the Criminal Code. Any attempt to criminalize speech ought to be viewed with extreme suspicion, and kept to the narrowest possible grounds. The onus should always be on the state to prove the necessity of any exception to the general rule of free speech - to prove not merely that the speech is objectionable or offensive, but demonstrably harmful.

[....]

The most remarkable part of this is the timing. At the very moment when everyone and his dog is accusing someone else of genocide, or of promoting it - as Israel's defenders say of Hamas's supporters, as the Palestinians' say of Israel's, as Ukraine's say of Russia's - the government proposes that the penalty for being on the losing side of such controversies should be life in prison? I have my views on these questions, and you have yours, but I would not throw you in jail for your opinions, and I hope you would not do the same to me - not for five years, and certainly not for life.

Hardly better is the proposal to create a new hate crime - that is, for acts motivated by hatred. Whether the state should be punishing people for their motives, rather than for their crimes, is perhaps too rarefied a debate: We take motives into account, for example, with regard to crimes committed in self-defence. And hatred has long been considered an aggravating factor at sentencing.

But the new proposal is to set up a whole separate category for crimes motivated by hatred. Well, not just crimes. The new crime would apply not only to offences under the Criminal Code but "any other Act of Parliament." Got that? It doesn't matter how obscure or trivial the law: anyone who breaks it for reasons of hate would be guilty of a crime. And the punishment? Once again, up to life imprisonment.

As Kaye stated at the top of his article, run such laws through the "Orban test" or the "Trump test." Or, if you somehow like Orban or Trump, run them through the "Biden test."

These laws can be abused. They can suppress all kinds of speech. This doesn't mean that there aren't bad things online. This doesn't mean that social media companies have your best interests at heart. This doesn't mean that there aren't other regulations that might be useful. But if you expect your government to regulate speech online, recognize the many, many ways in which it will be abused.
