
Top Internet Companies Agree To Vague Notice & Takedown Rules For 'Hate Speech' In The EU

by
Mike Masnick
from Techdirt
It's easy to say that "hate speech" is bad and that we, as a society, shouldn't tolerate it. But reality is a lot more complicated than that, which is why we're concerned about various attempts to ban or stifle "hate speech." In the US, contrary to what many believe, "hate" speech is still protected speech under the First Amendment. In Europe, that's often not the case, and hate speech bans are more common. But, as we've noted, while it seems like a no-brainer to be against hate speech, the vagueness in what counts as "hate speech" allows that term to be expanded over and over again, such that laws against hate speech are now regularly used by governments to censor the public for saying things the government doesn't like.

So consider me quite concerned about the news out of the EU that the EU Commission has convinced all the big internet platform companies -- Google, Facebook, Twitter and Microsoft -- to agree to remove "hate speech" within 24 hours:
Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.

The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.

In addition to the above, the IT Companies to educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines. The use of the notification system could be used as a tool to do this.
In other words, it sounds a lot like these companies have agreed to a DMCA-like notice-and-takedown regime for handling "hate speech." Let's be clear here: this will be abused, and it will be abused widely. That's what happens when you give individuals the ability to remove content from platforms. Obviously, these companies are private companies and can set whatever policies they want on keeping up or removing content, but when they come to an agreement with the EU Commission about what they'll remove and how quickly, reasonable concerns should be raised about how this will work in practice, what definitions will be used to determine "hate speech," what kinds of appeals processes there will be, and more. And none of that is particularly clear.

And, of course, very few people will raise these issues upfront because no one wants to be seen as being in favor of hate speech. And that's the real problem. It's easy to create rules for censorship by saying it's just about "hate speech," since almost no one will stand up and complain about that. But that opens up the door to all sorts of abuse -- both in how "hate speech" is defined and in how the companies will actually handle the implementation. Two major human rights groups -- EDRi and Access Now -- have already withdrawn from the EU Commission forum discussing all of this, in protest of how these rules were put together:
Today, on 31 May, European Digital Rights (EDRi) and Access Now delivered a joint statement on the EU Commission's "EU Internet Forum", announcing our decision not to take part in future discussions and confirming that we do not have confidence in the ill considered "code of conduct" that was agreed.
Their main concern was that the whole thing was set up directly between the EU Commission and the internet companies behind closed doors -- and when you're talking about issues that impact human rights and freedom of expression, that needs to be done openly and transparently.
In short, the "code of conduct" downgrades the law to a second-class status, behind the "leading role" of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism.
I recognize why many people may cheer on this move, thinking that it's a way to stop "bad stuff" from happening online, but beware the actual consequences of setting up an opaque process with a vague standard for pressuring platforms to censor content based on notices from angry people. If you don't think this will be abused in dangerous ways, you haven't been paying attention to the last two decades on the internet.
