DSA Ruling: ExTwitter Must Pay Up For Shadowbanning; Trolls Rejoice
In a stunning display of technocratic incompetence, the EU's Digital Services Act (DSA) has effectively outlawed the very tool that online platforms have relied on for years to combat trolls: shadowbanning. Recent court decisions suggest that the DSA's (possibly?) well-intentioned but misguided Article 17 has created a troll's paradise, leaving websites in an impossible position when trying to deal with bad actors. I will note that the DSA's authors were warned of this in advance by multiple experts, but they either didn't care or didn't listen.
But before we get into the details, we need to take a step back and remind people that the general understanding of shadowbanning changed dramatically five or six years ago. This change mostly happened because a bunch of Trumpists got angry that they weren't getting enough free promotion and decided that was a form of shadowbanning. It's not.
The original concept of "shadowbanning" was a method of dealing with trolls who were only in it to get reactions from other users in forums. People realized that banning them wouldn't work, since they'd just make new accounts and come back. Convincing everyone else not to respond wouldn't work either, because that runs against human nature.
The concept of shadowbanning goes back to some of the earliest parts of the internet. It was a way to deal with those trolls by making the troll think their post had reached the site when, in reality, no other user could actually see it. Just the troll. So the troll thinks they've posted... and that no one is responding. Thus, they don't get their troll dopamine hit and hopefully give up.
However, the key bit here is that the "shadow" part of shadowbanning has to be about the user not knowing they were banned. Otherwise, it's just a ban.
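Here's a minimal sketch of that classic shadowban logic, in hypothetical Python (the names Post, is_shadowbanned, and visible_posts are all made up for illustration; no real forum's code is this simple):

```python
# A minimal sketch of classic shadowban visibility logic -- not any real
# platform's implementation, just an illustration of the concept above.
# All names here (Post, is_shadowbanned, visible_posts) are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def is_shadowbanned(user: str) -> bool:
    # Stand-in for whatever moderation state a real forum would keep.
    return user in {"troll_account"}

def visible_posts(posts: list[Post], viewer: str) -> list[Post]:
    """Return the posts this viewer can see.

    A shadowbanned author still sees their own posts (so they think
    everything posted fine), but nobody else does. If the author had
    to be notified, the whole trick would stop working.
    """
    return [
        p for p in posts
        if not is_shadowbanned(p.author) or p.author == viewer
    ]

posts = [Post("troll_account", "rage bait"), Post("alice", "hello")]
print([p.text for p in visible_posts(posts, "alice")])          # ['hello']
print([p.text for p in visible_posts(posts, "troll_account")])  # sees both
```

The entire point is in that last line: the troll's view is indistinguishable from a working account.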
In 2018, Trumpist folks started complaining that they weren't getting promoted high enough in search results or other algorithms. They misunderstood the nature of "downranking" in an algorithm to be something evil and awful, and (because why understand what things actually are?) declared that to be "shadowbanning."
The term is now so widely used to mean that kind of visibility filtering/algorithmic adjustment that it's effectively meaningless.
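For contrast, the thing the 2018 complainers were actually describing, downranking, looks more like this toy sketch (again, made-up scores and function names; real ranking systems are vastly more complicated). The demoted post is still visible to everyone; it just sorts lower:

```python
# A toy contrast between downranking and shadowbanning -- purely
# illustrative; the penalty factor and rank_feed function are invented.

def rank_feed(posts, penalized_authors):
    """Order posts by score, with penalized authors pushed down.

    Downranked posts remain visible to everyone -- they just rank
    lower. That's "visibility filtering." A true shadowban would
    instead drop the post from everyone else's view entirely (see
    the earlier sketch).
    """
    def score(post):
        base = post["engagement"]
        if post["author"] in penalized_authors:
            base *= 0.1  # demoted, not hidden
        return base

    return sorted(posts, key=score, reverse=True)

feed = [
    {"author": "ranty_account", "engagement": 900},
    {"author": "alice", "engagement": 500},
]
for post in rank_feed(feed, penalized_authors={"ranty_account"}):
    print(post["author"])  # alice first; ranty_account demoted but present
```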
Nonetheless, it's beginning to look like the EU might not allow any kind of "shadowbanning." A couple of months ago, we wrote about a Belgian court punishing Meta for allegedly "shadowbanning" a controversial extremist politician. In that case, the court found that Meta couldn't "justify" the downranking of the politician and argued that the downranking was based on the politician's political views, and profiling someone based on their political views apparently violates the GDPR.
However, TechCrunch recently reported on another, different case, this time in the Netherlands, in which a PhD student, Danny Mekić, took ExTwitter to court for having "visibility filtering" applied to his account without being told about it.
Now, again, some background here is important. Before taking over Twitter, Elon decried the evils of "shadowbanning" at the company. He insisted (incorrectly) that it went against free speech, democracy, and all things good and holy. Indeed, one of the big misleading "Twitter Files" reveals was that the company did what it called "visibility filtering" - which everyone in the Elon realm of alternative facts seemed to forget was something the company had publicly announced, and which was covered in the media, back in 2018.
Hilariously, at the same time Musk was pushing those very Twitter Files that (1) revealed that the company was using the thing it had publicly said it was going to use nearly five years earlier while (2) insisting this was a big, secret revelation of bad behavior... Elon was making use of those very tools to hide accounts he didn't like, such as ElonJet.
Indeed, soon afterwards, Elon (without recognizing any of the irony at all) announced that this "visibility filtering" (what his friends called "shadowbanning") would be a key part of moderation on Twitter.
So, the new Twitter policy was the old Twitter policy, which had been announced in 2018, and which Elon insisted was horrible and had to be "revealed" via a "Twitter Files" dump, and which he had to overpay to buy the company to stop... just to announce that it was now the official policy under his regime.
A few months later, the company announced that it would ramp up that shadowban... er... visibility filtering program, but it promised that it would be transparent about it and let you know:
Restricting the reach of Tweets, also known as visibility filtering, is one of our existing enforcement actions that allows us to move beyond the binary "leave up versus take down" approach to content moderation. However, like other social platforms, we have not historically been transparent when we've taken this action. Starting soon, we will add publicly visible labels to Tweets identified as potentially violating our policies, letting you know we've limited their visibility.
And, indeed, every so often people get slapped with a "publicly visible label" that just seems to make them even angrier.
But Mekić believed his account was visibility filtered without him being notified. According to TechCrunch's summary:
PhD student Danny Mekić took action after he discovered X had applied visibility restrictions to his account in October last year. The company applied restrictions after he had shared a news article about an area of law he was researching, related to the bloc's proposal to scan citizens' private messages for child sexual abuse material (CSAM). X did not notify him that it had shadowbanned his account - which is one of the issues the litigation focused on.
Mekić only noticed his account had been impacted with restrictions when third parties contacted him to say they could no longer see his replies or find his account in search suggestions.
For what it's worth, the company claims it notified him multiple times.
It appears that he then went to the equivalent of a small claims court in the Netherlands to argue that not being told violated the DSA, because the DSA's Article 17 requires that a service provider give users a "statement of reasons" for any "restrictions of the visibility of specific items of information provided by the recipient of the service."
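For reference, here's a rough sketch of what a compliant statement of reasons has to contain, paraphrasing Article 17(3)'s subparagraphs into a hypothetical data structure (the field names are mine, not the law's or any platform's; this is not legal advice). The court's references to "sub a," "sub b," "sub d," and "sub f" quoted later in this post map onto these items:

```python
# A rough, paraphrased sketch of the Article 17(3) DSA requirements for
# a "statement of reasons" -- not any platform's actual schema. It's
# just a way to see why a vague "check our Help Center" email falls short.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StatementOfReasons:
    # (a) what was done: removal, demotion, visibility restriction,
    #     account suspension, etc., plus scope and duration where relevant
    restriction_type: str
    duration: Optional[str]
    # (b) the facts and circumstances relied on for the decision
    facts_and_circumstances: str
    # (c) whether automated means were used in taking the decision
    automated_decision: bool
    # (d) legal ground, if the content is considered illegal
    legal_ground: Optional[str]
    # (e) contractual ground, if it violates the terms of service
    contractual_ground: Optional[str]
    # (f) clear, user-friendly information on redress: internal appeal,
    #     out-of-court dispute settlement, judicial remedies
    redress_options: str
```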
For years, we've pointed out how ridiculous this requirement is. It basically means that sites need to explain to trolls why they removed their content or made it harder to read.
But it could also mean that actual shadowbanning (taking action against a malicious actor without them knowing about it) appears to be effectively outlawed. The court ruling is in Dutch, but the court appears to have sided with Mekić, and basically said that if you shadowban someone, you need to tell them, in fairly great detail, what happened and why. And telling them comes with a bunch of requirements, all of which would undermine shadowbanning. Even though Mekić was notified, the explanation apparently wasn't clear enough for this court.
Which means there is no such thing as shadowbanning anymore.
It's not shadowbanning if it's not in the "shadow." If you have to tell the shadowbanned about the shadowban, it's no longer shadowbanning. It's just a "hey, troll, come yell at me" notice.
From a translation of the ruling:
Contrary to Twitter's argument, the subdistrict court judge considers that the restrictions imposed by it on [the applicant] fall under a restriction of visibility as referred to in Article 17, paragraph 1a, DSA. It has remained undisputed that the 64 million users of X in Europe were able to see [the applicant]'s messages only in a reduced manner, which clearly constitutes a restriction within the meaning of that provision. The fact that not all 64 million users searched for [the applicant]'s account during that period does not alter this. Twitter's argument that there was no restriction of the visibility of specific information provided by [the applicant], but rather a restriction of his entire account, is not followed either. After all, the greater (the reduced visibility of the entire account) entails the lesser (the reduced visibility of the specific information).
In the ruling itself, the story seems even worse because ExTwitter did, in fact, give the guy an explanation. But the judge says it wasn't specific enough.
According to Twitter, it provided three messages to [the applicant] about the measure, on 15 October 2023, 14 November 2023 and 12 January 2024, thereby meeting the requirements of Article 17 DSA. [The applicant] first contested that he received the message of 14 November 2023. Since Twitter has not provided any evidence that this message reached [the applicant] and has also not made any concrete offer of evidence, this message will be disregarded in the assessment. However, even if [the applicant] had received this message, it does not comply with Article 17 DSA, since this message is formulated far too generally and does not contain any specific information referred to in Article 17 DSA.
The other two messages do not comply with the provisions of Article 17 paragraph 3 DSA. The email message of 15 October 2023 does not contain any information as referred to in Article 17 DSA. [Applicant] cannot infer from this message that a measure has been taken and which measure has been taken (sub a), why a possible measure would have been taken and which facts and circumstances are involved (sub b). Nor is anything stated about the legal basis (see sub d). Finally, the information referred to in sub f is also missing. The mere reference to the Help Center in the email cannot be regarded as such a notification. This email therefore does not meet the requirements of Article 17 paragraph 3 or paragraph 4 DSA. This information is not clear or easy to understand, and in any case not such that [Applicant] can exercise any rights of recourse that may be due to him.
The message from Twitter of 12 January 2024 is also not fully compliant. That message also does not contain any information as referred to under sub f. It does otherwise state that there was a temporary restriction, although the extent of that restriction is not stated. It also states that a few days later X lifted the temporary restriction on [applicant]'s account. Although a specific date is missing, [applicant] could at least infer from this that these restrictions no longer applied on 12 January 2024.
This is just a "small claims" dispute, so I'm guessing it has little to no precedential value. But combined with that other ruling in Belgium, and the text of Article 17 itself, this is going to create a freaking field day for trolls in the EU.
So... the tool that has been used for decades, mainly to deal with trolls, is now basically unusable. If you take an action in the EU against a troll, you have to tell them about it. This bit of the law was clearly written by people who have never, ever had to deal with trolls. Because trolls will absolutely love this feature. They can whine incessantly and threaten legal process if you don't give them a clear statement (which they can then argue with) regarding what you did to them and why.
I know that people (especially in the EU) complain that my coverage of the DSA is unfair. But when you get results like this, what else am I supposed to say about the DSA? Sections like Article 17 are designed for a world where everyone is acting in good faith. In doing so, they empower the trolls and harm the ability of websites to deal with them.