
The Kids And Their Algo Speak Show Why All Your Easy Fixes To Content Moderation Questions Are Wrong

by Mike Masnick, from Techdirt on (#5Y67W)

Last month at SXSW, I was talking to someone who explained that they kept seeing people use the term "unalive" on TikTok as a way of getting around the automated content moderation filter that would downrank or block posts that included the word "dead," out of a fear of what that video might be talking about. Another person in the conversation suggested that I should write an article about all the ways in which the "kids these days" figure out how to get around filters. I thought it was a good idea, but did absolutely nothing with it. Thankfully, Taylor Lorenz, now of the Washington Post, is much more resourceful than I am, and went ahead and wrote the article that had been suggested to me - and it's really, really good.

The article is framed around how algospeak is "changing our language" as people (usually kids) look to get around moderation tools and filters, which are usually (but not always) automated.

Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it's common to say "unalive" rather than "dead," "SA" instead of "sexual assault," or "spicy eggplant" instead of "vibrator."

There are some pretty amusing examples of this:

When the pandemic broke out, people on TikTok and other apps began referring to it as the "Backstreet Boys reunion tour" or calling it the "panini" or "panda express" as platforms down-ranked videos mentioning the pandemic by name in an effort to combat misinformation. When young people began to discuss struggling with mental health, they talked about becoming "unalive" in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as "accountants" and use the corn emoji as a substitute for the word "porn."
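To get a sense of why this cat-and-mouse game is so easy for users to win, here's a minimal sketch (in Python, using hypothetical names like BLOCKLIST and is_flagged) of the kind of naive keyword filter these substitutions are designed to defeat - not how any particular platform actually works, just an illustration of why swapping one word for another is enough to dodge a blunt filter:

```python
# A toy keyword blocklist of the sort these workarounds are designed to defeat.
# (Purely illustrative: real moderation systems are far more sophisticated,
# and their actual rules are not public.)
BLOCKLIST = {"dead", "suicide", "sexual assault", "vaccine"}

def is_flagged(post: str) -> bool:
    """Return True if the post contains any blocklisted term."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

# Posts using the flagged terms get caught...
print(is_flagged("I feel dead inside"))           # True
# ...while simple substitutions slip straight through.
print(is_flagged("I feel unalive inside"))        # False
print(is_flagged("Join our dance party group!"))  # False
```

Real systems layer classifiers and human review on top of lists like this, but the basic dynamic is the same: the filter keys on surface features of the text, and the community simply routes around them.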

But, really, the article highlights something that we've been talking about for ages: the belief that you can just deal with large societal problems through content moderation and filters is silly. In the examples quoted above, you can get a sense of some of the issues around discussions of suicide and sex work.

A few months ago, we wrote about how the NY Times (almost single-handedly) kicked up a hugely overblown moral panic about an online forum where people discuss suicide. As we noted in that article, the only reason that forum existed in the first place was that a freak-out over people discussing suicide on a Reddit forum pressured that company into closing it down - leading people to create this separate forum in a darker part of the internet, where it's more difficult to monitor.

But, between that and the article on algospeak, people should start to realize that, whether we like it or not, some people are going to want to talk about suicide. Hiding or shutting down all such forums isn't going to help if people really want to talk. They're going to find a place to go and a way to have that conversation. Rather than denying the idea that anyone should ever discuss suicide, shouldn't we be setting up safer places for them to do so?

Also, as Lorenz's article makes clear, contrary to the claims of aggrieved Trumpists who insist that content moderation only targets conservatives, the people who rely on algospeak to get around filters are often the more marginalized folks, who are seeking like-minded people to talk to, or who are feeling shunned and attacked:

Black and trans users, and those from other marginalized communities, often use algospeak to discuss the oppression they face, swapping out words for "white" or "racist." Some are too nervous to utter the word "white" at all and simply hold their palm toward the camera to signify White people.

As we've discussed in our content moderation case study series, victims of racism have long found it difficult to talk about their experiences without getting moderated for racism. The fact that they have to resort to these types of tactics again shows (1) that, yes, lots of people are impacted by moderation, and (2) that people are increasingly forced to find workarounds to bad moderation policies.

Of course, this works in all directions as well:

Last year, anti-vaccine groups on Facebook began changing their names to "dance party" or "dinner party" and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as "swimmers."

The article even points to an entire site, called Zuck Got Me, highlighting content and memes that Instagram now filters.

Either way, as Lorenz points out in her piece, none of this is to say that all moderation is ineffective or that it doesn't make sense to moderate - because, as we've explained over and over again, some level of moderation is always necessary for any community. However, it does highlight how lots and lots of people get caught in the impossible-to-do-well nature of moderation tools, and that expecting content moderation to fix underlying social problems is a fool's errand.
