Both Things Can Be True: Meta Can Be Evil AND It’s Unlikely That The Company Deliberately Blocked A Mildly Negative Article About It

by
Mike Masnick
from Techdirt

Truth matters. Even if it's inconvenient for your narrative. I'm going to do a question and answer style post, because I want to address a bunch of questions that came up on this story over the weekend, but let's start here.

So what happened?

Last Thursday, the Kansas Reflector, a small local news non-profit in (you guessed it) Kansas, published an interesting piece regarding the intersection of Facebook and local news. The crux of the story was that Facebook makes it next to impossible to talk about anything related to climate change without having it blocked or its visibility limited.

The piece is perhaps a bit conspiratorial, but not that crazy. What's almost certainly happening is that Meta has decided to simply limit the visibility of such content because so many people constantly slam Meta for supporting one side or the other of culture war issues, including climate change. It's a dumb move, but it's the kind of thing Meta will often do: hamfisted, reactive, stupid-in-the-light-of-day management of some issue that is getting the company yelled at. The trigger is not the criticism of Meta, but the hot-button nature of the topic.

But then, Meta made things worse (a Meta specialty). Later in the day, it blocked all links to the Kansas Reflector from all Meta properties.

This morning, sometime between 8:20 and 8:50 a.m. Thursday, Facebook removed all posts linking to Kansas Reflector's website.

This move not only affected Kansas Reflector's Facebook page, where we link to nearly every story we publish, but the pages of everyone who has ever shared a story from us.

That's the short version of the virtual earthquake that has shaken our readers. We've been flooded with instant messages and emails from readers asking what happened, how they can help and why the platform now falsely claims we're a cybersecurity risk.

As the story started to Streisand its way around the internet and had people asking what Meta was afraid of, the company eventually turned links back on to most of the Kansas Reflector site. But not all. Links to that original story were still banned. And, of course, conspiracy theories remained.

Meta's comms boss, Andy Stone, came out to say that it was all just a mistake and had nothing to do with the Reflector's critical article about Meta:

[Screenshot of Andy Stone's post]

And, again, it felt like there was a decent chance that this was actually true. Mark Zuckerberg is not sitting in his office worrying about a marginally negative article from a small Kansas non-profit. Neither are people lower down the ranks of Meta. That's just not how it works. There isn't some guy on the content moderation line thinking "I know, Mark must hate this story, so I'll block it!"

It likely had more to do with the underlying topic (the political hot potato of "climate change") than the criticism of Meta, combined with a broken classifier that accidentally triggered a "this is a dangerous site" flag for whatever reason.

Then things got even dumber on Friday. Reporter Marisa Kabas reposted the original Reflector article on her site, The Handbasket. She could do this as the Reflector nicely publishes its work under a Creative Commons CC BY-NC-ND 4.0 license.

And then Marisa discovered that links to that article were also blocked across Meta (I personally tried to share the link to her article on Threads and had it blocked as "not allowed.").

Soon after that, blogger Charles Johnson noticed that his site was also blocked by Meta as malware, almost certainly because a commenter linked to the original Kansas Reflector story. Eventually, his site was unblocked on Sunday.

Instagram and Threads boss Adam Mosseri showed up in somewhat random replies to people (often not those directly impacted) and claimed that it was a series of mistakes:

[Screenshot of Adam Mosseri's replies]

What likely actually happened?

Like all big sites, Meta uses some automated tools to try to catch and block malicious sites before they spread far and wide. If they didn't, you'd rightly be complaining that Meta doesn't do the most basic things to protect its users from malicious sites.

Sometimes (more frequently than you would believe, given the scale) those systems make errors. Those errors can be false negatives (letting through dangerous sites that they shouldn't) and false positives (blocking sites that shouldn't be blocked). Both types of errors happen way more than you'd like, and if you tweak the dials to lessen one of them, you almost certainly end up with a ton of the other. It's the nature of the beast. Being more accurate in one direction means being less accurate in the other.
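To make that tradeoff concrete, here's a purely illustrative sketch (made-up sites, scores, and thresholds; nothing here reflects Meta's actual systems) of how moving a single threshold on a classifier's risk score trades one kind of error for the other:

```python
# Purely illustrative: a toy "dangerous site" classifier threshold.
# All sites, scores, and thresholds are invented; this is not Meta's real system.

# Hypothetical risk scores (0.0 = clearly safe, 1.0 = clearly malicious)
sites = [
    ("phishing-login.example",   0.97, "malicious"),
    ("crypto-scam.example",      0.66, "malicious"),   # a scam site with a middling score
    ("local-news.example",       0.72, "legitimate"),  # an unlucky news site with a high score
    ("sketchy-but-safe.example", 0.40, "legitimate"),
    ("big-retailer.example",     0.05, "legitimate"),
]

def evaluate(threshold: float) -> None:
    """Block anything scoring at or above the threshold and count both error types."""
    false_positives = sum(1 for _, score, label in sites
                          if score >= threshold and label == "legitimate")
    false_negatives = sum(1 for _, score, label in sites
                          if score < threshold and label == "malicious")
    print(f"threshold={threshold:.2f}  "
          f"false positives={false_positives}  false negatives={false_negatives}")

# Tweak the dial in either direction: fewer missed scams means more wrongly
# blocked legitimate sites, and vice versa.
for t in (0.50, 0.70, 0.90):
    evaluate(t)
```

Because the score distributions of good and bad sites overlap, no setting of that dial eliminates both error types at once; you only get to choose which kind of mistake you'd rather live with.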

So, what almost certainly happened here was that there was some set of words or links or something on the Kansas Reflector story that tripped an alarm on a Meta classifier saying "this site is likely dangerous."

This alarm is likely triggered thousands or tens of thousands of times every single day. Most of these triggers are never reviewed, because most of them never become newsworthy. In many cases, site owners never even learn that their websites have been barred by Meta, because no one notices.

Everything afterward stems from that one mistake. The automated trigger threshold was passed, and the Reflector got blocked because Meta's systems gave it a probabilistic score suggesting the site was dangerous. Most likely, no human at Meta even read the article before all this, and if anyone had, they would most likely not have cared about the mild criticism (far milder than tons of stories out there).

If you're explaining why Meta did something out of garden-variety incompetence, rather than malicious evil, doesn't that make you a Meta bootlicker?

Meta is a horrible company that has a horrible track record. It has done some truly terrible shit. If you want to read just how terrible the company is, read Erin Kissane's writeup of what happened in Myanmar. Or read about how the company told users to download a VPN, Onavo, that was actually breaking encryption and spying on how users used other apps to send that data back to Meta. Or read about how Meta basically served up the open internet on a platter for Congress to slaughter, because they knew it would harm competition.

The list goes on and on. Meta is not a good company. I try to use their products as little as possible, and I'd like to live in a world where no one feels the need to use any of their products.

But truth matters.

And we shouldn't accept a narrative as true just because it confirms our priors. That seems to have happened over the past few days regarding a broken content moderation decision that caused a negative news story about Meta to briefly be blocked from being shared across Meta properties.

It looks bad. It sounds bad. And Meta is a terrible company. So it's very easy to jump to the conclusion that of course it was done on purpose. The problem is that there is a much more benign explanation that is also much more likely.

And this matters, because when you're so trigger-happy to insist that the mustache-twirling version of everything must be true, it actually makes it that much harder to truly hold Meta to account for its many sins. It makes it that much easier for Meta (and others) to dismiss your arguments as coming from a conspiracy theorist, rather than someone who has an actual point.

But what about those other sites? Isn't the fact that it spread to other sites posting the same story proof of nefariousness?

Again, what happened there also likely stemmed from that first mistake. Once the system is triggered, it's also probably looking for similar sites or sites trying to get around the block. So, when Kabas reposted the Reflector text, another automated system almost certainly just saw it as "here's a site copying the other bad site, so it's an attempt to get around the block." Same with Johnson's site, where it likely saw the link to the "bad" site as an attempt to route around the block.

Even after Meta was made aware of the initial error, the follow-on activities would quite likely continue automatically as the systems just did their thing.
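To make that follow-on blocking idea concrete, here's a minimal sketch of one plausible way a block-evasion check could work, assuming a crude content-similarity fingerprint plus a link check. The URLs, function names, and thresholds are all invented for illustration; this is not Meta's actual implementation.

```python
# Illustrative sketch only: one plausible way "block evasion" detection might work.
# All names, URLs, and thresholds are invented; this is not Meta's implementation.
import re

# A URL that has already been flagged, plus a toy fingerprint of its article text
blocked_urls = {"kansasreflector.example/facebook-story"}
blocked_fingerprints = [{"facebook", "climate", "posts", "kansas", "censoring"}]

def fingerprint(text: str) -> set:
    """Crude content fingerprint: the set of distinct lowercase words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def looks_like_evasion(text: str, outbound_links: list) -> bool:
    """Flag a page that links to a blocked URL or closely matches blocked content."""
    # Case 1: the page links to an already-blocked URL (e.g., a blog comment)
    if any(link in blocked_urls for link in outbound_links):
        return True
    # Case 2: the page's text heavily overlaps a blocked article (e.g., a repost)
    words = fingerprint(text)
    for blocked in blocked_fingerprints:
        if len(words & blocked) / len(blocked) >= 0.8:  # arbitrary similarity cutoff
            return True
    return False

# A faithful repost of the blocked article trips the similarity check...
print(looks_like_evasion("Facebook is censoring posts about climate change, Kansas outlet says", []))
# ...and a page that merely links to the blocked URL trips the link check.
print(looks_like_evasion("Interesting read", ["kansasreflector.example/facebook-story"]))
```

The point of the sketch is simply that once the first classification error is in the system, copies of and links to the "bad" content get swept up mechanically, with no one deciding anything about the article itself.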

But Meta is evil!

Yeah, we covered this already. Even if that's true... especially if that's true, we should strive to be accurate in our criticism of the company. Every overreaction undermines the arguments for the very real things that the company has done wrong, and that it continues to do wrong.

It allows the company to point to an error someone made in describing what they've done wrong here and use it to dismiss their more accurate criticism for other things.

But Meta has a history of lying and there's no reason to give it the benefit of the doubt!

Indeed. But I'm not giving the benefit of the doubt to Meta here. Having spent years and years covering not just social media content moderation fuckups, but literally the larger history of content moderation fuckups, there are some pretty obvious things that suggest this was garden variety technical incompetence, found in every system, rather than malicious intent to block an article.

First, as noted, these kinds of mistakes happen all the time. Sooner or later, one is going to hit an article critical of Meta. It reminds me of the time that Google threatened Techdirt because it said comments on an article violated its terms of service. It just so happened that that article was critical of Google. I didn't go on a rampage saying Google was trying to censor me because of my article that was critical of Google. Because I knew Google made that type of error a few times a month, sending us bogus threat notices over comments.

It happens.

And Meta has always allowed tons of way worse stories, including the Erin Kissane story above.

On top of that, the people at Meta know full well that if they were actually trying to block a critical news story, it would totally backfire and the story would Streisand all over the place (as this one did).

Also, Meta has a tell: if they were really doing something nefarious on this, they'd have a slick, full-court press response ready to go. It wouldn't be a few random Adam Mosseri social media posts going "oh shit, we fucked up, we're fixing now..."

But it's just too big of a coincidence, since this is over a negative story!

Again, there are way, way, way worse stories about Meta out there that haven't been blocked. This story wasn't even that bad. And no one at Meta is concerned about a marginally negative opinion piece in the Kansas Reflector.

When mistakes are made as often as they are at this kind of scale (again, likely thousands of mistakes a day), eventually one of them is going to be over an article critical of Meta. It is most likely a coincidence.

But if this is actually a coincidence and it happens all the time, how come we don't hear about it every day?

As someone who writes about this stuff, I do tend to hear similar stories nearly every day. But most of them never get covered because it's just not that interesting. "Automated harmful site classifier wrong yet again" isn't news. But even then, I do still write about tons of content moderation fuckups that fit into this kind of pattern.

Why didn't Meta come out and explain all this if it was really no big deal?

I mean, they kinda did. Two different execs posted that it was a mistake and that they were looking into it, and some of those posts came over a weekend. It took a few days, but it appears that most of the blocked links that I was aware of earlier have been allowed again.

But shouldn't they have a full, clear, and transparent explanation for what happened?

Again, if they had all that ready to go by now, honestly, I'd think they were up to no good. Because they only have those packages ready to go when they know they're doing something bad and need to be ready to counter it. In this case, their response is very much of the nature of "blech, classifier made a mistake again... someone give it a kick."

And don't expect a fully transparent explanation, because these systems are actually doing a lot to protect people from truly nefarious shit. Giving a fully transparent explanation of how that system works, where it goes wrong, and how it was changed might also be super useful to someone with nefarious intent, looking to avoid the classifier.

If Meta is this incompetent, isn't that a big enough problem? Shouldn't we still burn them at the stake?

Um... sure? I mean, there are reasons why I support different approaches that would limit the power of big centralized players. And, if you don't like Meta, come use Bluesky (where people got to yell about this at me all weekend), where things are set up in a way that one company doesn't have so much power.

But, really, no matter where you go online, you're going to discover that mistakes get made. They get made all the time.

Honestly, if you understood the actual scale, you'd probably be impressed at how few mistakes are made. But every once in a while a mistake is going to get made that makes news. And it's unlikely to be because of anything nefarious. It's really just likely to be a coincidence that this one error happened to be one where a nefarious storyline could be built around it.

If Meta can't handle this, then why should we let it handle anything?

Again, you'll find that every platform, big and small, makes these mistakes. And it's quite likely that Meta makes fewer of these mistakes, relative to the number of decisions it makes, than most other platforms. But it's still going to make mistakes. So is everyone else. Techdirt makes mistakes as well, as anyone who has ever had their comments caught in the spam filter should recognize.

But why was Meta so slow to fix these things or explain them?

It wasn't. Meta is quite likely dealing with a few dozen different ongoing crises at any one moment, some more serious than others. Internally, this was quite likely viewed as a non-story: one of those mistakes that happens thousands of times a day, most of which are never noticed, and the few that are get fixed in due time. It just wasn't seen as a priority.

But why didn't Adam Mosseri respond directly to those impacted? Doesn't that mean he was avoiding them?

The initial replies from Mosseri seemed kinda random. He responded to people like Ken "Popehat" White on Threads and Alex "Digiphile" Howard on Bluesky, rather than anyone who was directly involved. But, again, this tends to support the underlying theory that, internally, this wasn't setting off any crisis alarm bells. Mosseri responded to those posts because he just happened to see them, noting the obvious mistake and promising to have someone look into it more at a later date (i.e., not on a weekend).

Later on, I saw that he did respond to Johnson, so as more people raised issues, it's not surprising that he started paying closer attention.

None of what you say matters, because they were still blocking a news organization, and whether it was due to maliciousness or incompetence doesn't matter.

Well, yes and no. You're right that the impact is still pretty major, especially to the news sites in question. But if we want to actually fix things, it does matter to understand the underlying reasons why they happen.

I guarantee that if you misdiagnose the problem, your solution will not work and has a high likelihood of actually making things way, way worse.

As we discussed on the most recent episode of Ctrl-Alt-Speech, in the end, the perception is what matters, regardless of the reality of the story. People are going to remember simply that Meta blocked the sharing of links right after a critical article was published.

And that did create real harm.

But you're still a terrible person/what do you know/why are you bootlicking, etc?

You don't have to listen to me if you don't want to. You can also read this thread from another trust & safety expert, Denise from Dreamwidth, whose own analysis is very similar to mine. Or security expert @pwnallthethings, who offers his own, similar explanation. Everyone with some experience in this space sees this as an understandable (which is not to say acceptable!) scenario.

I spend all this time trying to get people to understand the reality of trust & safety for one reason: so that they understand what's really going on and can judge these situations accordingly. Because the mistakes do cause real harm, but there is no real way to avoid at least some mistakes over time. It's just a question of how you deal with them when they do happen.

Is it an acceptable tradeoff if it means Meta allows more links to scam, phishing, and malware sites? Because those are the tradeoffs we're talking about.

While it won't be, this should be a reminder that content moderation often involves mistakes. But also, while it's always easy to attach some truly nefarious reason to things (e.g., "anti-conservative bias"), it's often more just "the company is bad at this, because every company is bad at this, because the scale is more massive than anyone can comprehend."

Sometimes the system sucks. And sometimes the system simply over-reacted to one particular column and Streisanded the whole damn thing into the stratosphere. And that's useful in making people aware. But if people are going to be aware, they should be aware of how these kinds of systems work, rather than assuming mustache-twirling villainy where there's not likely to be any.
