Matt Taibbi Can’t Comprehend That There Are Reasons To Study Propaganda Information Flows, So He Insists It Must Be Nefarious

by
Mike Masnick
from Techdirt on (#6A2VP)

Over the last few months, Elon Musk's handpicked journalists have revealed less and less with each new edition of the "Twitter Files," to the point that even those of us who write about this area have mostly been skimming each new release, confirming that, yet again, these reporters have no idea what they're talking about, are cherry-picking misleading examples, and are misrepresenting basically everything.

It's difficult to decide whether it's even worth lending these releases credibility by doing the actual work of debunking them, but sometimes a few out-of-context snippets from the Twitter Files, mostly from Matt Taibbi, get picked up by others, and it becomes necessary to dive back into the muck to clean up the mess that Matt has made yet again.

Unfortunately, this seems like one of those times.

Over the last few "Twitter Files" releases, Taibbi has been pushing hard on the false claim that, okay, maybe he can't find any actual evidence that the government tried to force Twitter to remove content, but he can find... "information about how certain university programs and non-governmental organizations received government grants... and they set up censorship programs."

"It's censorship by proxy!" Or so the claim goes.

Except, it's not even remotely accurate. The issue, again, comes down to some pretty fundamental concepts that seem to escape Taibbi's understanding. Let's go through them.

Point number one: Studying misinformation and disinformation is a worthwhile field of study. That's not to say that we should silence such things, or that we need an "arbiter of truth." But the simple fact remains that some have sought to use misinformation and disinformation to try to influence people, and studying and understanding how and why that happens is valuable.

Indeed, I personally tend to lean towards the view that most discussions regarding mis- and disinformation are overly exaggerated moral panics. I think the terms are overused, and often misused (frequently just to attack factual news that people dislike). But, in part, that's why it's important to study this stuff. And part of studying it is to actually understand how such information is spread, which includes across social media.

Point number two: It's not just a field of academic interest. For fairly obvious reasons, companies whose platforms are used to spread such information have a vested interest in understanding it as well. Though, to date, it's mostly been the social media companies that have shown the most interest, rather than, say, cable news, even as some of the evidence suggests cable news is a bigger vector for spreading such things than social media.

Still, the companies have an interest in understanding this stuff, and sometimes that includes these organizations flagging content they find and sharing it with the companies for the sole purpose of letting those companies evaluate whether the content violates existing policies. And, once again, the companies regularly did nothing after determining that the flagged accounts didn't violate any policies.

Point number three: governments also have an interest in understanding how such information flows, in part to help combat foreign influence campaigns designed to cause strife and even violence.

Note what none of these three points says: that censorship is necessary or even desired. But it's not surprising that the US government has funded some programs to better understand these phenomena, including by bringing in a variety of experts from academia, civil society, and NGOs. It's also no surprise that some of the social media companies are interested in what these research efforts find, because it might be useful.

And, really, that's basically everything Taibbi has found in his research: there are academic centers and NGOs that have received some grants from various government agencies to study mis- and disinformation flows, and sometimes Twitter communicated with those organizations. Notably, many of his findings actually show that Twitter employees disagreed with the conclusions of those research efforts. Indeed, some of the revealed emails show Twitter employees being somewhat dismissive of the quality of the research.

What none of this shows is a grand censorship operation.

However, that's what Taibbi and various gullible culture warriors in Congress are arguing, because why not?

So, some of the organizations in question have decided they finally need to do some debunking of their own. I especially appreciate the University of Washington (UW), which published a step-by-step debunker that, in any reasonable world, would completely embarrass Matt Taibbi for the very obvious, fundamental mistakes he made:

False impression: The EIP orchestrated a massive "censorship" effort. In a recent tweet thread, Matt Taibbi, one of the authors of the "Twitter Files," claimed: "According to the EIP's own data, it succeeded in getting nearly 22 million tweets labeled in the runup to the 2020 vote." That's a lot of labeled tweets! It's also not even remotely true. Taibbi seems to be conflating our team's post-hoc research mapping tweets to misleading claims about election processes and procedures with the EIP's real-time efforts to alert platforms to misleading posts that violated their policies. The EIP's research team consisted mainly of non-expert students conducting manual work without the assistance of advanced AI technology. The actual scale of the EIP's real-time efforts to alert platforms was about 0.01% of the alleged size.

Now, that's embarrassing.

There's a lot more that Taibbi misunderstands as well. For example, the freak-out over CISA:

False impression: The EIP operated as a government cut-out, funneling censorship requests from federal agencies to platforms. This impression is built around falsely framing the following facts: the founders of the EIP consulted with the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) office prior to our launch, CISA was a "partner" of the EIP, and the EIP alerted social media platforms to content EIP researchers analyzed and found to be in violation of the platforms' stated policies. These are all true claims - and in fact, we reported them ourselves in the EIP's March 2021 final report. But the false impression relies on the omission of other key facts. CISA did not found, fund, or otherwise control the EIP. CISA did not send content to the EIP to analyze, and the EIP did not flag content to social media platforms on behalf of CISA.

There are multiple other false claims that UW debunks as well, including that it was a partisan effort, that it happened in secret, or that it did anything related to content moderation. None of those are true.

The Stanford Internet Observatory (SIO), which works with UW on some of these programs, put out a similar debunker statement as well. For whatever reason, the SIO seems to play a central role in Taibbi's fever dream of "government-driven censorship." He focuses on projects like the Election Integrity Partnership and the Virality Project, both of which were focused on studying the flows of viral misinformation.

In Taibbi's world, these were really government censorship programs. Except, as SIO points out, they weren't funded by the government:

Does the SIO or EIP receive funding from the federal government?

As part of Stanford University, the SIO receives gift and grant funding to support its work. In 2021, the SIO received a five-year grant from the National Science Foundation, an independent government agency, awarding a total of $748,437 over a five-year period to support research into the spread of misinformation on the internet during real-time events. SIO applied for and received the grant after the 2020 election. None of the NSF funds, or any other government funding, was used to study the 2020 election or to support the Virality Project. The NSF is the SIO's sole source of government funding.

They also highlight how the Virality Project's work on vaccine disinformation was never about "censorship."

Did the SIO's Virality Project censor social media content regarding coronavirus vaccine side-effects?

No. The VP did not censor or ask social media platforms to remove any social media content regarding coronavirus vaccine side effects. Theories stating otherwise are inaccurate and based on distortions of email exchanges in the Twitter Files. The Project's engagement with government agencies at the local, state, or federal level consisted of factual briefings about commentary about the vaccine circulating on social media.

The VP's work centered on identification and analysis of social media commentary relating to the COVID-19 vaccine, including emerging rumors about the vaccine where the truth of the issue discussed could not yet be determined. The VP provided public information about observed social media trends that could be used by social media platforms and public health communicators to inform their responses and further public dialogue. Rather than attempting to censor speech, the VP's goal was to share its analysis of social media trends so that social media platforms and public health officials were prepared to respond to widely shared narratives. In its work, the Project identified several categories of allegations on Twitter relating to coronavirus vaccines, and asked platforms, including Twitter, which categories were of interest to them. Decisions to remove or flag tweets were made by Twitter.

In other words, as was obvious to anyone who actually followed any of this while these projects were up and running, these are not examples of "censorship" regimes. Nor are they efforts to silence anyone. They're research programs on information flows. That's also clear if you skip Taibbi's bizarrely disjointed commentary and just look at the actual materials he presents.

In a normal world, the level of outright nonsense and mistakes in Taibbi's work would leave his credibility completely shot going forward. Instead, he's become a hero to a certain brand of clueless troll. It's the kind of transformation that would be interesting to study and understand, but I assume Taibbi would just build a grand conspiracy theory about how doing so was merely an attempt by the Illuminati to silence him.
