
Stop Blaming Algorithms For Misinformation And Threats To Democracy; The Real Problem Is Societal

by Mike Masnick

For quite some time now, we've pointed out that we should stop blaming technology for problems that are actually societal. Look deeper at nearly every "big tech problem" and you tend to find that the problem has to do with people, not technology. "Fixing" the technology isn't really going to fix anything when technology isn't the real problem. Indeed, many proposals to "fix" the tech industry seem likely to exacerbate the very problems they're meant to solve.

Of course, the "techlash" narrative is incredibly powerful, and the media has really run with it of late (as have politicians). So it's nice to see that at least Wired is starting to push back on that narrative. A new cover story, by Gideon Lewis-Kraus, makes it clear that "Bad Algorithms Didn't Break Democracy." It's a great article. It acknowledges the techlash narrative, and even concedes that it's appealing at a surface level:

It's easy to understand why this narrative is so appealing. The big social media firms enjoy enormous power; their algorithms are inscrutable; they seem to lack a proper understanding of what undergirds the public sphere. Their responses to widespread, serious criticism can be grandiose and smarmy. "I understand the concerns that people have about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people's hands," said Mark Zuckerberg, in an October speech at Georgetown University. "I'm here today because I believe we must continue to stand for free expression."

If these corporations spoke openly about their own financial interest in contagious memes, they would at least seem honest; when they defend themselves in the language of free expression, they leave themselves open to the charge of bad faith.

But as the piece goes on to highlight, this narrative doesn't really make much sense -- and despite many attempts to back it up, the actual evidence is lacking:

Over the past few years, the idea that Facebook, YouTube, and Twitter somehow created the conditions of our rancor -- and, by extension, the proposal that new regulations or algorithmic reforms might restore some arcadian era of "evidential argument" -- has not stood up well to scrutiny. Immediately after the 2016 election, the phenomenon of "fake news" spread by Macedonian teenagers and Russia's Internet Research Agency became shorthand for social media's wholesale perversion of democracy; a year later, researchers at Harvard University's Berkman Klein Center concluded that the circulation of abjectly fake news "seems to have played a relatively small role in the overall scheme of things." A recent study by academics in Canada, France, and the US indicates that online media use actually decreases support for right-wing populism in the US. Another study examined some 330,000 recent YouTube videos, many associated with the far right, and found little evidence for the strong "algorithmic radicalization" theory, which holds YouTube's recommendation engine responsible for the delivery of increasingly extreme content.

The article has a lot more in it -- and you should read the whole thing -- but it's nice to see that it recognizes the real issue is people. If there's a lot of bad stuff on Facebook, it's because that's what its users want. You have to be incredibly paternalistic to assume that the best way to deal with that is to have Facebook deny users what they want.

In the end, as it becomes increasingly untenable to blame the power of a few suppliers for the unfortunate demands of their users, it falls to tech's critics to take the fact of demand -- that people's desires are real -- even more seriously than the companies themselves do. Those desires require a form of redress that goes well beyond "the algorithm." To worry about whether a particular statement is true or not, as public fact-checkers and media-literacy projects do, is to miss the point. It makes about as much sense as asking whether somebody's tattoo is true. A thorough demand-side account would allow that it might in fact be tribalism all the way down: that we have our desires and priorities, and they have theirs, and both camps will look for the supply that meets their respective demands.

Just because you accept that preferences are rooted in group identity, however, doesn't mean you have to believe that all preferences are equal, morally or otherwise. It just means our burden has little to do with limiting or moderating the supply of political messages or convincing those with false beliefs to replace them with true ones. Rather, the challenge is to persuade the other team to change its demands -- to convince them that they'd be better off with different aspirations. This is not a technological project but a political one.

Perhaps it's time for a backlash to the techlash. And, at the very least, it's time that instead of just blaming the technology, we all take a closer look at ourselves. If it's a political or societal problem, we're not going to fix it (at all) by blaming Facebook.


