
Has Wired Given Up On Fact Checking? Publishes Facts-Optional Screed Against Section 230 That Gets Almost Everything Wrong

by Mike Masnick, from Techdirt

What is going on at Wired Magazine? A few years ago, the magazine went on a bit of a binge with some articles that completely misrepresented Section 230. While I felt those articles were extraordinarily misleading, at least they seemed to mostly live in the world of facts.

Its latest piece goes so far beyond all facts that it's on another plane of existence, where facts don't exist and vibes rule the world. Wired has published an article that either wasn't fact-checked or edited, or if it was, whoever is responsible failed at their job. The piece is by Jaron Lanier and Allison Stanger. Jaron Lanier being wrong about things online is nothing new. It's kinda become his brand. Stanger, apparently, has a new book coming out entitled "Who Elected Big Tech?" and based on the overview of a course she taught under the same name, it does not inspire confidence:

An unprecedented shift in the balance of power between multinational industry and national governments has been a necessary condition for these new challenges. How else could a freely elected American president be silenced by Google, Twitter, and Facebook? How else could Facebook's Instagram be exposed as knowingly causing harm to teenagers without government penalty? How did America reach the point where Big Tech has the capacity to mount foreign policies?

I mean, we've discussed all three of those things in great detail on Techdirt, and all of them require a lot more nuance than is presented here. The sites did not "silence" a President. They suspended his account for violating their rules (after years of bending over backwards to pretend he did not violate their rules). And the suspension did not silence him in any way. He was still able to speak, and whenever he did, his words were immediately conveyed on those same platforms.

And the whole "knowingly causing harm to teenagers" thing is, yet again, ridiculously misleading. Meta did internal research to try to find out whether it was causing harm in order to stop causing harm. It researched 24 categories; in 23 of them, the research suggested no significant harm, and only one raised concerns, which Facebook's internal research highlighted so that the company could try to address the harm and minimize it. And the government has been trying to penalize them ever since, but has failed, because the "penalties" are unconstitutional.

Either way, let's return to this article. The title is "The One Internet Hack That Could Save Everything." With the provocative subhed: "It's so simple: Axe 26 words from the Communications Decency Act. Welcome to a world without Section 230."

Now, we've spent the better part of the last 25 years debunking nonsense about Section 230, but this may be the worst piece we've ever seen on this topic. It does not understand how Section 230 works. It does not understand how the First Amendment works. It's not clear it understands how the internet works.

But also, it's just not well written. I was completely confused about the point the article was trying to make, and it was only on a third reading that I finally understood the extraordinarily wrong claim at its heart: that if you got rid of Section 230, websites would have to moderate based on the First Amendment - but also they would magically get rid of harassment and other bad content while being forced to leave up the good content. It's magic fairytale thinking that has nothing to do with reality. There's also some nonsense about privacy and copyright that has nothing to do with Section 230 at all, but the authors seem wholly unaware of that fairly basic fact.

I'm going to skip over the first section of the article, because it's just confused babble, and move on to some really weird claims about Section 230. Specifically, that it somehow created a business model:

The impact on the public sphere has been, to say the least, substantial. In removing so much liability, Section 230 forced a certain sort of business plan into prominence, one based not on uniquely available information from a given service, but on the paid arbitration of access and influence. Thus, we ended up with the deceptively named "advertising" business model - and a whole society thrust into a 24/7 competition for attention. A polarized social media ecosystem. Recommender algorithms that mediate content and optimize for engagement. We have learned that humans are most engaged, at least from an algorithm's point of view, by rapid-fire emotions related to fight-or-flight responses and other high-stakes interactions. In enabling the privatization of the public square, Section 230 has inadvertently rendered impossible deliberation between citizens who are supposed to be equal before the law. Perverse incentives promote cranky speech, which effectively suppresses thoughtful speech.

First of all... what? Literally none of that makes sense, nor is any citation or explanation given for what is entirely a "vibes"-based argument. Section 230 has nothing to do with the advertising market directly. Advertising existed prior to Section 230 and has been a way to subsidize content going back centuries. It's unclear how the authors think Section 230 is somehow responsible for internet advertising as a business model, and the article does nothing to clarify why that would be the case. Because it's just wrong. There is no way to support it.

Second, the claim that algorithms "optimize for engagement" is also simply false. Some algorithms definitely do optimize for engagement. Many do not. Neither the ones that do, nor the ones that don't, have much (if anything) to do with Section 230. They kinda have to do with capitalism and the demands of investors for returns. That's not a Section 230 issue at all.

Furthermore, as tons of research keeps showing, if you only optimize for engagement, it just leads to anger and nonsense, to the point that it drives both advertisers and users away over time. And that's why sites like Facebook and YouTube have both spent much of the past decade toning down those algorithms to be less about "engagement," because they realized it was counterproductive in the long term. The idea that algorithms are inherently about engagement is thinking that is at least a decade out of date.

The idea that algorithms were brought about because of Section 230 is easily debunked by a simple fact: the first company that really focused on algorithmically recommending content to people was not hosting user-generated content at all. It was Netflix, trying to better recommend movies to people (remember that?).

The reason we have algorithms is not Section 230, but because without algorithms there's so much junk on the internet it's hard to find what you want. Recommendation algorithms exist because they're useful and because of the sheer amount of content online.

Taking away Section 230 wouldn't change that one bit, because recommendations are inherently First Amendment-protected speech. A recommendation is an opinion.

The authors seem wholly confused about what Section 230 actually does. The following paragraph, for example, makes no sense at all. It's in the "not even wrong" category: it defies any attempt to explain how nearly every part of it is wrong.

And then there is the economic imbalance. Internet platforms that rely on Section 230 tend to harvest personal data for their business goals without appropriate compensation. Even when data ought to be protected or prohibited by copyright or some other method, Section 230 often effectively places the onus on the violated party through the requirement of takedown notices. That switch in the order of events related to liability is comparable to the difference between opt-in and opt-out in privacy. It might seem like a technicality, but it is actually a massive difference that produces substantial harms. For example, workers in information-related industries such as local news have seen stark declines in economic success and prestige. Section 230 makes a world of data dignity functionally impossible.

There is no "economic imbalance" among those who use 230. Section 230 protects any interactive computer service or user (everyone always forgets the users) for sharing third-party content. It has protected Techdirt in court, and under no standard anywhere would anyone ever argue that Techdirt has an "economic imbalance." It has protected people for forwarding emails. It has protected people for retweeting content. It doesn't just protect big companies.

The discussion about copyright and personal data is not just wrong, but simply, obviously, wholly unrelated to Section 230. Section 230 explicitly exempts intellectual property law. There is no issue whatsoever with copyright-covered content somehow being impacted by Section 230. That's just not how it works.

The statement that Section 230 "often effectively places the onus on the violated party through the requirement of takedown notices" is even dumber, because there are no takedown notices under 230. I'm guessing the authors of the piece probably mean DMCA 512, which is about copyright and does have takedown notices, but that has fuck all to do with Section 230. This is the sort of thing that a fact checker would normally catch. If Wired had one.

Similarly, data protection and privacy laws are unrelated to Section 230. The only time Section 230 comes up in relation to privacy laws is when state legislatures (hello, California!) try to pass a law about speech that they pretend is a privacy law.

Literally nothing in this paragraph makes any sense at all. You have to deliberately work hard to misunderstand Section 230 this badly.

The authors of this piece basically misrepresent Section 230 at every opportunity. They don't understand what it does and how it works. They blame it for things it has nothing to do with (advertising business models? algorithms?) and then associate it with things very clearly beyond its purview (copyright, takedown notices, and data protection).

Honestly, if Wired had any integrity at all, they'd pull this piece and admit it wasn't even half-baked yet.

Then, finally, we get to... I guess you'd call it the point of the article? Apparently the authors don't like content moderation and claim that content moderation is "beholden to the quest for attention and engagement," and I have no idea what that even means. If the concern before was that algorithms were driven by that quest for attention, why would moderation also be driven by it? Isn't content moderation an attempt to push back on that trend by making sure that content follows rules? Not according to the authors of this article, who seem to think the efforts of trust & safety teams to make sure users follow the rules are... somehow driven by attention and engagement?

To date, content moderation has too often been beholden to the quest for attention and engagement, regularly disregarding the stated corporate terms of service. Rules are often bent to maximize engagement through inflammation, which can mean doing harm to personal and societal well-being. The excuse is that this is not censorship, but is it really not? Arbitrary rules, doxing practices, and cancel culture have led to something hard to distinguish from censorship for the sober and well-meaning. At the same time, the amplification of incendiary free speech for bad actors encourages mob rule. All of this takes place under Section 230's liability shield, which effectively gives tech companies carte blanche for a short-sighted version of self-serving behavior. Disdain for these companies - which found a way to be more than carriers, and yet not publishers - is the only thing everyone in America seems to agree on now.

It's almost as if the authors have no experience in trust & safety and have never spoken to anyone in trust & safety, yet pretend to understand it. The claim that the rules are "arbitrary," or that enforcing rules has something to do with either "doxing practices" or "cancel culture," suggests people who have never, not once, been in a position where they had to moderate any online conversation.

From there, the piece goes even further off the rails, arguing (for no clear reason) that YouTube is in a post-230 world and that ExTwitter is being destroyed by Section 230. Why? They don't explain.

In one sense, it's already happening. Certain companies are taking steps on their own, right now, toward a post-230 future. YouTube, for instance, is diligently building alternative income streams to advertising, and top creators are getting more options for earning. Together, these voluntary moves suggest a different, more publisher-like self-concept. YouTube is ready for the post-230 era, it would seem. (On the other hand, a company like X, which leans hard into 230, has been destroying its value with astonishing velocity.) Plus, there have always been exceptions to Section 230. For instance, if someone enters private information, there are laws to protect it in some cases. That means dating websites, say, have the option of charging fees instead of relying on a 230-style business model. The existence of these exceptions suggests that more examples would appear in a post-230 world.

The alternative business models that YouTube has created, yet again, have nothing to do with Section 230. It's such a weird, nonsensical point that I'm honestly beginning to wonder if the piece was written about something else entirely, and at the last minute they tried to shove 230 in there. Whether or not you build alternative income streams to advertising is wholly unrelated to Section 230. Again, Techdirt, whose comments are protected by 230, does not make money from advertising (we make money from user support, thank you very much, please support us). Having user support doesn't put us in a post-230 world; it shows why 230 is so important.

As for ExTwitter, the destruction of its value can easily be placed on one person, not Section 230.

And the line "relying on a 230-style business model" makes no sense. There is no 230-style business model.

All of this seems based on the blatantly incorrect belief that Section 230 encourages an advertising-based business model. But that's never even been close to correct. I mean, the first big Section 230 case was Zeran v. AOL, in which AOL (which at the time made money on subscription fees more than advertising) was found to be protected. And Section 230 was written in response to lawsuits against CompuServe and Prodigy, two other subscription-based services.

The idea that 230 creates an advertising or data-based business model is not just ahistorical, it's provably false.

The article then "returns" to the question of online speech and shows how incredibly confused its authors are:

Let's return to speech. One difference between speech before and after the internet was that the scale of the internet "weaponized" some instances of speech that would not have been as significant before. An individual yelling threats at someone in passing, for instance, is quite different from a million people yelling threats. This type of amplified, stochastic harassment has become a constant feature of our times - chilling speech - and it is possible that in a post-230 world, platforms would be compelled to prevent it.

Wait, what? Literally three paragraphs earlier you were complaining that content moderation is evil "censorship" driven by "engagement." Now you're saying that without Section 230, magically, websites would be "compelled to prevent" harassment.

This gets the law backwards. Under Section 230, websites have the freedom to quickly respond to harassment. That's what content moderation is. Without Section 230 (as we know from pre-230 cases), sites would be hindered in their ability to do exactly that.

Underpinning all of this - which the authors seem wholly ignorant of - is the way the First Amendment works. The First Amendment in a pre-230 world made it clear that a distributor could only be held liable for speech if (1) they knew about it and (2) the speech violated the law. That means without Section 230, most platforms' best move would be to avoid knowledge. That means less moderation, not more. It means more harassment, not less.

Also, nearly all "harassment" outside the most extreme cases is protected by the First Amendment as well.

It's almost as if the authors have no idea what they're talking about.

It is sometimes imagined that there are only two choices: a world of viral harassment or a world of top-down smothering of speech. But there is a third option: a world of speech in which viral harassment is tamped down but ideas are not. Defining this middle option will require some time to sort out, but it is doable without 230, just as it is possible to define the limits of viral financial transactions to make Ponzi schemes illegal.

Lol, what? "Viral harassment is tamped down but ideas are not?" What the fuck do these people think every trust & safety team in the world is doing right now? They're trying to tamp down harassment, not ideas. And the reason they can do so cleanly, without having to involve lawyers at every move, is because Section 230 protects them in making those decisions.

And then... it gets dumber.

With this accomplished, content moderation for companies would be a vastly simpler proposition. Companies need only uphold the First Amendment, and the courts would finally develop the precedents and tests to help them do that, rather than the onus of moderation being entirely on companies alone.

No one - and I do mean no one - wants a website where companies can only moderate based on the First Amendment. Such a site would almost immediately turn into harassment, abuse, and garbage central. Most speech is protected under the First Amendment. Very, very, very little speech is not protected. The very "harassment" that the authors complain about literally one paragraph above is almost entirely protected under the First Amendment.

Also, if you could only moderate based on the First Amendment, all online forums would be the same. The wonder of the internet right now is that every online forum gets to set its own rules and moderate accordingly. And that's because Section 230 allows them to do so without fear of litigation over their choices.

Under this plan, you couldn't (for example) have a knitting community with a "no politics" rule. You'd have to allow all legal speech. That's... beyond stupid.

And, as if to underline that the authors, the fact checkers, and the editors have no idea how any of this works, they throw this in:

The United States has more than 200 years of First Amendment jurisprudence that establishes categories of less protected speech - obscenity, defamation, incitement, fighting words - to build upon, and Section 230 has effectively impeded its development for online expression. The perverse result has been the elevation of algorithms over constitutional law, effectively ceding judicial power.

The first sentence is partially right. There is jurisprudence establishing exceptions to the First Amendment, though those exceptions are very narrow and very clearly defined. Indeed, the inclusion of "fighting words" in the list above shows that the authors are unaware that, over the past 50 years, the fighting words doctrine has been effectively deprecated as an exception.

It's also just blatantly, factually incorrect that 230 has somehow "impeded" the development of First Amendment exceptions. It's as if the authors are wholly unaware of the myriad attempts, in the decades since Section 230 went into effect, to convince courts to establish new exceptions. Most notable was US v. Stevens, in which the Supreme Court made it clear that it wasn't really open to adding new exceptions to the First Amendment.

That was the case about "animal crush" videos showing cruelty to animals. The court ruled that making such videos illegal violated the First Amendment. And if the Supreme Court is saying that "animal crush" videos are protected by the First Amendment, it seems highly unlikely to add a rando exception for "people were mean to me online!" (I mean, Clarence Thomas might, but he's not enough.)

When the jurisprudential dust has cleared, the United States would be exporting the democracy-promoting First Amendment to other countries rather than Section 230's authoritarian-friendly liability shield and the sewer of least-common-denominator content that holds human attention but does not bring out the best in us.

Again, this is the opposite of reality. The "sewer of least-common-denominator content" is what you get without 230, when you encourage websites to look the other way to avoid liability for any content. How could the authors not have done the most basic of research to understand this?

In a functional democracy, after all, the virtual public square should belong to everyone, so it is important that its conversations are those in which all voices can be heard. This can only happen with dignity for all, not in a brawl.

And if you take away 230 you get the brawl, because you limit the ability of websites to moderate.

Honestly, this entire article seems based on the wholly backwards belief that getting rid of 230 leads to better content moderation, even as the authors complain about content moderation. They don't seem to understand any of it.

Section 230 perpetuates an illusion that today's social media companies are common carriers like the phone companies that preceded them, but they are not. Unlike Ma Bell, they curate the content they transmit to users. We need a robust public conversation about what we, the people, want this space to look like, and what practices and guardrails are likely to strengthen the ties that bind us in common purpose as a democracy. Virality might come to be understood as an enemy of reason and human values. We can have culture and conversations without a mad race for total attention.

Your problem is with the First Amendment, not 230.

Section 230 does not treat websites as common carriers. It's literally the opposite of that. It's saying (correctly) that they're not common carriers, and that they need the right to set rules and enforce them in order to enable conversations "without a mad race for total attention."

The article then goes off on some (again, nonsensical) tangent about AI, and then once again shows that the authors know nothing about how the First Amendment works:

When the US government said the American public owned the airwaves so that television broadcasting could be regulated, it put in place regulations that supported the common good. The internet affects everyone, so we must devise measures to ensure that our digital-age public discourse is of high quality and includes everyone. In the television era, the fairness doctrine laid that groundwork. A similar lens needs to be developed for the internet age.

The law on this stuff is pretty clear.

The Supreme Court made it clear that broadcast TV and radio could be regulated only because they use public spectrum. The government cannot (and does not) regulate cable TV or the internet the same way, because they do not.

There's even more but I need to end this piece before I bang my head on the desk one more time.

The authors do not understand Section 230, the First Amendment, or how content moderation works. Yet they position themselves as experts. They get the law backwards, upside down, and twisted inside out.

Normally, an editor or a fact-checker would maybe catch those things, but apparently Wired will let just anyone spew nonsense on their pages these days. And, yes, I get it that these are complex topics. But they're also topics where there are dozens of actual experts available who could take one look at the claims in this piece and point out just how wrong almost every confident claim is. I get that Lanier is "internet famous," but that doesn't make him worth publishing without someone who actually knows what they're talking about reviewing his work to call out the myriad factual errors.
