
We Teamed Up With Bluesky To Tell The Supreme Court How State Social Media Laws Don’t Take Into Account User Empowerment

by Mike Masnick from Techdirt on (#6GZNK)

As you know, the Supreme Court is now considering the NetChoice/CCIA cases challenging two similar (but not identical) state laws regarding social media moderation. The laws in Florida and Texas came about around the same time, and were clearly written to target ideological speech. Both of them put restrictions on how certain social media apps can moderate or even recommend certain speech.

As you'll recall, district courts in both states found the laws to be obviously unconstitutional attacks on the 1st Amendment. On appeal, things went differently. The 11th Circuit agreed that most of the Florida law was unconstitutional (we think they're wrong about the parts they let stand, too). But the 5th Circuit went rogue and said that of course states can set whatever moderation laws they want (which is in conflict with later 5th Circuit rulings regarding state pressure on moderation, but I digress).

Anyway, you can follow along on the docket for the case at the Supreme Court, where NetChoice/CCIA filed their brief recently. This week, a bunch of amicus briefs are being filed, some of which are really interesting.

I wanted to focus on one brief in particular in this post: our own, written by Cathy Gellis. We teamed up with Bluesky (the alternative microblogging service that is building a federated protocol for social media) and Chris Riley (in his personal capacity as the operator of a Mastodon instance) to make some points that we don't think other amici are likely to make.

The key point we tried to make is that so many of the arguments being thrown back and forth are really about who gets to determine how a website moderates: should it be the government or the website? If those are the only two options, then it does seem obvious that it should be the website, not the government.

But the key to our brief is pointing out that this assumes, falsely, that this is the only possible model out there. Instead, we highlight that it is possible to envision a world in which users themselves get to decide, and any ruling that says the government gets to decide would fundamentally make that kind of user freedom and empowerment impossible.

A fundamental part of my Protocols, Not Platforms article (which, in part, helped inspire Bluesky) was that a protocols-based approach would empower users to have more control themselves, or at least let them choose which intermediaries they trust to help them with algorithms and moderation, rather than relying on the same platform that hosts the content itself.

For example, Bluesky has an amazingly useful feature called "custom feeds" and it has created a marketplace of algorithms so that you can create your own algorithms and share them with others, or you can just decide to use someone else's algorithm (or even adapt it further yourself). That is, rather than relying on a company like Twitter or Facebook to decide what you should see, on Bluesky, you get to make those decisions yourself, or hand them off to someone you trust.
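To make that concrete, here is a rough sketch of what one of these third-party feed generators can look like. In the AT Protocol that Bluesky is built on, a custom feed is just an outside service that answers the app.bsky.feed.getFeedSkeleton query with a list of post URIs, which Bluesky's own infrastructure then hydrates into full posts. Everything specific below (the "slow news" theme, the hard-coded post list, the port) is a made-up assumption for illustration, not Bluesky's own code; the point is simply that the selection logic runs on the feed author's server.

```typescript
// A rough sketch of a third-party Bluesky "custom feed" generator.
// The feed's theme ("slow news"), the hard-coded post list, and the port
// are illustrative assumptions -- this is not Bluesky's own code.
import http from "node:http";

// In a real generator this list would be built by consuming the network
// firehose; here it is just a hand-curated, in-memory set of AT URIs.
const SLOW_NEWS_POSTS: string[] = [
  "at://did:plc:example/app.bsky.feed.post/3k2aaaaaaaaaa",
  "at://did:plc:example/app.bsky.feed.post/3k2bbbbbbbbbb",
];

const server = http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");

  // Feed generators answer the app.bsky.feed.getFeedSkeleton query with
  // bare post URIs; Bluesky's infrastructure hydrates them into full posts.
  if (url.pathname === "/xrpc/app.bsky.feed.getFeedSkeleton") {
    const limit = Number(url.searchParams.get("limit") ?? "50");
    const feed = SLOW_NEWS_POSTS.slice(0, limit).map((post) => ({ post }));
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ feed }));
    return;
  }

  res.writeHead(404);
  res.end();
});

// The selection logic above runs entirely on the feed author's server;
// Bluesky never needs to see it in order for users to subscribe to the feed.
server.listen(3000);
```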

But, in that architecture, Bluesky often won't even know what algorithms people are using (the algorithms don't have to live on Bluesky's servers; indeed, Bluesky itself might never even be aware of them). Should these laws (or laws like them) apply to Bluesky, that kind of ecosystem would effectively be barred, because the law would limit what kinds of algorithms could work on Bluesky, and Bluesky itself would have no way to control those third-party algorithms.

This ecosystem of platforms is necessary in order for there to be meaningful choices in what expression Internet users experience online. Platform choice, and the customization algorithmic choice enables, are what helps realize the expression-promoting value of the Internet and ensures it captures a diversity of expression by putting the choices of what expression to be exposed to in the hands of users. It is not for the government to take away this choice, creating a platform or algorithmic monoculture, which is what the Florida and Texas laws threaten.

Similarly, we used the example of how comments here at Techdirt are moderated, where much of the moderation is actually handled by community votes, and how there's no way for such community moderation practices to comply with these laws either:

But while the Copia Institute's moderation practices can be described in broad strokes, they cannot be articulated with the specificity that the Texas law would require. For instance, the law requires that platforms disclose their moderation standards. See, e.g., TEX. BUS. & COM. CODE 120.051. And it also puts limits on how platforms can do this moderation. See, e.g., TEX. CIV. PRAC. & REM. CODE 143A.002 (banning certain moderation decisions, including those based on the "viewpoint" of the user expression being moderated). But even if the Copia Institute wanted to comply with the Texas law, it could not. For instance, it could not disclose its moderation policy because its moderation system is primarily community-driven and subject to the community's whims and values of the moment. Which also means that it could not guarantee that moderation always comported with a preannounced "Acceptable Use Policy," which the Texas law also requires. TEX. BUS. & COM. CODE 120.052. It would also be infeasible to meet any of the Texas law's additional burdensome demands, including to provide notice to any affected user, TEX. BUS. & COM. CODE 120.103, maintain a complaint system, TEX. BUS. & COM. CODE 120.101, or offer a process for appeal, TEX. BUS. & COM. CODE 120.103. None of these faculties are features the Copia Institute has the resources or infrastructure to support. In other words, the Texas law sets up a situation where if the Copia Institute cannot host user-provided content exactly the way Texas demands, it effectively does not get to host any user-provided content at all. Or, potentially even worse, it would leave Techdirt in the position of having to host odious content, including content threatening to it, its staff, or others in its reader community, in order to satisfy Texas's moderation requirements.
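To illustrate why that kind of moderation can't be written down in advance, here is a minimal sketch of threshold-style community moderation. This is not Techdirt's actual code; the vote categories and the margin are assumptions for illustration. The point is that whether a comment gets hidden turns entirely on how readers happen to vote at that moment, so there is no fixed, pre-announced standard for the site operator to disclose, and no "viewpoint" decision by the operator to point to.

```typescript
// A minimal sketch of threshold-style community moderation (not Techdirt's
// actual code). Whether a comment is folded away depends entirely on how
// readers happen to vote, so there is no fixed rule the site operator could
// publish in advance as an "Acceptable Use Policy."
interface CommentVotes {
  funny: number;       // positive reader votes
  insightful: number;  // positive reader votes
  flags: number;       // readers reporting the comment as abusive or spam
}

// Hypothetical threshold: hide a comment once flags outnumber positive
// votes by some margin. The margin is arbitrary and could change any time.
const FLAG_MARGIN = 5;

function isHiddenByCommunity(votes: CommentVotes): boolean {
  const positive = votes.funny + votes.insightful;
  return votes.flags - positive >= FLAG_MARGIN;
}

// A heavily flagged comment collapses behind a click-through; a mixed one
// stays visible. No site employee made a "viewpoint" call either way.
console.log(isHiddenByCommunity({ funny: 1, insightful: 0, flags: 7 })); // true
console.log(isHiddenByCommunity({ funny: 4, insightful: 3, flags: 6 })); // false
```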

There are all sorts of other important 1st Amendment reasons why these laws are deeply problematic. But we assumed (almost certainly correctly) that the briefs from NetChoice/CCIA and other amici will cover all of that.

Our brief was more focused on highlighting how these issues go beyond just the 1st Amendment concerns of big websites to how they might impact a new generation of social media platforms, like Bluesky and Mastodon, whose very models and infrastructure are fundamentally different from the "giant silos" of today's major social media platforms.

Too much of the discussion assumes that there are only two parties who might have a say in the moderation of social media: governments and the platforms. But we want to make the court aware that a new generation of services is focused on enabling users themselves to make that choice, and that if these laws are allowed to stand, they could wipe out that possibility.
