
Four ways the Supreme Court could reshape the web

by
Tate Ryan-Mosley
from MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

All eyes were on the US Supreme Court this week as it weighed arguments in two cases relating to recommendation algorithms and content moderation, both core parts of how the internet works. It was also the first time SCOTUS has considered Section 230, a 1996 legal provision that gives web companies a shield to publish and moderate content as they see fit. We won't get a ruling on either case for a few months yet, but when we do, it could be a Very Big Deal for the future of the internet.

We shouldn't read too much into the oral arguments heard this week; they're not a firm indication of how the court will rule (likely by summer). However, the questions the justices asked can signal how the court is thinking about a case, letting us extrapolate what might happen with more confidence. I've broken down some of the more probable scenarios below.

First, some context. The two cases, Gonzalez v. Google and Twitter v. Taamneh, both deal with holding online platforms responsible for harmful effects of the content they host. Both were filed by the families of people killed in ISIS terrorist attacks in 2015 and 2017. They differ in many ways, but at their core is a similar claim: that Google and Twitter aided terrorist recruitment on their platforms, and thus violated the law.

Gonzalez has garnered the most attention for its argument that Section 230 protection shouldn't extend to recommendation algorithms. If the Supreme Court rules that the law does cover these algorithms, Google hasn't broken it; if it rules that it doesn't, Google could be held liable.

The core question is whether the presentation of content (which is protected under the law) is different from the recommendation of content. (I've written about why this is actually a really hard distinction, and why experts are so concerned about the unintended consequences of drawing this line legally.)

Bottom line: It looks as if the justices are hesitant to drastically reinterpret Section 230. However, content moderation could come in for greater legal scrutiny, since the outcome of Twitter v. Taamneh seems less clear.

The Supreme Court, on the whole, appeared this week to be less aggressive about reinterpreting Section 230 than anticipated. It displayed a healthy dose of humility about its own understanding of how the internet works. "These are not, like, the nine greatest experts on the internet," joked Justice Elena Kagan during Tuesday's hearing.

Ahead of this week, many experts were extremely skeptical about the court's ability to understand the technical complexity involved in this case. They will be heartened that the justices themselves are acknowledging the limitations of their knowledge.

So where do we go from here? These are the potential scenarios, in no particular order:

Scenario 1: One or both cases are dismissed or sent back.

Several justices voiced confusion about what exactly the Gonzalez case was arguing, and how the case got all the way up to the Supreme Court. The plaintiff's lawyers received criticism for poor arguments, and there's speculation that the case might be dismissed. This would mean the Supreme Court could avoid ruling on Section 230 at all, and send a clear signal that Congress ought to deal with the problem. There's also a chance that the Taamneh case could go back to the lower court.

Scenario 2: Google wins in Gonzalez, but the way Section 230 is interpreted changes.

When the Supreme Court issues a ruling, it also publishes opinions explaining it. These opinions offer legal rationales that shape how lower courts interpret the ruling and the law going forward. So even if Google wins, the court could still write something that changes the way Section 230 is interpreted.

It's possible that the court could open a whole new can of worms if it does this. For example, there was lots of discussion of "neutral algorithms" during the oral arguments, tapping into the age-old myth that technology can be separated from messy, complex societal issues. It's unclear exactly what would constitute algorithmic neutrality, and much has been written about the inherently non-neutral nature of AI.

Scenario 3: The Taamneh ruling becomes the heavy hitter.

The oral arguments in Taamneh seemed to have more teeth. The justices appeared more up to speed on the basics of the case, and their questions focused on how the court should interpret the Antiterrorism Act. Though the arguments didn't touch on Section 230, the result could still change how platforms are held responsible for content moderation.

Arguments in Taamneh centered on what Twitter knew about how ISIS used its platform and whether the company's actions (or inactions) led to ISIS recruitment. If the court agrees with Taamneh, platforms might be incentivized to look away from potentially illegal content so they can claim immunity, which could make the internet less safe. On the other hand, Twitter said it relied on government authorities to inform the company about terrorist content, which could raise other questions about free speech.

Scenario 4: Section 230 is repealed.

This now seems unlikely, and if it happened, chaos would ensue, at least among tech executives. However, the upside is that Congress might be pushed to actually pass comprehensive legislation holding platforms accountable for harms they cause.

(If you want even more SCOTUS content, here are some good takes from Michael Kanaan, who was the first chairperson of artificial intelligence for the US Air Force, and Danielle Citron, a UVA law professor, among the many watchers weighing in.)

What else I'm reading about this week
  • The European Union banned TikTok on its staff devices. This is just the latest clampdown by governments on the Chinese social media app. Many US states have banned the use of the app among government employees over concerns (echoed by the FBI) about espionage and influence operations by the Chinese Communist Party, and a ban on the app on federal devices was enacted in December.
  • This great story from Wired by Vauhini Vara is about the grip big tech platforms have on our lives and economies, even when we try to escape them. Vara details how Buy Nothing, a movement of people trying to limit their consumption by exchanging free stuff, tried to leave Facebook and start its own app, and the mess that resulted.
  • Biden went to Kyiv on a surprise trip on the anniversary of the Russian invasion of Ukraine. I recommend reading this highly entertaining press pool report from the Wall Street Journal's Sabrina Siddiqui that details the preparations for the secret trip.
What I learned this week

Young people seem to trust what influencers have to say about politics ... a lot. A new study by researchers at Pennsylvania State University's Media Effects Research Lab suggests that social media influencers "may be a powerful asset for political campaigns." That's because trust among their followers carries over to political messaging.

The study involved a survey of almost 400 US university students. It found that political messages from influencers have a meaningful impact on their followers' political opinions, especially if they're viewed as trustworthy, knowledgeable, or attractive.

Influencers, both national and local, are becoming a bigger part of political campaigning. That's not necessarily a wholly bad thing. However, it's still a cause for concern: other researchers have noted that people are particularly vulnerable to misinformation from influencers.
