The Supreme Court may overhaul how you live online
This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
Recommendation algorithms sort most of what we see online and determine how posts, news articles, and accounts you follow are prioritized on digital platforms. In the past, recommendation algorithms and their influence on our politics have been the subject of much debate: think Cambridge Analytica, filter bubbles, and the amplification of fake news.
Now they're at the center of a landmark legal case that ultimately has the power to completely change how we live online. On February 21, the Supreme Court will hear arguments in Gonzalez v. Google, which deals with allegations that Google violated the Anti-Terrorism Act when YouTube's recommendations promoted ISIS content. It's the first time the court will consider a legal provision called Section 230.
Section 230 is the legal foundation that, for decades, all the big internet companies with any user-generated content (Google, Facebook, Wikimedia, AOL, even Craigslist) built their policies and often their businesses upon. As I wrote last week, it "has long protected social platforms from lawsuits over harmful user-generated content while giving them leeway to remove posts at their discretion." (A reminder: Presidents Trump and Biden have both said they are in favor of getting rid of Section 230, which they argue gives platforms too much power with little oversight; tech companies and many free-speech advocates want to keep it.)
SCOTUS has homed in on a very specific question: Are recommendations of content the same as display of content, the latter of which is widely accepted as being covered by Section 230?
The stakes really could not be higher. As I wrote: "[I]f Section 230 is repealed or broadly reinterpreted, these companies may be forced to transform their approach to moderating content and to overhaul their platform architectures in the process."
Without getting into all the legalese here, what is important to understand is that while it might seem plausible to draw a distinction between recommendation algorithms (especially those that aid terrorists) and the display and hosting of content, technically speaking, it's a really murky distinction. Algorithms that sort by chronology, geography, or other criteria manage the display of most content in some way, and tech companies and some experts say it's not easy to draw a line between this and algorithmic amplification, which deliberately boosts certain content and can have harmful consequences (and some beneficial ones too).
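To make that murkiness concrete, here is a minimal, hypothetical sketch in Python (the names and the scoring formula are invented for illustration, not any platform's actual code): a chronological feed and an engagement-ranked feed are the same sort over the same pool of posts, and only the sort key changes.

```python
# Hypothetical sketch: "display" vs. "recommendation" as two sort keys
# over the same posts. Not any platform's real ranking code.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int
    shares: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    # "Display": newest first, with no judgment about what matters.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # "Recommendation": the same sort operation, but the key now boosts
    # whatever draws reactions (an invented weighting, for illustration).
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)
```

In this toy example, the only thing separating "display" from "recommendation" is which line of code chooses the ordering, which is roughly the distinction the justices are being asked to draw.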
While my story last week focused on the risks the ruling poses to community moderation systems online, including features like the Reddit upvote, the experts I spoke with raised a slew of concerns. Many shared the same worry: that SCOTUS won't deliver a ruling that is technically and socially nuanced, and clear.
"This Supreme Court doesn't give me a lot of confidence," Eric Goldman, a professor and dean at Santa Clara University School of Law, told me. Goldman is concerned that the ruling will have broad unintended consequences and worries about the risk of an opinion that's "an internet killer."
On the other hand, some experts told me that the harms algorithms inflict on individuals and society have reached an unacceptable level, and that even though it might be preferable to regulate algorithms through legislation, SCOTUS should take this opportunity to change internet law.
"We're all looking at the technology landscape, particularly the internet, and being like, 'This is not great,'" Hany Farid, a professor of engineering and information at the University of California, Berkeley, told me. "It's not great for us as individuals. It's not great for societies. It's not great for democracies."
In studying the online proliferation of child sexual abuse material, covid misinformation, and terrorist content, Farid has seen how content recommendation algorithms can leave users vulnerable to really destructive material.
You've probably experienced this in some way; I recently did too, and I wrote about it this week in an essay about the algorithms that consumed my digital life after my dad's latest cancer diagnosis. It's a bit serendipitous that this story came out the same week as the inaugural newsletter; it's one of the harder stories I've ever written and certainly the one in which I feel the most vulnerable. Over a decade of working in emerging tech and policy, I've studied and observed some of the most concerning impacts of surveillance capitalism, but it's a whole different thing when your own algorithms trap you in a cycle of extreme and sensitive content.
As I wrote:
I started, intentionally and unintentionally, consuming people's experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies ....
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss.
In short, my online experience on platforms like Google, Amazon, Twitter, and Instagram became overwhelmed with posts about cancer and grieving. It was unhealthy, and as my dad started to recover, the apps wouldn't let me move on with my life.
I spent months talking to experts about how overpowering and harmful recommendation algorithms can be, and about what to do when personalization turns toxic. I gathered a lot of tips for managing your digital life, but I also learned that tech companies have a really hard time controlling their own algorithms, thanks in part to machine learning.
On my Google Discover page, for example, I was seeing loads of stories about cancer and grief, even though the company's targeting policies are supposed to prevent the system from serving content about sensitive health issues.
Imagine how dangerous it is for uncontrollable, personalized streams of upsetting content to bombard teenagers struggling with an eating disorder or tendencies toward self-harm. Or a woman who recently had a miscarriage, like the friend of one reader who wrote in after my story was published. Or, as in the Gonzalez case, young men who get recruited to join ISIS.
So while the case before the justices may seem largely theoretical, it is really fundamental to our daily lives and to the role the internet plays in society. As Farid told me, "You can say, 'Look, this isn't our problem. The internet is the internet. It reflects the world' ... I reject that idea." But recommendation systems organize the internet. Could we really live without them?
What do you think about the upcoming Supreme Court case? Have you personally experienced the dark side of content recommendation algorithms? I want to hear from you! Write to me: tate.ryan-mosley@technologyreview.com.
What else I'm reading
The devastation in Turkey and Syria from the 7.8 magnitude earthquake on Monday is overwhelming, with the death toll swelling to over 20,000 people.
- I recommend reading this inspiring story by Robyn Huang in Wired about the massive effort by software engineers to aid in rescue efforts. By the day after the quake, 15,000 tech professionals had volunteered with the "Earthquake Help Project." Led by Turkish entrepreneurs Furkan Kilic and Eser Ozvataf, the project is building applications to help locate people who remain trapped and in distress, as well as to distribute aid. One of its first contributions was a heat map of survivors in need, created by scraping social media for calls for help and geolocating them.
The spy balloon, of course! Details keep coming out about the massive Chinese balloon that the US shot down last weekend.
- The US says the balloon was conducting "electronic signals intelligence" using antennas, meaning it was monitoring communications, which could include locating and collecting data from devices like mobile phones and radios. We still don't have many details about what exactly was being surveilled and how, but as the US continues to gather the balloon's remnants, we may learn more.
- The incident is another escalation in the already fraught relationship between the world's two most powerful countries, and part of a new technological cold war.
- This bit from Biden's State of the Union address is particularly timely: "But I will make no apologies that we're investing to make America stronger. Investing in American innovation, in industries that will define the future, that China intends to be dominating. Investing in our alliances and working with our allies to protect advanced technologies so they will not be used against us."
Speaking of the State of the Union, Biden called out Big Tech several times, offering the clearest signal yet that there will be increased activity around tech policy, one of the few areas with potential for bipartisan agreement in the newly divided Congress.
- In addition to pushing for more movement on antitrust efforts to break up tech monopolies, Biden talked about protecting digital privacy for young people, restricting targeted advertising, and curbing the use of personal data, a rare line that drew a standing ovation on both sides of the aisle.
- None of this means we are close to passing a federal online privacy bill. Thanks, Congress.
There's a massive knowledge gap around online data privacy in the US. Most Americans don't understand the basics of online data and what companies are doing with it, according to a new study of 2,000 Americans from the Annenberg School for Communication at the University of Pennsylvania, even though 80% of those surveyed agree that what companies know about them from their online behaviors can harm them.
Researchers asked 17 questions to gauge what people know about online data practices. If it were a test, the majority of people would have failed: 77% of respondents got fewer than 10 questions correct.
- Only about 30% of those surveyed know it is legal for an online store to charge people different prices depending on location.
- More than 8 in 10 participants incorrectly believe that the federal Health Insurance Portability and Accountability Act (HIPAA) stops health apps (like exercise or fertility trackers) from selling data to marketers.
- Fewer than half of Americans know that Facebook's user privacy settings allow users to control how their own personal information is shared with advertisers.
The TL;DR: Even if US regulators increased requirements for tech companies to get explicit consent from users for data sharing and collection, many Americans are ill-equipped to provide that consent.