
Why Meta is getting sued over its beauty filters

by
Tate Ryan-Mosley
from MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Dozens of states sued Meta on October 24, claiming that the company knowingly harms young users. The case is a pretty big deal and will almost certainly have a sweeping impact on the national debate about child safety online, a topic regular readers know I've covered quite a bit. It could lead to policy and platform changes, and it is also poised to stress-test existing privacy law that protects minors' data.

Some of its core allegations are that Meta misleads young users about safety features and the pervasiveness of harmful content on its platforms, and that it harvests their data and violates federal laws on children's privacy and consumer protection. The case grew out of an investigation triggered largely by Frances Haugen's whistleblowing in 2021, which revealed damning evidence that Meta knew Instagram has detrimental effects on the mental health of young girls.

Interestingly, Haugen's testimony revealed some of the best concrete evidence we seem to have about how kids have been specifically harmed by social media. In fact, the evidence (or lack of it) for the impact of social media on kids could become a flash point in the debate over online safety, and it's possible that this court case will bring critical new findings to light. But we will have to wait and see; the filing was heavily redacted.

(If you want to know more, there's been a lot of good writing about the case in recent days. I'd recommend reading Casey Newton's newsletter to understand the evidence we do have and why this case matters.)

This leads me to what I want to focus on today: allegations of harm created by visual filters on Instagram. As you probably know, one key aspect of the platform is filters that can easily be added to photos. The features range from basic editing tools to sophisticated real-time virtual-reality computer vision. I've written about the impact of beauty filters in the past, and once I even asked an AI how beautiful I am. As I found then, their impact on kids hasn't been well established through causal research, but we do have lots of anecdotal and correlative data, such as a survey of 200 teens in which 61% of respondents said beauty filters made them feel worse about themselves.

The case against Meta specifically calls out visual tools "known to promote body dysmorphia" as one of the "psychologically manipulative platform features designed to maximize young users' time spent on its social media platforms." It also says that Meta was aware that "young users' developing brains are particularly vulnerable to certain forms of manipulation, and it chose to exploit those vulnerabilities through targeted features," like filters.

To better understand what's going on here and what to expect from the case, I called up someone I consider a real expert: Jessica DeFino, a reporter and cultural critic who focuses on how beauty culture, online and otherwise, affects individuals. (She also writes a searing and worthwhile Substack newsletter called The Unpublishable.) Here's some of our conversation, which has been edited for length and clarity.

The case asserts that there is evidence, a lot of which appears to be redacted, showing that Meta and its products exploit young users and that the company is aware of the effect of its products on the mental health of young people, specifically teenage girls. What do you know about the evidence?

From the standpoint of psychological health, there are definitely studies and surveys that show that teen girls specifically, but also women across the age spectrum, are experiencing higher instances of appearance-related anxiety, depression, body dysmorphia, facial dysmorphia, obsessive beauty behaviors, disordered eating, self-harm, and even suicide. And there's a lot that can be traced back in one way or another to this increasingly visual virtual world that we exist in.

On the more material side of things, there is a lot happening in the beauty industry that has been specifically inspired by Instagram. I think a really great example of this is the phenomenon of Instagram face, which is basically a term that's been coined to describe the way that Instagram filters have inspired real-world procedures and surgeries.

I have interviewed a lot of cosmetic surgeons and cosmetic injectors over the years who tell me that patients use the filters and photo-editing tools that are really popular on Instagram, but maybe not owned by Meta or Instagram, to alter their own images and bring that in to a plastic surgeon or an injector consult and say, "This is what I wanna look like."

I tell this story all the time because it was just so shocking to me and such a strong example of what's happening in the medical world in response to Instagram filters: I was interviewing this cosmetic injector, a doctor and dermatologist named Anna Guanche, at an event hosted by Allergan, the makers of Botox Cosmetic, with a small group of journalists.

She said, "One of the biggest things I tell my patients is, 'You want to look more like your filtered photos; what can we do to make you look more like them, so people don't see you in real life and go, what?'"

So that is a medical opinion that's being given by an actual doctor to clients. And of course, all of these behaviors and the surgeries that are being performed in response to Instagram filters come with a huge host of potential side effects and risks, including deaths.

One thing that was specifically named in the case is that Meta promotes platform features such as visual filters known to promote eating disorders and body dysmorphia in youth. Do we know that this is true?

We do know that this is true, I would say, and it's true because these platforms are engineered by people, and that these biases exist in people is very well documented. There are very well-documented cases of these biases popping up in some of the filter technology.

For instance, filters that are literally called "beauty filters" will automatically give somebody a smaller nose, slightly lighten and brighten their skin, and widen their eyes. These are all beauty preferences that are passed down from systems of patriarchy, white supremacy, colonialism, and capitalism that end up in our lives, in our systems, in our corporations, and in our engineers and the filters that they create.
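[Ed. note: For a concrete sense of what adjustments like "lighten and brighten" involve, here is a minimal, purely illustrative Python sketch using the Pillow imaging library. It is a toy stand-in, not Meta's code, and the function and file names are hypothetical; real beauty filters also detect facial landmarks and warp the image mesh to slim noses and widen eyes, which is far more involved.]

    # Toy "beauty filter": lightens the image and softens texture.
    # Illustrative only; not how Instagram's filters are implemented.
    from PIL import Image, ImageEnhance, ImageFilter

    def toy_beauty_filter(path, out_path):
        img = Image.open(path).convert("RGB")
        img = ImageEnhance.Brightness(img).enhance(1.15)  # lighten
        img = ImageEnhance.Contrast(img).enhance(0.95)    # soften contrast
        # Blend with a blurred copy to smooth out skin texture.
        blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
        Image.blend(img, blurred, alpha=0.3).save(out_path)

    # toy_beauty_filter("selfie.jpg", "filtered.jpg")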

These issues are often talked about in the context of women and teen girls being insecure about their bodies, rather than framed as the product of untested, mass-deployed, sophisticated consumer-facing augmented-reality tech. Have you seen that dynamic play out?

Issues [that affect] teen girls have culturally, historically, been swept under the rug and dismissed. Things like beauty are seen as frivolous interests. And if they're dismissed, we end up not getting enough studies, enough data about the harms of beauty culture, when in reality there are these huge and harmful cultural implications.

It recently came out that period products have never been scientifically tested using blood, and periods have been around since the beginning of time. If periods, which have affected teen girls and women for literal millennia, are understudied, it does not surprise me that this relatively new phenomenon of beauty filters and beauty standards affecting the mental health of teen girls does not have a robust set of data yet.

What groups profit off the disconnect between the beauty ideal and real life?

It's a very capitalist ideal. The further away the standard is from the human body, the more products and procedures and surgeries need to be bought in order to meet that ideal. And so corporations benefit and the tech industry benefits.

One concrete example is how Instagram face has financially benefited Instagram. The Instagram-face phenomenon sort of came about in an earlier iteration of Instagram, when it was primarily a social media platform. A couple of years later, Instagram transitioned into a social shopping platform. It put a huge emphasis on shopping; there was a shopping tab. At that point, not only was it distorting users' perception of beauty, but it was also selling them everything they need to distort their bodies to match, and taking a cut of all of those sales.

What in particular are you going to be watching as this lawsuit develops?

I would absolutely love to see more hard data on beauty standards and beauty culture, on how social photo-editing technologies are contributing to them, and on their psychological impact.

But I'm more interested right now in the human behavior aspect of it. As evidence does come out in this lawsuit and we begin to see, "Oh, these technologies that we've become obsessed with to an unhealthy degree are actually detrimental to our lives and our well-being in a lot of ways," I'm interested to see if people will, one, want to opt out of them and, two, be able to opt out of them. I'm interested to see if this lawsuit being more publicized will inspire more people to consider non-social-media lives.

As far as other potential regulations, there was one study done in the UK [that looked at] whether disclaimers on Photoshopped images had a positive effect on people who were viewing these advertisements. [Ed. note: Similar labels have been suggested as a regulatory response to social media harms.] And what it found is that in a lot of cases, the disclaimer that an image had been Photoshopped actually made people feel worse, because knowing that an image has been doctored and that the look isn't physically possible doesn't actually lessen the pressure society places on young women to look as perfect as possible. So I am very skeptical of any potential regulation that would require disclosure of filters or Photoshop or any sort of photo editing.

What else I'm reading
  • California suspended Cruise robotaxis from operating in the state. State regulators cited safety concerns with the driverless-car technology after a slew of incidents over the past few weeks, including an accident that trapped a pedestrian (who was hit by another car) underneath the robotaxi, and the company's alleged attempts to withhold crash footage.
  • There are still a lot of AI meetings happening among lawmakers, including the US Congress's second AI Insight Forum on October 24 and the UK's AI Safety Summit commencing on November 1. I'm excited to see whether these meetings lead to tangible results. I'll also be watching for a forthcoming executive order on AI from President Biden.
  • Apropos of nothing: I was engrossed by this story by Chiara Dello Joio in the Atlantic about humans who have cloned their pets and how their relationship with the cloned animal differs from their relationship with the original.
What I learned this week

A new tool called Nightshade can "poison" training data for image-generating AI models and help artists fight back against copyright violations, my colleague Melissa Heikkilä reported. "A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it's scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways," she writes. Melissa's story comes from an exclusive preview of research out of the University of Chicago that has been submitted for peer review.
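To build intuition for those "invisible changes to the pixels," here is a minimal Python sketch of a bounded pixel perturbation, the general family of techniques Nightshade belongs to. To be clear, this is not Nightshade's actual algorithm, which computes optimized, targeted perturbations against text-to-image models; bounded random noise merely shows how an image can carry data the eye can't see. It assumes Pillow and NumPy, and the names are hypothetical.

    # Illustrative only: adds bounded random noise (+/- epsilon per channel)
    # that is imperceptible to a viewer but alters the pixel data a model
    # would train on. Nightshade optimizes its perturbations instead.
    import numpy as np
    from PIL import Image

    def perturb(path, out_path, epsilon=3):
        arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
        noise = np.random.randint(-epsilon, epsilon + 1, size=arr.shape)
        poisoned = np.clip(arr + noise, 0, 255).astype(np.uint8)
        # Save losslessly (e.g., PNG); JPEG compression would distort the noise.
        Image.fromarray(poisoned).save(out_path)

    # perturb("artwork.png", "artwork_perturbed.png")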
