
New Yorker’s ‘Social Media Is Killing Kids’ Article Waits 71 Paragraphs To Admit Evidence Doesn’t Support The Premise

by Mike Masnick, from Techdirt on (#6R696)

These days, there's a formula for articles pushing the unproven claims of harm from social media. Start with examples of kids harming themselves, and insist (without evidence) that, but for social media, it wouldn't have happened. Throw some shade at Section 230 (while misrepresenting it). Toss out some policy suggestions without grappling with what those policy suggestions would actually mean in practice, and never once deal with the actual underlying issues regarding mental health.

It's become so easy. And so wrong. But it fits the narrative.

I enjoy Andrew Solomon's writing and especially found his book, Far From the Tree, an exceptional read. So, when I saw that he had written a big story for the New Yorker on social media and teens, I had hoped that it would approach the subject in a manner that laid out the actual nuances, trade-offs, and challenges, rather than falling for the easy moral panic tropes.

Unfortunately, it fails woefully in that endeavor, and in the process gets a bunch of basic facts wrong. For all of the New Yorker's reputation for its great fact-checking efforts, in the few stories where I've been close enough to know what's going on, the fact-checking has been... terrible.

The whole article is somewhere around 10,000 words long, so I'm not going to go through all of the many problems with it. Instead, I will highlight some major concerns with the entire piece.

The typical moral panic tropes

The article follows the typical trope of many similar articles by framing it around a series of truly tragic and absolutely heart-wrenching stories of teenagers who died by suicide. These are, absolutely, devastating stories. But the supposed "connection" to social media is never actually established.

Indeed, some of the profiles reminded me of my own friend from high school, who took his own life in an era before social media. My friend had written notes, which came out later, about his own feelings of inadequacy, depression, and loneliness that sound similar to what's in the article.

It is undeniable that we do not do enough to help everyone improve their mental health. We are especially terrible at teaching young people the nature of how much everyone's mental health matters, and how it can surprise and subvert you.

But the vast majority of the article is just tragic story after tragic story, each followed by an "and then we found out they used Instagram/TikTok/etc., where they (1) saw some bad content or (2) expressed themselves in ways that vaguely suggested they were unhappy."

Again, though, none of that suggests a causal relationship, and the real issues are much more complex. Nearly everyone uses social media today. Teenagers writing angsty self-pitying works is... part of being a teenager. As for social media sharing similar posts, well, that's also way more complicated. For some, seeing that others are having a rough time of it is actually helpful and makes people realize they're not alone. For others, it can be damaging.

The problem is that there are so many variables here that you can't say there's any single reasonable approach.

Take, for example, eating disorder content. Reading through some of it is absolutely awful and horrifying. But it's also nearly impossible to moderate. When platforms try to, teen girls very quickly work out code words and other language to get around any such moderation. Furthermore, researchers have found that when such content appears on major platforms (i.e., Instagram and TikTok), it often includes comments and responses from people trying to guide those engaged in such practices towards recovery resources. When the content moves to darker parts of the web, that is less likely.

That is to say, all of this is complicated.

But, Solomon barely touches on that. Indeed, he only kinda throws in a "well, who can really say" bit way, way down in the article, after multiple stories all painting the picture that social media causes kids to take their own lives.

Unless you read to the 71st paragraph (no kidding, I counted), you won't find out that the science on this doesn't really support the claims that social media is the driving force causing kids to be depressed. Here are the 71st and 72nd paragraphs of the piece, which few readers are likely to ever actually reach, after a whole bunch of stories of people dying by suicide and parents blaming social media:

Almost every time a suicide is mentioned, an explanation is offered: he was depressed; her mother was horrible; they lost all their money. It would be foolish to ignore contributing factors, but it is equally absurd to pretend that any suicide makes sense because of concrete woes. This is also true when it comes to social media. It is surely a factor in many of these deaths, and substantial regulatory interventions, long overdue, may bring down the suicide rate in some populations, especially the young. Nonetheless, research has failed to demonstrate any definite causal link between rising social-media use and rising depression and suicide. The American Psychological Association has asserted that "using social media is not inherently beneficial or harmful to young people," and a community of scientists, many of them outside the United States, has published research underscoring the absence of a clear link. Gina Neff, who heads a technology-research center at the University of Cambridge, told me, "Just because social media is the easy target doesn't make it the right target."

Andrew Przybylski, a psychologist at the University of Oxford, has observed that decreasing life satisfaction among youths between the ages of ten and twenty usually led to an increase in social-media use. "But the opposite isn't necessarily true," he writes. "In most groups, the more time a child spends on social media doesn't mean their life satisfaction will decrease." Working with Amy Orben, a Cambridge psychologist, Przybylski has also noted that lower life satisfaction correlates slightly more strongly with wearing glasses than with digital-technology use.

Solomon gives a few paragraphs to the researchers saying, "hey, this is more complicated and the evidence doesn't really support the narrative," including a few paragraphs about how social media and smartphones can actually be super helpful to some kids:

Smartphones can save lives in other ways. Randy P. Auerbach, a clinical psychologist at Columbia University, has been using phone tracking to monitor suicide risk. He measures changes in sleep patterns (many teens look at their phones right before and after sleep), changes in movement (depressed people move less), and changes in vocabulary and punctuation (people in despair start using personal pronouns more often). Matthew Nock, a clinical psychologist at Harvard and a MacArthur Fellow, has been examining the relationship between text-message frequency and mental vulnerability. Most suicides today, he said, "come at the end of a trail of digital bread crumbs." Young people who are not responding to their peers-or whose messages no longer receive responses-may be in trouble. Nock's research team uses cell-phone tracking to determine when people are at highest risk and calls or messages them. "We haven't equipped the field with the tools to find, predict, and prevent suicide, in the way we've done for other medical problems," he said. "We just haven't developed the tools, other than to ask people, 'How are you doing? Are you hopeless? Are you depressed? Do you think you're going to kill yourself?,' which is not a very accurate predictor. We should be taking advantage of advances in computing and statistics to do what human brains can't."
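
To make the "digital bread crumbs" idea a bit more concrete, here's a minimal, purely illustrative sketch (my own hypothetical, not code from Auerbach's or Nock's actual systems) of the kinds of simple signals being described: a shift toward first-person pronouns in recent messages and long gaps between outgoing messages.

```python
# Purely hypothetical illustration of the "digital bread crumbs" signals
# described above -- not the researchers' actual methods or thresholds.
from datetime import datetime, timedelta
from typing import List

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def pronoun_rate(messages: List[str]) -> float:
    """Share of words that are first-person pronouns across recent messages."""
    words = [w.strip(".,!?;:'\"").lower() for m in messages for w in m.split()]
    return sum(w in FIRST_PERSON for w in words) / len(words) if words else 0.0

def longest_silence(sent_at: List[datetime]) -> timedelta:
    """Longest gap between consecutive outgoing messages."""
    if len(sent_at) < 2:
        return timedelta(0)
    ordered = sorted(sent_at)
    return max(b - a for a, b in zip(ordered, ordered[1:]))

def flag_for_followup(messages: List[str], sent_at: List[datetime]) -> bool:
    """Toy rule with made-up thresholds: flag when both signals look worrying."""
    return pronoun_rate(messages) > 0.08 and longest_silence(sent_at) > timedelta(days=3)
```

Even this toy version hints at why the blame-first framing matters: a platform that runs anything like this and misses a case invites exactly the liability concern Nock describes below, which is a strong incentive not to run it at all.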

He even notes that everyone blaming social media companies for these things can actually make it much harder to then use the technology to put in place interventions that actually have been shown to help:

Meta has programs to pick up troubling posts and has notified emergency services about them. But Nock points out that social-media companies may be afraid that, if they put systems in place and those systems fail, their liability could be enormous.

But literally two paragraphs later, the article is back to blaming social media companies for the grief families feel for kids who died.

There are interesting, thoughtful, nuanced stories to tell about all of this that explain the real tradeoffs, the ways in which kids are different from one another, and how they can deal with different issues in different ways. There are stories to be told (like that bit quoted above) about how companies are being pushed away from doing useful interventions because of the moral panic narrative.

But Solomon does... none of that?

The policy recommendations

This part was equally frustrating. Solomon does discuss policy, but in the most superficial (and often misleading, if not outright incorrect) way. Of course, there's a discussion about Section 230, but it's... wrong?

In most industries, companies can be held responsible for the harm they cause and are subject to regulatory safety requirements. Social-media companies are protected by Section 230 of the 1996 Communications Decency Act, which limits their responsibility for online content created by third-party users. Without this provision, Web sites that allow readers to post comments might be liable for untrue or abusive statements. Although Section 230 allows companies to remove "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" material, it does not oblige them to. Gretchen Peters, the executive director of the Alliance to Counter Crime Online, noted that, after a panel flew off a Boeing 737 max 9, in January, 2024, the F.A.A. grounded nearly two hundred planes. "Yet children keep dying because of Instagram, Snapchat, and TikTok," she said, "and there is hardly any response from the companies, our government, the courts, or the public."

So, that first paragraph claims that social media companies can't be "held responsible for the harm they cause," but again, that's wrong. The problem is that it's not clear in these cases what is actually "causing" the harm. The research, again, does not support the claim that it's all from social media. And even if you argue that the content on social media is contributing, then again, is it the social media app itself, or the content? How do you disambiguate that in any useful manner?

For years, before the internet, people would blame fashion and celebrity magazines for making young girls have an unhealthy image of what women should look like. But we didn't have giant articles in the Condé Nast-owned New Yorker bemoaning the First Amendment for allowing Condé Nast to publish titles like Vogue, Glamour, Jane, Mademoiselle, and others.

Yet here, Solomon seems to think, wrongly, that the main issue is Section 230, rather than the lack of actual traceability. Indeed, while he mentions some lawsuits challenging Section 230, he leaves out how the courts have struggled with that lack of traceability from the platform itself to the harm alleged. And that's kinda important?

It's the sort of thoughtful nuance you would hope a publication like the New Yorker would engage in, but it doesn't here.

Its description of Section 230 is also just super confused.

One may sue an author or a publisher for libel, but usually not a bookstore. The question surrounding Section 230 is whether a Web site is a publisher or a bookstore. In 1996, it seemed fairly clear that interactive platforms such as CompuServe or Usenet corresponded to bookstores. Modern social-media companies, however, in recommending content to users, arguably function as both bookstore and publisher-making Section 230 feel as distant from today's digital reality as copyright law does from the Iliad.

The bookstore/publisher distinction is a weird twist on the typical wrong claim of "platform/publisher," but it's no less wrong.

And here's where it would help if the New Yorker's fact-checkers did, well, any research. They could have read Jeff Kosseff's book on Section 230, which even starts off by explaining an early lawsuit that involved a bookstore. There is no question as to whether a website is more like a "bookstore" or a "publisher." The whole point of Section 230 is that a website isn't held liable for third party content even as its publisher.

That's the part everyone misses, and the part that Solomon's piece confuses readers about.

As for the claim that Section 230 "feels distant from today's digital reality," even the authors of Section 230 have called bullshit on the claim. You'd think maybe the fact-checkers at the New Yorker might have asked them? Here are Ron Wyden and Chris Cox disputing the claim that the internet is somehow so different that 230 no longer applies.

Section 230, originally named the Internet Freedom and Family Empowerment Act, H.R. 1978, was designed to address the obviously growing problem of individual web portals being overwhelmed with user-created content. This is not a problem the internet will ever grow out of; as internet usage and content creation continue to grow, the problem grows ever bigger. Far from wishing to offer protection to an infant industry, our legislative aim was to recognize the sheer implausibility of requiring each website to monitor all of the user-created content that crossed its portal each day.

Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let's make sure that every internet user has the opportunity to exercise their First Amendment rights; and let's deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.

The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230's protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today's environment.

But what would they know?

The failure to understand the policy issues goes deeper than misunderstanding Section 230.

The article effectively endorses the concept of a "duty of care" for social media services. This is the kind of solution lots of people who have never dealt with the nuances or tradeoffs think is clever, and which everyone who has any responsibility for an internet service knows is one of the dumbest ideas imaginable.

The commission could require companies to develop and adhere to meaningful content-moderation policies, and would establish a "duty of care" to users, obliging the companies to take reasonable measures to avoid foreseeable harm. Once a duty of care is legally established-stipulating, for instance, that landlords are responsible for fire safety and the removal of lead paint in properties they own-it becomes possible to sue for negligence.

For social-media companies, forestalling a duty of care is vital, as became clear last fall, when they sought to have much of the case against them dismissed in an ongoing federal litigation involving a panoply of plaintiffs. The presiding judge, Yvonne Gonzalez Rogers, grilled a TikTok attorney about whether the company had a duty to design a safe product. When he eventually said that, legally speaking, they didn't, she said, "Let me write it down. No duty to design a platform in a safe way. That's what you said." There are questions of law and questions of decency, and even those who skirt the outer limits of the law attempt to keep up an appearance of probity. Here such a facade was not maintained.

The second paragraph's "gotcha" is frustrating because it's so stupid. Of course the law doesn't require a duty of care, nor could it do so effectively, because the problems are speech. This is the point that Solomon fails to grapple with.

As we discussed above, with almost every kind of "bad content," the reality is way more complicated than most people believe. With "eating disorder" content, removing it made eating disorders worse. With "terrorist content," some of the very first content taken down was from a human rights group tracking war crimes. There are studies detailing similar difficulties in dealing with suicide ideation, the very topic that Solomon centers the article on. Efforts to remove such content haven't always been effective and sometimes target people expressing distress, taking away their voices in ways that could be dangerous.

All it really does is sweep content under the rug, brushing it into darker places. That's not dealing with the root causes of depression and mental challenges. It's saying "we don't talk about that."

Because the duty of care only serves to put potential liability on websites, they will take the lazy way out and remove all discussion of eating disorders, even if it's content to help with recovery. They will remove all discussion of self-harm, even if it's guiding people towards helpful resources.

It's sweeping the real problems under the rug. Why? To make elites feel satisfied that they're dealing with the problem of "kids on social media." But it won't actually help those kids. It just takes away the resources that would help many of them.

If anyone ever recommends a "duty of care," ask them to explain how that actually works. They can't answer. Because it doesn't work. It's a magic potion pushed by those who don't understand how things work, telling tech companies "magically fix societal issues around mental health that we've never done shit to actually deal with, or we'll fine you."

It's not a solution. It's a way to make the elite New Yorker reader feel better about sweeping real problems under the rug.

A serious analysis by people who understand this shit would grapple with those problems.

Solomon and the New Yorker did the opposite. They took the easy way out. The way that leads to more harm and less understanding.

But I'm sure that we'll be seeing this article cited, repeatedly, as evidence as to why these ignorant policies must be put in place.
