
California Court, Ridiculously, Allows School Lawsuits Against Social Media To Move Forward

by
Mike Masnick
from Techdirt

Over the last year, we've covered a whole bunch of truly ridiculous, vexatious, bullshit lawsuits filed by school districts against social media companies, blaming them for the fact that the school boards don't know how to teach students (the one thing they're supposed to specialize in!) how to use the internet properly. Instead of realizing the school boards ought to fire themselves, some greedy ambulance-chasing lawyers have convinced them that if courts force social media companies to pay up, they'll never have a budget shortfall again. And school boards desperate for cash, and unwilling to admit their own failings as educators, have bought into the still unproven moral panic that social media is harming kids, despite widespread evidence that it's just not true.

While there are a bunch of these lawsuits, some in federal court and some in state courts, some of the California state court ones were rolled up into a single case, and on Friday, California state Judge Carolyn Kuhl (ridiculously) said that the case can move forward, and that the social media companies' 1st Amendment and Section 230 defenses don't apply (first reported by Bloomberg Law).

There is so much wrong with this decision, it's hard to know where to start, other than to note that one hopes a higher court takes some time to explain to Judge Kuhl how the 1st Amendment and Section 230 actually work. Because this is not it.

The court determines that Defendants' social media platforms are not "products" for purpose of product liability claims, but that Plaintiffs have adequately pled a cause of action for negligence that is not barred by federal immunity or by the First Amendment. Plaintiffs also have adequately pled a claim of fraudulent concealment against Defendant Meta.

As noted in that paragraph, the product liability claims fail, as the court at least finds that social media apps don't fit the classification of a "product" for product liability purposes.

Product liability doctrine is inappropriate for analyzing Defendants' responsibility for Plaintiffs' injuries for three reasons. First, Defendants' platforms are not tangible products and are not analogous to tangible products within the framework of product liability. Second, the "risk-benefit" analysis at the heart of determining whether liability for a product defect can be imposed is illusive in the context of a social media site because the necessary functionality of the product is not easily defined. Third, the interaction between Defendants and their customers is better conceptualized as a course of conduct implemented by Defendants through computer algorithms.

However, it does say that the negligence claims can move forward and are not barred by 230 or the 1st Amendment. A number of cases have been brought using this theory over the last few years, and nearly all of them have failed. Just recently we wrote about one such case against Amazon that failed on Section 230 grounds (though the court also made clear that even without 230 it would have failed).

But... the negligence argument the judge adopts is... crazy. It starts out by saying that the lack of age verification can show negligence:

In addition to maintaining "unreasonably dangerous features and algorithms," Defendants are alleged to have facilitated use of their platforms by youth under the age of 13 by adopting protocols that do not verify the age of users, and "facilitat[ed] unsupervised and/or hidden use of their respective platforms by youth" by allowing youth users to create multiple and private accounts and by offering features that allow youth users to "delete, hide, or mask their usage."

This seems kinda crazy to say when it comes less than a month after a federal court in California literally said that requiring age verification is a clear 1st Amendment violation.

The court invents, pretty much out of thin air, a "duty of care" for internet services. There have been many laws that have tried to create such a duty of care, but as we've explained at great length over the years, a duty of care regarding speech on social media is unconstitutional, as it will easily lead to over-blocking out of fear of liability. Even though the court recognized that internet services are not a product in the product liability sense (because that would make no sense), for the negligence claim it cited a case involving... electric scooters? Yup. Electric scooters.

In Hacala, the Court of Appeal held that defendant had a duty to use care when it made its products available for public use and one of those products harmed the plaintiff. The defendant provided "electric motorized scooters that could be rented through a downloadable app." (Id. at p. 311.) The app "allowed the defendant to monitor and locate its scooters and to determine if its scooters were properly parked and out of the pedestrian right-of-way." (Id., internal quotation marks and brackets omitted.) The defendant failed to locate and remove scooters that were parked in violation of the requirements set forth in the defendant's city permit, including those parked within 25 feet of a single pedestrian ramp. (Id.) The defendant also knew that, because the defendant had failed to place proper lighting on the scooters, the scooters would not be visible to pedestrians at night. (Id. at p. 312.) The court found that these allegations were a sufficient basis on which to find that the defendant owed a duty to members of the public like the plaintiff, who "tripped on the back wheel of one of the defendant's scooters when walking just after twilight." (Id. at p. 300.)

Here, Plaintiffs seek to hold Defendants liable for the way that Defendants manage their property, that is, for the way in which Defendants designed and operated their platforms for users like Plaintiffs. Plaintiffs allege that they were directly injured by Defendants' conduct in providing Plaintiffs with the use of Defendants' platforms. Because all persons are required to use ordinary care to prevent others from being injured as the result of their conduct, Defendants had a duty not to harm the users of Defendants' platforms through the design and/or operation of those platforms.

But, again, scooters are not speech. It is bizarre that the court refused to recognize that.

The social media companies also pointed out that the claims made by the school districts, about kids who ended up suffering from depression, anxiety, eating disorders, and more because of social media, can't be directly traced back to the social media companies. As the social media companies note, if a student goes to a school and suffers from depression, she can't sue the school for causing the depression. But, no, the judge says that there's "a close connection" between social media and the suffering (based on WHAT?!? she does not say).

Here, as previously discussed, there is a close connection between Defendants' management of their platforms and Plaintiffs' injuries. The Master Complaint is clear in stating that the use of each of Defendants' platforms leads to minors' addiction to those products, which, in turn, leads to mental and physical harms. (See, e.g., Mast. Compl., ¶¶ 80-95.) "These design features themselves are alleged to cause or contribute to (and, with respect to Plaintiffs, have caused and contributed to) [specified] injuries in young people...." (Mast. Compl., ¶ 96, internal footnotes omitted; see also Mast. Compl., ¶ 102 [alleging that Defendants' platforms "can have a detrimental effect on the psychological health of their users, including compulsive use, addiction, body dissatisfaction, anxiety, depression, and self-harming behaviors such as eating disorders"], internal quotation marks, brackets, and footnotes omitted.) Plaintiffs allege that the design features of each of the platforms at issue here cause these types of harms. (See, e.g., Mast. Compl., ¶¶ 268-337 (Meta); ¶¶ 484-487, 489-490 (Snap); ¶¶ 589-598 (ByteDance); ¶¶ 713-773, 803 (Google).) These allegations are sufficient under California's liberal pleading standard to adequately plead causation.

The court also says that if the platforms dispute the level to which they caused these harms, that's a matter of fact, to be dealt with by a jury.

Then we get to the Section 230 bit. The court bases much of its reasoning on Lemmon v. Snap. This is why we were yelling about the problems that Lemmon v. Snap would cause, even as we heard from many (including EFF?) who thought that the case was decided correctly. It's now become a vector for abuse, and we're seeing that here. If you just claim negligence, some courts, like this one, will let you get around Section 230.

As in Lemmon, Plaintiffs' claims based on the interactive operational features of Defendants' platforms do not seek to require that Defendants publish or de-publish third-party content that is posted on those platforms. The features themselves allegedly operate to addict and harm minor users of the platforms regardless of the particular third-party content viewed by the minor user. (See, e.g., Mast. Compl., ¶¶ 81, 84.) For example, the Master Complaint alleges that TikTok is designed with "continuous scrolling," a feature of the platform that "makes it hard for users to disengage from the app" (Mast. Compl., ¶ 567), and that minor users cannot disable the "auto-play function" so that a "flow-state" is induced in the minds of the minor users (Mast. Compl., ¶ 590). The Master Complaint also alleges that some Plaintiffs suffer sleep disturbances because Defendants' products, driven by IVR algorithms, "deprive users of sleep by sending push notifications and emails at night, prompting children to re-engage with the apps when they should be sleeping." (Mast. Compl., ¶ 107 [also noting that disturbed sleep "increases the risk of major depression and is associated with future suicidal behavior in adolescents"].)

Also similar to the allegations in Lemmon, the Master Complaint alleges harm from "filters" and "rewards" offered by Defendants. Plaintiffs allege, for example, that Defendants encourage minor users to create and post their own content using "appearance-altering tools provided by Defendants that promote unhealthy body image issues." (Mast. Compl., ¶ 194.) The Master Complaint alleges that some minors spend hours editing photographs they have taken of themselves using Defendants' tools. (See, e.g., Mast. Compl., ¶ 318.) The Master Complaint also alleges that Defendants use "rewards" to keep users checking the social media sites in ways that contribute to feelings of social pressure and anxiety. (See, e.g., Mast. Compl., ¶ 257 [social pressure not to "lose or break a Snap Streak"].)

There's also the fact that kids "secretly" used these apps without their parents knowing, but... it's not at all clear how that's the social media companies' fault. But the judge rolls with it.

Another aspect of Defendants' alleged lack of due care in the operation of their platforms is their facilitation of unsupervised or secret use by allowing minor users to create multiple and private accounts and allowing minor users to mask their usage. (Mast. Compl., ¶ 929(d), (e), (f).) Plaintiffs J.S. and D.S., the parents of minor Plaintiff L.J.S., allege that L.J.S. was able to secretly use Facebook and Instagram, that they would not have allowed use of those sites, and that L.J.S. developed an addiction to those social media sites which led to "a steady decline in his mental health, including sleep deprivation, anxiety, depression, and related mental and physical health harms." (J.S. SFC ¶¶ 7-8.)

Then, there's a really weird discussion about how Section 230 was designed to enable users to have more control over their online experiences, and therefore, the fact that users felt out of control means 230 doesn't apply? Along similar lines, the court notes that since the intent of 230 was to "remove disincentives" for creating tools for parents to filter the internet for their kids, the fact that parents couldn't control their kids online somehow goes against 230?

Similarly, Congress made no secret of its intent regarding parental supervision of minors' social media use. By enacting Section 230, Congress expressly sought to "remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict children's access to objectionable or inappropriate online material." (47 U.S.C. § 230, subd. (b)(4).) While in some instances there may be an apparent tension between Congress's goals of "promoting free speech while at the same time giving parents the tools to limit the material their children can access over the Internet" (Barrett, supra, 40 Cal.4th at p. 56), where a plaintiff seeks to impose liability for a provider's acts that diminish the effectiveness of parental supervision, and where the plaintiff does not challenge any act of the provider in publishing particular content, there is no tension between Congress's goals.

But that's wholly misunderstanding both the nature of Section 230 and what's going on here. Services shouldn't lose 230 protections just because kids are using services behind their parents' backs. That makes no sense. But, here, the judge seems to think it's compelling.

The judge also claims (totally incorrectly, based on nearly all of the case law) that if, as the social media companies argue, any harms from social media are due to third-party content (which would mean Section 230 protections apply), that's a matter for the jury.

Although Defendants argue they cannot be liable for their design features' ability to addict minor users and cause near constant engagement with Defendants' platforms because Defendants "create such engagement" with "user-generated content" (Defs' Dem., at p. 42, internal italics omitted), this argument is best understood as taking issue with the facts as pleaded in the Master Complaint. It may very well be that a jury would find that Plaintiffs were addicted to Defendants' platforms because of the third-party content posted thereon. But the Master Complaint nonetheless can be read to state the contrary: that it was the design of Defendants' platforms themselves that caused minor users to become addicted. To take another example, even though L.J.S. was viewing content of some kind on Facebook and Instagram, if he became addicted and lost sleep due to constant unsupervised use of the social media sites, and if Defendants facilitated L.J.S.'s addictive behavior and unsupervised use of their social media platforms (i.e., acted so as to maximize engagement to the point of addiction and to deter parental supervision), the negligence cause of action does not seek to impose liability for Defendants' publication decisions, but rather for their conduct that was intended to achieve this frequency of use and deter parental supervision. Section 230 does not shield Defendants from liability for the way in which their platforms actually operated.

But if that's the case, it completely wipes out the entire point of Section 230, which is to get these kinds of silly, vexatious cases dismissed early on, such that companies aren't constantly under threat of liability if they don't magically solve large societal problems.

From there, the court also rejects the 1st Amendment arguments. To get around them, the court repeatedly argues that the issue is the way that social media designed its services, and not the content on those services. But that's tap dancing around reality. When you dig into any of these claims, they're all, at their heart, entirely about the content.

It's not the infinite scroll" that is keeping people up at night. It's the content people see. It's not the lack of age verification that is making someone depressed. Assuming it's even related to the social media site, it's from the content. Ditto for eating disorders. When you look at the supposed harm, it always comes back to the content, but the judge dismisses all of that and says that the users are addicted to the platform, not the content on the platform.

Because the allegations in the Master Complaint can be read to state that Defendants' liability grows from the way their platforms functioned, the Demurrer cannot be sustained pursuant to the protections of the First Amendment. As Plaintiffs argue in their Opposition, the allegations can be read to state that Plaintiffs' harms were caused by their addiction to Defendants' platforms themselves, not simply to exposure to any particular content visible on those platforms. Therefore, Defendants here cannot be analogized to mere publishers of information. To put it another way, the design features of Defendants' platforms can best be analogized to the physical material of a book containing Shakespeare's sonnets, rather than to the sonnets themselves.

Defendants fail to demonstrate that the design features of Defendants' applications must be understood at the pleadings stage to be protected speech or expression. Indeed, throughout their Demurrer, Defendants make clear their position that Plaintiffs' claims are based on content created by third parties that was merely posted on Defendants' platforms. (See, e.g., Defs' Dem., at p. 49.) As discussed above, a trier of fact might find that Plaintiffs' harms resulted from the content to which they were exposed, but Plaintiffs' allegations to the contrary control at the pleading stage.

There are some other oddities in the ruling as well, including dismissing the citation to the NetChoice/CCIA victory in the 11th Circuit regarding Florida's social media moderation law, because the judge says that ruling doesn't apply here, since this lawsuit isn't about content moderation. She seems to think that the features on social media have nothing to do with content moderation, but that's just factually wrong.

There are a few more issues in the ruling, but those are the big parts of it. Now, it's true that this is just based on the initial complaints, and at this stage of the proceedings the judge has to rule assuming that everything pleaded by the plaintiffs is true, but the way it was done here almost entirely wipes out the point of Section 230 (not to mention the 1st Amendment).

Letting these cases move forward enables exactly what Section 230 was designed to prevent: creating massive liability and expensive litigation over choices regarding how a website publishes and presents content. The end result, if this is not overturned, is likely to be a large number of similar (and similarly vexatious) lawsuits that bury websites under potential liability. If each one has to go to a jury before it's decided, it's going to be a total mess.

The whole point of Section 230 was to have judges dismiss these cases early on. And here, the judge has gotten almost all of it backwards.
