
Another Ridiculous Lawsuit Hopes To Hold Social Media Companies Responsible For Terrorist Attacks

by Tim Cushing, from Techdirt on (#34Z4J)

Yet another lawsuit has been filed against social media companies hoping to hold them responsible for terrorist acts. The family of an American victim of a terrorist attack in Europe is suing Twitter, Facebook, and Google for providing material support to terrorists. [h/t Eric Goldman]

The lawsuit [PDF] is long and detailed, describing the rise of ISIS and the terrorist group's use of social media. It may be an interesting history lesson, but it's all meant to steer judges towards finding violations of anti-terrorism laws rather than recognizing the obvious immunity granted to third-party platforms by Section 230.

When it does finally get around to discussing the issue, the complaint from 1-800-LAW-FIRM (not its first Twitter terrorism rodeo) attacks immunity from an unsurprising angle. The suit attempts to portray the placement of ads on alleged terrorist content as somehow being equivalent to Google, Twitter, et al creating the terrorist content themselves.

When individuals look at a page on one of Defendants' sites that contains postings and advertisements, that configuration has been created by Defendants. In other words, a viewer does not simply see a posting; nor does the viewer see just an advertisement. Defendants create a composite page of content from multiple sources.

Defendants create this page by selecting which advertisement to match with the content on the page. This selection is done by Defendants' proprietary algorithms that select the advertisement based on information about the viewer and the content being [viewed]. Thus there is a content triangle matching the postings, advertisements, and viewers.

Although Defendants have not created the posting, nor have they created the advertisement, Defendants have created new unique content by choosing which advertisement to combine with the posting with knowledge about the viewer.

Thus, Defendants' active involvement in combining certain advertisements with certain postings for specific viewers means that Defendants are not simply passing along content created by third parties; rather, Defendants have incorporated ISIS postings along with advertisements matched to the viewer to create new content for which Defendants earn revenue, and thus providing material support to ISIS.
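The "content triangle" the complaint describes is just ordinary ad targeting. A toy sketch of the idea, with every name and scoring rule here a hypothetical stand-in (the actual platforms' algorithms are proprietary and far more complex): score each available ad by its keyword overlap with the post and with the viewer's interests, then serve the highest-scoring one.

```python
from dataclasses import dataclass


@dataclass
class Ad:
    advertiser: str
    keywords: set


def pick_ad(post_keywords, viewer_interests, inventory):
    """Score each ad by keyword overlap with both the post and the viewer,
    and return the best match -- the complaint's 'content triangle' in toy form."""
    def score(ad):
        return len(ad.keywords & post_keywords) + len(ad.keywords & viewer_interests)
    return max(inventory, key=score)


inventory = [
    Ad("shoes_co", {"running", "shoes"}),
    Ad("news_co", {"politics", "world"}),
]
best = pick_ad({"world", "politics"}, {"news"}, inventory)
print(best.advertiser)  # news_co
```

Note that nothing in this matching step authors the post or the ad; it only decides which pre-existing pieces of third-party content appear together, which is why the "new content" framing runs headlong into Section 230.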

This argument isn't going to be enough to bypass Section 230 immunity. According to the law, the only thing social media companies are responsible for is the content of the ads they place. That they're placed next to alleged terrorist content may be unseemly, but it's not enough to hurdle Section 230 protections. Whatever moderation these companies engage in does not undercut these protections, even when their moderation efforts fail to weed out all terrorist content.

The lawsuit then moves on to making conclusory statements about these companies' efforts to moderate content, starting with an assertion not backed by the text of the filing.

Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network.

Following this sweeping assertion, two (2) tech experts are cited, both of whom appear to be only speaking for themselves. More assertions follow, with 1-800-LAW-FIRM drawing its own conclusions about how "easy" it would be for social media companies with millions of users to block the creation of terrorism-linked accounts [but how, if nothing is known of the content of posts until after the account is created?] and to eliminate terrorist content as soon as it goes live.

The complaint then provides an apparently infallible plan for preventing the creation of "terrorist" accounts. Noting the incremental numbering used by accounts repeatedly banned/deleted by Twitter, the complaint offers this "solution."

What the above example clearly demonstrates is that there is a pattern that is easily detectable without reference to the content. As such, a content-neutral algorithm could be easily developed that would prohibit the above behavior. First, there is a text prefix to the username that contains a numerical suffix. When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.

Prohibiting this conduct would be simple for Defendants to implement and not impinge upon the utility of Defendants sites. There is no legitimate purpose for allowing the use of fixed prefix/incremental numerical suffix name.

Take a long, hard look at that last sentence. This is the sort of assertion someone makes when they clearly don't understand the subject matter. There are plenty of "legitimate purposes" for appending incremental numerical suffixes to social media handles. By doing this, multiple users can have the same preferred handle while allowing the system (and the users' friends/followers) to differentiate between similarly-named accounts. Everyone who isn't the first person to claim a certain handle knows the pain of being second... third... one-thousand-three-hundred-sixty-seventh in line. While this nomenclature process may allow terrorists to easily reclaim followers after account deletion, there are plenty of non-ominous reasons for allowing incremental suffixes.
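The complaint is right that the pattern is trivial to detect; what it misses is that the same rule flags ordinary users. A minimal sketch of the proposed "content-neutral algorithm" (all handles here are invented examples) makes the false-positive problem obvious:

```python
import re

# A handle is a text root followed by an optional numeric suffix,
# e.g. "user42" -> ("user", 42).
SUFFIX = re.compile(r"^(.*?)(\d+)$")


def split_handle(handle):
    """Split a handle into (root, numeric suffix); suffix is None if absent."""
    m = SUFFIX.match(handle)
    if not m:
        return handle, None
    return m.group(1), int(m.group(2))


def is_incremented(new_handle, banned_handles):
    """The complaint's proposed rule: block any new handle whose root matches
    a previously banned handle and whose numeric suffix is one higher."""
    root, n = split_handle(new_handle)
    if n is None:
        return False
    for banned in banned_handles:
        b_root, b_n = split_handle(banned)
        if b_root == root and b_n is not None and n == b_n + 1:
            return True
    return False


# Catches the re-created banned account...
print(is_incremented("badactor_43", {"badactor_42"}))  # True
# ...but also blocks an innocent newcomer picking the next free handle.
print(is_incremented("johnsmith2", {"johnsmith1"}))  # True
```

The rule works exactly as advertised, which is the problem: it can't distinguish a banned account returning from an unrelated user whose preferred name was already taken, precisely because it operates "without reference to the content."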

That's indicative of the lawsuit's mindset: terrorist attacks are the fault of social media platforms because they've "allowed" terrorists to communicate. But that's completely the wrong party to hold responsible. Terrorist attacks are performed by terrorists, not social media companies, no matter how many ads have been placed around content litigants view as promoting terrorism.

Finally, the lawsuit sums it all up thusly: Monitoring content is easy -- therefore, any perceived lack of moderation is tantamount to direct support of terrorist activity.

Because the suspicious activity used by ISIS and other nefarious organizations engaged in illegal activities is easily detectable and preventable and that Defendants are fully aware that these organizations are using their networks to engage in illegal activity demonstrates that Defendants are acting knowingly and recklessly allowing such illegal conduct.

Unbelievably, the lawsuit continues from there, going past its "material support" Section 230 dodge to add claims of wrongful death it tries to directly link to Twitter, et al's allegedly inadequate content moderation.

The conduct of each Defendant was a direct, foreseeable and proximate cause of the wrongful deaths of Plaintiffs' Decedent and therefore the Defendants' are liable to Plaintiffs for their wrongful deaths.

This is probably the worst "Twitter terrorism" lawsuit filed yet, but quite possibly exactly what you would expect from a law firm with a history of stupid social media lawsuits and a phone number for a name.


