
FTC Oversteps Authority, Demands Unconstitutional Age Verification & Moderation Rules

by
Mike Masnick
from Techdirt on (#6PG69)

Call me crazy, but I don't think it's okay to go beyond what the law allows, even in pursuit of "good" intentions. It is consistently frustrating how this FTC continues to push the boundaries of its own authority, even when the underlying intentions may be good. The latest example: in its order against a sketchy messaging app, the agency has demanded things it's not clear it can order.

This has been a frustrating trend with this FTC. Making sure the market is competitive is a good thing, but bringing weak and misguided cases makes a mockery of its antitrust power. Getting rid of non-competes is a good thing, but the FTC doesn't have the authority to do so.

Smacking down sketchy anonymous messaging apps preying on kids is also a good thing, but once again, the FTC seems to go too far. A few weeks ago, the FTC announced an order against NGL Labs, a very sketchy anonymous messaging app that was targeting kids and leading to bullying.

It certainly appears that the app was violating some COPPA rules on data collection for sites targeting kids. And it also appears that the app's founders were publicly misrepresenting aspects of the app, as well as hiding that users who paid were actually signing up for a recurring weekly subscription. So I have no issue with the FTC going after the company for those things. Those are the kinds of actions the FTC should be taking.

The FTC's description highlights at least some of the sketchiness behind the app:

After consumers downloaded the NGL app, they could share a link on their social media accounts urging their social media followers to respond to prompts such as "If you could change anything about me, what would it be?" Followers who clicked on this link were then taken to the NGL app, where they could write an anonymous message that would be sent to the consumer.

After failing to generate much interest in its app, NGL in 2022 began automatically sending consumers fake computer-generated messages that appeared to be from real people. When a consumer posted a prompt inviting anonymous messages, they would receive computer-generated fake messages such as "are you straight?" or "I know what you did." NGL used fake, computer-generated messages like these and others, such as messages regarding stalking, in an effort to trick consumers into believing that their friends and social media contacts were engaging with them through the NGL App.

When a user would receive a reply to a prompt, whether it was from a real consumer or a fake message, consumers saw advertising encouraging them to buy the NGL Pro service to find out the identity of the sender. The complaint alleges, however, that consumers who signed up for the service, which cost as much as $9.99 a week, did not receive the name of the sender. Instead, paying users only received useless "hints" such as the time the message was sent, whether the sender had an Android or iPhone device, and the sender's general location. NGL's bait-and-switch tactic prompted many consumers to complain, which NGL executives laughed off, dismissing such users as "suckers."

In addition, the complaint alleges that NGL violated the Restore Online Shoppers' Confidence Act by failing to adequately disclose and obtain consumers' consent for such recurring charges. Many users who signed up for NGL Pro were unaware that it was a recurring weekly charge, according to the complaint.

But just because the app was awful, the founders behind it were awful, and it seems clear they violated some laws, does not mean any and all remedies are open and appropriate.

And here, the FTC is pushing for some remedies that are likely unconstitutional. First off, the order requires age verification and the blocking of all kids under the age of 18:

  • Required to implement a neutral age gate that prevents new and current users from accessing the app if they indicate that they are under 18 and to delete all personal information that is associated with the user of any messaging app unless the user indicates they are over 13 or NGL's operators obtain parental consent to retain such data;

But, again, courts have repeatedly made clear that government-mandated age verification or age-gating on the internet is unconstitutional. The Supreme Court just agreed to hear yet another case on this point, which makes it a strange choice for the FTC to demand this here, knowing that the issue could end up before a hostile Supreme Court.

On top of that, as Elizabeth Nolan Brown points out at Reason, it appears that part of what the FTC is mad about regarding NGL is simply that it offered anonymous communications tools to kids, as if that were inherently harmful behavior that shouldn't be allowed:

"The anonymity provided by the app can facilitate rampant cyberbullying among teens, causing untold harm to our young people," Los Angeles District Attorney George Gascón said in a statement.

"NGL and its operators aggressively marketed its service to children and teens even though they were aware of the dangers of cyberbullying on anonymous messaging apps," the FTC said.

Of course, plenty of apps allow for anonymity. That this has the potential to lead to bullying can't be grounds for government action.

So, yes, I think the FTC can call out the COPPA violations and take action based on them, but I don't see how it can legitimately force the app to age gate at a time when multiple courts have already said the government cannot mandate such a thing. And it shouldn't be able to claim that anonymity itself is inherently problematic, especially at a time when studies often suggest the opposite for some kids who need their privacy.

The other problematic bit is that the FTC is mad that NGL may have overstated its content moderation abilities. The FTC seems to think that it can legally punish the company for not living up to the FTC's interpretation of NGL's moderation promises. From the complaint itself:

Defendants represent to the public the NGL App is safe for children and teens to use because Defendants utilize "world class AI content moderation" including "deep learning and rule-based character pattern-matching algorithms" in order to filter out "harmful language and bullying." Defendants further represent that they can "detect the semantic meaning of emojis, and [] pull[] specific examples of contextual emoji use" allowing them to "stay on trend, [] understand lingo, and [] know how to filter out harmful messages."

In reality, however, Defendants' representations are not true. Harmful language and bullying, including through the use of emojis, are commonplace in the NGL App, a fact of which Defendants have been made aware through numerous complaints from users and their parents. Media outlets have reported on these issues as well. For example, one media outlet found in its testing of the NGL App that the App's language filters allowed messages with "more routine bullying terms . . . including the phrases 'You're fat,' 'Everyone hates you,' 'You're a loser' and 'You're ugly.'" Another media outlet reported that it had found that "[t]hreatening messages with emojis that could be considered harmful like the knife and dagger icon were not blocked." Defendants reviewed several of these media articles, yet have continued to represent that the NGL App is "safe" for children and teens to use given the "world class AI content moderation" that they allegedly employ.

I recognize that some people may be sympathetic to the FTC here. It definitely looks like NGL misrepresented the power of its moderation efforts. But there have been many efforts by governments or angry users to sue companies whenever they feel those companies have not fully lived up to public marketing statements about their moderation.

People have sued companies like Facebook and Twitter for being banned, arguing that those companies' public statements about "free speech" meant that they shouldn't have been banned. How is this any different from that?

And the FTC's claim here, that if you promise your app is "safe" and someone can then find "harmful language and bullying" on the platform you've violated the law, flies in the face of everything we just heard from the Supreme Court in the Moody case.

The FTC doesn't get to be the final arbiter of whether or not a company successfully moderates away unsafe content. If it could, that power would be subject to widespread abuse. Just think of whichever presidential candidate you dislike the most, and what would happen if they could have their FTC investigate any platform they dislike for supposedly not living up to its public promises on moderation.

It would be a dangerous, free speech-attacking mess.

Yes, NGL seems like a horrible company, run by horrible people. But go after them on the basics: the data collection in violation of COPPA and the sneaky subscription charging. Not things like age verification and content moderation.
