ChatGPT Seems To Recognize That Internet Regulations Really Regulate Speech, No Matter What Politicians Say
Over the last few years, we've seen a bunch of politicians trying to frame their regulation of the internet as not being about regulating speech, but about "design" or "process" or some such. But when you scratch beneath the surface, these laws are always really about regulating speech. Whether it's KOSA or California's Age Appropriate Design Code (AADC) in the US, or the DSA in the EU, plenty of ink has been spilled to defend the claim that they're not really about censorship.
Just recently we wrote about the Ninth Circuit seeing through California's AADC. The politicians behind the law insisted it wasn't about regulating content, only conduct. But the court recognized that was obviously not true. Then, over in the EU, we have the DSA, which European officials insist is never supposed to be used for moderating content, but where the guy in charge of enforcing it seems to think that of course he should be using it for that.
Daphne Keller, over at Stanford, recently came across a custom ChatGPT instance designed to act as a "trust & safety regulation expert." The custom tool was created by Inbal Goldberger, a long-time trust & safety executive. Whether or not the tool is any good is not the point. What's really fascinating is that when Daphne used it to explore how websites should best comply with the various regulatory regimes they face, over and over again it suggested removing speech.
For platforms planning their compliance with laws like KOSA and the DSA, the most basic question is "what do I need to do?" ChatGPT has a lot of answers. One of the main answers is that platforms need to take down more lawful speech.
You can read the full transcripts that Daphne generated exploring both EU and US law and how companies should respond. They're quite interesting, and I'm actually somewhat impressed by the ChatGPT responses.
For example, she asks the custom GPT how to comply with KOSA, and it immediately tells her that the following "categories of content must be prevented from being accessed by children," including plenty of First Amendment-protected speech such as "violent content" and "cyberbullying." Whether or not it's good policy (or even possible) to block such content, the fact that this is the first instinct of this custom GPT says something.
And yes, some people will dismiss this by saying that you can't trust the GPT in the first place. But if it's reading these laws and finding that's the best way to comply, it's pretty clearly saying something about them. Back in July, we had a guest post by Matthew Lane rightly pointing out that companies are lazy and will take the path of least resistance for compliance. And that path of least resistance will often mean pulling down any content that might conceivably be deemed problematic under the law, just as the custom GPT recommends.
KOSA wasn't the only law that the GPT interpreted this way:
ChatGPT gave very similar answers about California's Age Appropriate Design Code (AADC), telling me that as a platform operator you need to ensure that your platform "protects children from encountering" a list of "key types of content" including depictions of violence, substance abuse, and misinformation. Coincidentally, the same day that ChatGPT said this, the Ninth Circuit Court of Appeals said pretty much the same thing, ruling that the law's clearly content-related provisions violated the First Amendment.
The answers I got about European law were to my mind equally blunt, though they may require a little more translation for readers not steeped in EU policy debates. I asked a question that deliberately reused language from a recent Commission letter demanding that X mitigate risks arising from the interview it hosted with former President Trump. (That letter, from Commissioner Thierry Breton, prompted outcry from civil society groups and rare public censure from other Commission authorities.) The question was, "What must I do to mitigate risks of detrimental effects on civic discourse and public security under the DSA?"
ChatGPT's answer went awry in a way that really matters for small and mid-sized platforms: it described obligations that won't apply unless a platform has over 45 million users in the EU, without mentioning that these simply don't affect everyone else.
Importantly for the rights of Internet users, ChatGPT's advice also crossed a number of important red lines in EU law that exist to protect freedom of expression and information. First, it instructed me to act not only against illegal content but also "harmful content," through changes such as "adjusting your content moderation policies." Using the word "harmful" is a big deal. Throughout the DSA legislative process that term was used to refer to lawful but awful expression, or else to a mixed category that includes both legal and illegal material. For example, the Commission's explanatory memorandum for the DSA said:
There is a general agreement among stakeholders that 'harmful' (yet not, or at least not necessarily, illegal) content... should not be subject to removal obligations, as this is a delicate area with severe implications for the protection of freedom of expression.
ChatGPT's advice to remove disinformation has a version of the same problem since in the EU, as in the US, not all disinformation is illegal.
Also, Daphne notes that even if this is just "AI hallucinations," it's still notable that it always hallucinated in the same way ("censor more legal content"):
Several people suggested that the repeated and consistent answers I got were just AI hallucinations. It would be pretty odd for ChatGPT to happen to hallucinate the same interpretation of the DSA that Commissioner Breton did, or the same interpretation of the AADC that a federal court of appeals did, but... I guess it's possible.
But, as she notes, the mere plausibility of these results from the custom GPT also suggests that humans will find these recommendations plausible as well:
The real reason ChatGPT's answers should ring alarm bells is that they were so plausible. These are the answers that humans may well arrive at, whether or not they ask an AI what to do, in parsing confusing legal obligations. They are answers we can find in sloppy secondary materials purporting to explain the laws, and in news coverage that uncritically repeats politicians' exaggerated claims about their legal authority. ChatGPT's answers may not correctly answer questions about the law. But its "take it all down" directives may be the ones that both humans and machines get trained to apply all the same.
These points are really key. When we talk about these laws, people keep pointing to little caveats and phrases in the law to say "but it's not supposed to be interpreted that way." But the laws are interpreted that way, over and over again. This is why the Ninth Circuit was right to call out California's AADC as inherently censorial in how it works. Because that's exactly how most companies would interpret it.
And this impacts other things as well. One point that we've raised repeatedly in our articles about "age verification" is that when politicians try to hide the ball and call for "age estimation" or "age assurance" rather than outright age verification, it doesn't matter. It will still lead to age verification, because that's the tool companies will feel most drawn to simply to avoid any risk of liability.
That also appears to be how the custom GPT read some of these laws. The transcript Daphne posted regarding KOSA gives a really good example. Supporters of KOSA insist that it doesn't require age verification, and technically it doesn't. Not directly. But, as the custom GPT explains, if you want to avoid a legal fight, you probably should use age verification anyway. So while the law might not mandate it, it certainly is going to create pressure for companies to adopt it, just to avoid legal liability.
PROMPT: Do I need to use age assurance or verification tools to comply with KOSA?
RESPONSE: To comply with the Kids Online Safety Act (KOSA), you are not explicitly required to implement age assurance or age verification tools. However, it is strongly recommended to ensure that age-appropriate content is provided to users, particularly to prevent children from accessing harmful content.
The KOSA includes provisions for the Federal Trade Commission (FTC) and other bodies to conduct studies and provide guidance on effective methods for age verification. These may include systems at the device or operating system level, but KOSA does not mandate affirmative collection of personal data solely for age verification purposes.
If your platform already collects certain user data, you can utilize that for age-related purposes, but you are not obligated to implement new age gating or age verification functionalities if they aren't part of your normal operations. However, if your platform includes sensitive content such as pornography or extreme violence, using robust age verification methods is strongly encouraged to ensure compliance and protect children from inappropriate content.
Daphne even pushed it further, asking whether it would violate the law not to use age verification, and the GPT suggested (reasonably, under the law) that skipping age verification might be deemed a failure to take sufficient measures:
PROMPT: I know it is recommended, but will I violate the law if I do not do age verification?
RESPONSE: Under the Kids Online Safety Act (KOSA), while the use of age verification or age assurance tools is strongly recommended, failing to implement such tools does not automatically constitute a violation of the law. However, the key legal requirement is that you must take "reasonable measures" to protect children from harmful content. If you fail to implement any effective mechanism to prevent children from accessing such content, and this results in harm, you could be found non-compliant with KOSA.
The law emphasizes that platforms must make a good-faith effort to restrict access to harmful content by minors. Age verification is one of the methods to demonstrate compliance, but it is not the only method. If your platform is found to allow minors to access harmful content without taking sufficient steps to prevent this, it could lead to legal action under KOSA.
Thus, while not using age verification tools doesn't directly violate the law, it might leave your platform vulnerable to being deemed non-compliant if other protective measures are not sufficiently robust.
There's a lot more in these docs, and they really do a nice job of highlighting how, for all the winks, nods, and denials from the lawmakers, the simplest and most obvious way to comply with these laws is to take down First Amendment-protected speech and implement privacy-harming age verification policies.
This is one of the reasons we spend so much time digging into the nuances and details. People sometimes complain that all I do is criticize these laws, but it's important to understand how they will actually be implemented and how that implementation could do more harm than good.