Josh Hawley Back To Try To Hotline His Awful AI/Section 230 Bill
Last week, we wrote about the potential for Senator Josh Hawley to "hotline" the bill that he put together with Senator Richard Blumenthal to remove Section 230 from anything touching artificial intelligence. As we noted at the time, even if you hate both generative AI technology and Section 230, the bill was so poorly drafted that it would create all kinds of problems for the internet.
While there were reports that Hawley would try to rush the bill through using the "unanimous consent" hotline process (which requires only one Senator to step in and block it), it was unclear last week whether anyone would actually do the blocking. (To be fair, it was also unclear whether a companion bill would make it through the House, but you don't want it to get that far.)
For whatever reason, we heard that Hawley decided to hold off until today, and there are now reports that he'll push for the unanimous consent (basically avoiding a full vote and hoping that no one objects) today at 5:30pm ET/2:30pm PT. In other words, soon.
A very diverse group of organizations (who often don't agree with each other on much else), including the ACLU, the Competitive Enterprise Institute, Americans for Prosperity, and the Electronic Frontier Foundation, along with many others, has signed a letter put together by TechFreedom detailing the horrors this bill would create (our own Copia Institute also signed on).
We, the undersigned organizations and individuals, write to express serious concerns about the "No Section 230 Immunity for AI Act" (S. 1993). S. 1993 would threaten freedom of expression, content moderation, and innovation. Far from targeting any clear problem, the bill takes a sweeping, overly broad approach, preempting an important public policy debate without sufficient consideration of the complexities at hand.
Section 230 makes it possible for online services to host user-generated content by ensuring that only users are liable for what they post, not the apps and websites that host the speech. S. 1993 would undo this critical protection, exposing online services to lawsuits over content whenever the service offers or uses any AI tool that is technically capable of generating any kind of new material. The now-widespread deployment of AI for content composition, recommendation, and moderation would effectively render any website or app liable for virtually all content posted to it.
As the letter notes, the bill would cut off any debate regarding the proper relationship between generative AI output and Section 230 (something that's been quite spirited over the last year or so). It would also create a world that greatly benefited vexatious and malicious actors:
A core function of Section 230 is to provide for the early dismissal of claims and avoid the "death by ten thousand duck-bites" of costly, endless litigation. This bill provides an easy end-run around that function: simply by plausibly alleging that GenAI was somehow involved with the content at issue, plaintiffs could force services into protracted litigation in hopes of extracting a settlement for even meritless claims.
And it includes examples of possible abuse that this law would enable:
Consider a musician who utilizes a platform offering a GenAI production tool to compose a song including synthesized vocals with lyrics expressing legally harmful lies (libel) about a person. Even if the lyrics were provided wholly by the musician, the conduct underlying the ensuing libel lawsuit would undoubtedly involve the "use or provision" of GenAI, exposing the tool's provider to litigation. In fact, the tool's provider could lose immunity even if it did not synthesize the vocals, simply because the tool is capable of doing so.
Like any tool, GenAI can be misused by malicious actors, and there is no sure way to prevent such uses; every safeguard is ultimately circumventable. Stripping immunity from services that offer those tools, irrespective of their relation to the content, does not just ignore this reality; it incentivizes it. The ill-intentioned, knowing that the typically deep pockets of GenAI providers are a more attractive target to the plaintiffs' bar, will only be further encouraged to find ways to misuse GenAI.
Still more perversely, malicious actors may find themselves immunized by the same protection that S. 1993 strips from GenAI providers. Section 230(c)(1) protects both providers of interactive computer services and users from being treated as the publisher of third-party content. But S. 1993 only excludes the former from Section 230 protection. If Section 230 does indeed protect GenAI output to at least some degree as the proponents of this bill fear, the malicious user who manipulates ChatGPT into providing a defamatory response would be immunized for re-posting that content, while OpenAI would face liability.
This is a really important point. As the bill is currently worded, a malicious actor could deliberately use AI to try to defame someone, and they (the malicious actor) might be immune, while the generative AI tool they coaxed into writing the defamatory statement would be liable. That flips basic concepts of liability on their head.
There's a lot more in the letter. Hopefully, even those supporting this bill recognize how half-baked it is. In the meantime, though, we still have to hope that at least one Senator out there recognizes its problems as well and stops the bill from moving forward in this manner. (I won't even get into whether any reporter is willing to ask either Hawley or Blumenthal why they're pushing this monstrosity, because both have made it crystal clear that the answer is that they hate the internet and relish any opportunity to break it.)