Regulations For Generative AI Should Be Based On Reality, Not Hallucinations.

by Mike Masnick
from Techdirt

In their haste to do something about the growing threat of AI-fueled disinformation, harassment, and fraud, lawmakers risk introducing bills that ignore some fundamental facts about the technology. For California lawmakers in particular, this urgency is compounded by the fact that they preside over the world's most prominent AI companies and can pass laws more quickly than Congress can.

Take, for example, California SB 942, which is an attempt to regulate generative AI, but which appears to have hallucinated some of the assumptions on which it's built.

In short, SB 942 would:

  1. Require a visible disclosure on images, video, and text that the content is AI-generated; a disclosure in the content's metadata; and an imperceptible disclosure that is machine-readable.
  2. Require AI providers to create a detection tool where anyone can upload content to check if it was made using that provider's service.

Sounds pretty good, right? Wouldn't this maybe help fight AI-generated abuse?

It's unlikely.

In a hearing last week, tech and policy entrepreneur Tom Kemp testified that we need this bill. He opened by pointing out that Google CEO Sundar Pichai believes AI will be more profound than the invention of fire or the internet. Then, holding up a pack of gum, he said that if we can require a food label for a pack of gum, we should at least require labels on something as profound as AI.

He concluded:

"In summary, this bill puts AI content on the same level as a pack of gum in terms of disclosures, which is the transparency that Californians need."

Huh? It's a fun analogy, but not a useful one. The question we should ask is not whether generative AI is like food. After all, the regulation of food products has different legal considerations than the regulation of expressive generative AI content.

What we should ask is: Will this policy solve the problems we want to solve?

Visible Disclosures:

SB 942 requires disclosures for AI-generated text, but there is no effective method for flagging text as AI-generated or detecting that it is. Unlike, say, a watermark for an image, a disclosure for text would need to fundamentally alter the message to communicate its synthetic nature. A written disclosure could precede the generated text, but users could simply cut that portion out.

This part of the bill made my Trust & Safety senses tingle. Unenforceable platform policies erode trust because there is a mismatch between the rules and what consumers expect. Similarly, a law requiring disclosures on AI-generated text may give consumers a false sense of protection when there is no way to reliably communicate these notices.

The bill also assumes that generative AI can only be used for malicious purposes. There are many cases where having a disclosure simply doesn't matter or is even undesirable. For example, if I want to generate an image of myself playing basketball on the moon, there won't be any question about its inauthenticity. Or if I want to use Photoshop's generative fill tool for a piece of marketing, I surely don't want a watermark interrupting my design. Requiring by law that all of it be labeled is a heavy-handed approach that seems unlikely to withstand First Amendment scrutiny.
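The bill's metadata and machine-readable disclosure requirements face a similar enforceability problem. As a minimal illustration (this is a sketch assuming the Pillow imaging library, with hypothetical file names), a provenance note stored in an image's metadata quietly disappears the first time the file is re-saved, cropped, screenshotted, or re-encoded by a messaging app:

```python
from PIL import Image  # Pillow, assumed available for this illustration

# Open an AI-generated image whose provenance note lives in its metadata.
# (File names here are hypothetical.)
original = Image.open("ai_generated.png")
print(original.info)  # e.g. {"ai_disclosure": "Generated by ExampleAI"}

# An ordinary re-save -- the kind that happens whenever an image is cropped,
# screenshotted, or passed through a messaging app -- writes a new file
# without carrying those metadata text chunks over by default.
original.save("reshared.png")
print(Image.open("reshared.png").info)  # the disclosure is gone
```

No malice is required for the label to vanish; ordinary sharing does the job.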

Detection Tools:

AI detection tools are actively being researched and developed, but at this point they can't offer definitive answers about whether a piece of content is AI-generated. They give answers with widely varying degrees of uncertainty. This nuance sometimes gets ignored, with serious consequences, as in the cases where students were falsely accused of plagiarism.

In fact, the technology is so unreliable that last year OpenAI killed its own detection tool, citing its low rate of accuracy. If a safety-conscious AI company is pulling its own detection tool because it does more harm than good, what incentive does a less conscientious business have to make its detection tool any less harmful?

There are already several generative AI detection services, many offered for free, competing for this niche market. If detection tools make big advances in reliability, it won't be because we required generative AI companies to push one out just to comply with the law.

It's worth mentioning that during last week's hearing, the bill's author, Senator Becker, acknowledged that it's a work in progress and promised to "continue collaborating with industry to strike the right balance." I appreciate his frankness, but I'm afraid striking that balance would essentially mean scrapping the bill. I expect he'll remove the mention of AI-generated text, and I hope he gets rid of the detection tool requirement, but that would still leave us with a vague, hard-to-comply-with requirement to label all AI-generated images and video.

The law should try to account for new and developing technology, but it also needs to operate based on fundamental ground truths about it. Otherwise, it will be no more useful than an AI hallucination.

Alan Kyle is a tech governance enthusiast who is looking for his next work opportunity at the intersection of trust & safety and AI.
