
SB 1047: California’s Recipe For AI Stagnation

by
Mike Masnick
from Techdirt on (#6QVYV)

As California edges closer to enacting SB 1047, the state risks throwing the entire AI industry into turmoil. The bill has already cleared the legislative process and now sits on Governor Newsom's desk, leaving him with a critical decision: veto this ill-conceived policy or sign away the U.S.'s future in AI. While Newsom appears skeptical of SB 1047, he has yet to make clear whether he will actually veto the bill.

SB 1047 Summary

SB 1047 establishes an overly rigid regulatory framework that arbitrarily divides artificial intelligence systems into two categories: "covered models" and "derivative models." Both are subject to extensive requirements, though at different stages of development. Developers of covered models face strict pre-training and pre-release requirements, while those developing derivative models are burdened with the responsibility of ensuring the model's long-term safety, anticipating future hazards, and mitigating potential downstream abuses.

The bill also imposes a reasonableness standard on developers to demonstrate that they have exercised "reasonable care" in preventing their models from causing critical risks. This includes the implementation of, and adherence to, extensive safety protocols before and after development. In practice, the standard merely introduces legal ambiguity. The vague nature of what constitutes "reasonable care" opens the door to costly litigation, with developers potentially stuck in endless legal battles over whether they've done enough to comply not only with the ever-evolving standards of care and best practices for AI development, but also with their own extensive state-mandated safety protocols.

It's no surprise that industry experts have raised serious concerns about SB 1047's potential to stifle innovation, limit free expression through restrictions on coding, and undermine the future of U.S. AI development.

SB 1047 Will Cede U.S. AI Lead to Foreign Competitors

Under the bill, a covered model is defined as any advanced AI system that meets certain thresholds of computing power and cost. Models trained before January 1, 2027, are classified as covered if they use more than 10^26 integer or floating-point operations and cost more than $100 million to develop.

But these thresholds are inherently flawed. Even cutting-edge AI systems like GPT-4, which are among the most advanced in the world, were trained using significantly less computing power than the bill's benchmark. For example, estimates suggest that GPT-3 required around 10^23 operations, far below the bill's threshold. This highlights a key problem: the bill's requirements for covered models primarily target large, resource-intensive AI labs today, but as AI technologies and hardware improve, even smaller developers could find themselves ensnared by these requirements.
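To put that gap in perspective, here is a rough, back-of-the-envelope sketch (not from the bill or the article) using the common 6 * N * D training-compute heuristic; the GPT-3 parameter and token counts are widely cited public estimates, not official figures.

```python
# Rough illustration: estimate training compute with the common 6 * N * D
# heuristic (~6 FLOPs per parameter per training token). Figures below are
# public estimates, not official numbers.
THRESHOLD_FLOPS = 1e26  # SB 1047's covered-model compute threshold

gpt3_params = 175e9  # ~175B parameters (public estimate)
gpt3_tokens = 300e9  # ~300B training tokens (public estimate)
gpt3_flops = 6 * gpt3_params * gpt3_tokens

print(f"GPT-3 estimate: {gpt3_flops:.1e} FLOPs")  # ~3.2e23
print(f"Fraction of threshold: {gpt3_flops / THRESHOLD_FLOPS:.4f}")  # ~0.003, i.e. roughly 300x below
```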

However, there's a deeper irony here: scaling laws in AI suggest larger AI models generally perform better. The more computational power used to train a model, the better it tends to handle complex tasks, reduce errors like hallucinations, and generate more reliable results. In fact, larger AI models could actually reduce societal harms, making AI systems safer and more accurate over time, a result for which the California Legislature is supposedly striving.
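For readers who want the quantitative intuition behind that claim, a hedged illustration: the Chinchilla scaling law (Hoffmann et al., 2022) predicts lower loss as parameter count and training data grow. The constants below are the paper's approximate published fits, and the comparison is purely illustrative, not a claim about any particular product.

```python
# Illustrative sketch of a Chinchilla-style scaling law (Hoffmann et al., 2022):
# predicted loss falls as parameters N and training tokens D grow.
# Constants are approximate published fits; lower loss = fewer errors.
def predicted_loss(N, D, A=406.4, B=410.7, E=1.69, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

print(predicted_loss(175e9, 300e9))   # GPT-3-scale run: ~2.0
print(predicted_loss(1.75e12, 3e12))  # 10x the parameters and data: ~1.85 (lower is better)
```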

This is why larger AI firms, like OpenAI and Google, are pushing for more computationally intensive models. While it may seem that the covered model requirements exclude startup companies for now, current advancements in hardware, such as specialized AI chips and quantum computing, suggest that even smaller commercial AI developers could surpass this threshold within the next 5-10 years, consistent with the trajectory Moore's Law describes. In other words, as time goes on, we can expect more market entrants to fall under the bill's regulatory framework sooner than expected.

What's worse, the threshold component seems to discourage companies from pushing the limits of AI. Instead of harnessing high computing power to build truly transformative systems, businesses might deliberately scale down their models just to avoid falling under the bill's scope. This short-sighted approach won't just slow down AI innovation; it could stifle progress in computing power as a whole. If companies are reducing their need for cutting-edge processors and hardware, the broader tech ecosystem, everything from next-gen chips to data centers, will stagnate. The very innovation we need to lead the world in technology could grind to a halt, all because we've made it too risky for AI labs to aim big.

Pre-Training Requirements & Commercial Use Restrictions for Covered Models

Before training (i.e., developing) a covered model, developers must first decide whether they can make a "positive safety determination" about the model. Developers must also implement a detailed "safety and security protocol," including cybersecurity protections, testing procedures to assess potential harms, and the ability to enact a full shutdown if needed. Developers are prohibited from releasing their models for any purpose beyond training unless they can certify that the models pose no unreasonable risks of harm, either now or in the future.

The bill's vague language around "hazardous capabilities" opens a Pandora's box of potential issues. While it primarily aims to address catastrophic risks like cyberattacks or mass casualties, it includes a broad catch-all provision for other risks to public safety or infrastructure. Given the many "black-box" aspects of AI model development, developers will struggle to confidently rule out any unforeseen hazards, especially those arising from third-party developed derivatives. The reality is that most developers will find themselves constantly worried about potential legal and regulatory risks, chilling progress at a time when the global AI race is in full throttle.

SB 1047's Reporting Requirements Will Bring AI Innovation to a Grinding Halt

Developers must also maintain and regularly update their safety and security protocols for both covered models and derivative models. Several additional requirements follow:

  • Model developers must conduct an annual review of their safety and security protocols to ensure that protocols are kept current with evolving risks and industry standards. This includes any rules adopted per the bill's requirements after January 1, 2027. Developers must also update their protocols based on these reviews.
  • Beginning in 2026, developers must hire a third-party auditor to independently verify compliance with the safety protocols. The auditor's report must include an assessment of the steps taken by the developer to meet SB 1047's requirements (and any additional guidelines post-enactment) and identify any areas of non-compliance. Developers are required to address any findings by updating their protocols to resolve issues identified during these audits.
  • Model developers must retain an unredacted copy of the safety and security protocols for as long as the covered model is in commercial or public use, plus five years. They are also required to provide the Attorney General with an updated copy of the safety protocol upon request.
  • A conspicuously redacted copy of the safety and security protocols must be made publicly available.

In practice, the process of releasing new or updated models will be bogged down with arbitrary bureaucratic delays. This will demand significant resource allocation well before companies can even gauge the success of their products.

Not only that, the mandatory assessments will effectively derail essential safety practices, especially when it comes to red teaming, where teams simulate attacks to uncover vulnerabilities. The reality is that red teaming works best behind closed doors and with minimal friction, empowering developers to quickly (and honestly) address security issues. Yet, with the added layers of auditing and mandatory updates, developers may avoid these rigorous safety checks, fearing that each vulnerability discovered could generate legal liability and trigger more scrutiny and further delays.

In the same vein, the mandatory reporting component adds a layer of government scrutiny that will discourage timely security updates and continued transparency about discovered vulnerabilities. Knowing that every security flaw might be scrutinized by regulators, developers may hesitate to disclose issues or rapidly iterate on their models for fear of legal or regulatory backlash. Worse, developers may simply try their hardest to "not know" about the vulnerabilities. Instead of fostering collaboration, the mandatory reporting requirement pits developers against the California AG.

As Eric Goldman observed, government-imposed reporting inherently chills expression: companies become more conservative (i.e., even less transparent) to avoid regulatory scrutiny. The same applies to SB 1047.

SB 1047 Will Insulate Established AI Companies at the Expense of Startups

In contrast to covered models, derivative models (those fine-tuned or modified from existing covered models) are subject to safety assessments post-modification. Fine-tuning, a routine process where a model is adapted using new data, empowers AI to perform better on targeted tasks without requiring full retraining. But SB 1047 places undue burdens on developers of derivative models, forcing them to conduct safety assessments every time they make updates.
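To show how routine this process is, here is a minimal, hypothetical sketch using the Hugging Face transformers library to adapt BERT for sentiment analysis; the dataset, hyperparameters, and output path are illustrative assumptions, not anything prescribed by the article or the bill.

```python
# Minimal, hypothetical sketch of the routine fine-tuning SB 1047 would burden:
# adapting an existing model (here BERT) to sentiment analysis.
# Dataset choice and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # positive / negative sentiment

# Any small labeled corpus works; IMDB movie reviews are a common teaching example.
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sentiment", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()  # the result is a "derivative model"; under SB 1047 it would need a fresh safety assessment
```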

The lifeblood of AI innovation is this iterative, adaptive process. Yet, SB 1047 effectively punishes it, creating significant hurdles for developers looking to refine and improve their models. This not only flies in the face of software engineering principles, where constant iteration is key, but also discourages innovation in AI, where flexibility is essential to keeping pace with technological progress.

Worse, SB 1047 shifts liability for derivative models to the original developers. This means companies like Google or OpenAI could be held liable for risks introduced by third-party developers who modify or fine-tune their models. This liability doesn't just extend to the original version of the model but also to all subsequent changes, imposing a continuous duty of oversight. Such a framework not only contradicts longstanding legal principles governing third-party liability for online platforms but also makes the AI marketplace unworkable for startups and independent developers.

Derivative Models Fuel the Current AI Marketplace

Derivative models are integral to the AI ecosystem. For example, Google's BERT model has been fine-tuned by countless companies for specialized tasks like sentiment analysis and question answering. Similarly, OpenAI's GPT-3 has been adapted for chatbots, writing tools, and automated customer service applications. OpenAI even operates a marketplace for third-party developers to customize GPT models for specific needs, similar to an app store for AI. While these derivative models serve legitimate purposes, there's a real risk that third-party modifications could lead to abuse, potentially resulting in the kinds of harmful or malicious applications the bill anticipates.

If the lessons of online platform regulation are any guide, SB 1047's framework risks making the AI marketplace inaccessible to independent developers and startups. Companies like Google, Meta, and OpenAI, which develop powerful covered models, may become hesitant to allow any modifications, effectively dismantling a growing ecosystem that thrives on the ability to adapt and refine existing AI technologies. For venture capitalists, the message is clear: open models come with significant legal risk, turning them into liability-laden investments. The repercussions of this would be profound. Just as a diverse media landscape is crucial for maintaining a well-rounded flow of information, a variety of AI models is essential to ensuring the continued benefit of different methodologies, data sets, and fine-tuning strategies. Limiting innovation in this space would stifle the dynamic evolution of AI, reducing its potential to meet varied societal needs.

Ironically, for a state that has been increasingly hellbent on "destroying big tech," California's approach to AI will (once again) ensure that only the largest, most well-funded AI companies (those capable of developing their own powerful covered models) will not only dominate but single-handedly shape the future of AI, while smaller applications that currently build on and refine models from the larger players evaporate.

SB 1047 Will Drag California Into More Costly Litigation Over Ill-Conceived Tech Regulations

California is already mired in legal battles over poorly crafted tech regulations. Now, with SB 1047, the state risks plunging into yet another costly, uphill legal fight. The bill's restrictions on the development and release of AI models could infringe on the constitutional right to code, which courts have recognized as a form of protected expression. For instance, in Bernstein v. U.S. Department of State, export controls on encryption code were deemed to violate the First Amendment, affirming that code is a form of speech. More broadly, courts have consistently upheld the rights of developers to code, underscoring that limitations on innovation through code can encroach on constitutional protections.

This debate is strikingly similar to the legal battles over social media regulation. Just as social media platforms are fundamentally speech products, entitled to editorial discretion under the First Amendment, so too are today's Generative AI services. Many of these AI systems center around the processing and production of expression, making them direct facilitators of speech. As with the algorithms that curate social media content, regulations targeting these models will inevitably raise serious First Amendment concerns, challenging the constitutionality of such measures.

SB 1047, far from being a model for "responsible" AI innovation, risks debilitating the U.S.'s leadership in AI, reinforcing the dominance of existing tech firms, and punishing developers for improving and iterating upon their models. Governor Newsom has a choice: veto this bill and support the growth of AI innovation, or sign it and watch California lead the charge in destroying the very industry it claims to protect.

Jess Miers is currently Visiting Assistant Professor of Law, University of Akron School of Law.
