
From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe

by BeauHD from Slashdot on (#6PK3T)
An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics contend that the bill's focus on existential threats from future AI models is overblown and could severely limit research and development of more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of [a] chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist's Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." Rather than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the prospect of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision." To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.

The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe it is a necessary precaution against potentially catastrophic AI risks. Critics of the bill include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries, who argue that it is based on science-fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say the bill's regulations could hinder "open weight" AI models and innovation in AI research. Some experts instead suggest a better approach would be to regulate harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography -- while improving AI safety research.


