When it comes to AI, as California goes, so goes the nation. The most populous state in the US is also the epicenter of AI innovation for the entire world, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection rules, and more recently on AI as well.
Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the country’s AI laws. On September 29, Gov. Gavin Newsom signed SB 53, a bill requiring transparency reports from the developers of highly powerful “frontier” AI models, into law.
The models targeted represent the cutting edge of AI: extremely capable generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude.
AI can offer great benefits, but as the law is meant to address, it’s not without risks. And while there is no shortage of present-day risks from issues like job displacement and bias, SB 53, also known as the Transparency in Frontier Artificial Intelligence Act, focuses on possible “catastrophic risks” from AI.
Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These catastrophic risks represent widespread disasters that could plausibly threaten human civilization at the local, national, and global levels. They are the kinds of AI-driven disasters that haven’t yet occurred, rather than already-realized, more personal harms like AI deepfakes.
Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event that causes more than 50 casualties or over $1 billion in damages, and that a frontier model plays a meaningful role in contributing to. How fault is determined in practice will be up to the courts to interpret. It’s hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us protect against both near- and long-term consequences.
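In rough terms, that definition boils down to a two-part test: the scale of the harm, and whether a frontier model meaningfully contributed to it. The sketch below is only an illustration of that logic as the article describes it, not the statutory language; the function and its inputs are hypothetical.

```python
# Illustrative only: a simplified reading of SB 53's "catastrophic risk" bar,
# not the statute's actual legal test. Names and inputs are hypothetical.

def meets_catastrophic_threshold(casualties: int, damages_usd: float,
                                 model_materially_contributed: bool) -> bool:
    """Return True if an incident crosses the rough severity bar described above."""
    severe_enough = casualties > 50 or damages_usd > 1_000_000_000
    return severe_enough and model_materially_contributed

# Example: a $2 billion cyberattack in which a frontier model played a material role
print(meets_catastrophic_threshold(casualties=0, damages_usd=2e9,
                                   model_materially_contributed=True))  # True
```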
On its own, a single state law focused on increased transparency probably won’t be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the law represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.
SB 53 was the third state-level bill to try to focus specifically on regulating AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.
SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies have to publish safety and security reports. The law also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with financial penalties of up to $1 million per violation.
“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” Wiener said in a statement. “With this law, California is stepping up, once again, as a global leader on both technology innovation and safety.”
“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety.”
— Thomas Woodside, co-founder of the Secure AI Project
In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.
Both cover large models trained at 10^26 FLOPS, a measure of very significant computing power that a variety of AI legislation uses as a threshold for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention.
While SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
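Those two numbers, training compute and company revenue, are what determine whether the law applies at all. Here is a minimal sketch of that applicability test, assuming the figures cited above; the names are hypothetical, and the actual statute is more nuanced (and, as noted below, the attorney general can later revise who counts as a large developer).

```python
# Illustrative sketch of SB 53's rough scope, not the statute itself.
# Thresholds from the article: 1e26 training FLOPS and $500 million in gross revenue.

TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26
GROSS_REVENUE_THRESHOLD_USD = 500_000_000

def is_covered_frontier_developer(training_flops: float, gross_revenue_usd: float) -> bool:
    """Hypothetical helper: would a developer fall under SB 53's frontier rules?"""
    frontier_scale_model = training_flops >= TRAINING_COMPUTE_THRESHOLD_FLOPS
    large_developer = gross_revenue_usd >= GROSS_REVENUE_THRESHOLD_USD
    return frontier_scale_model and large_developer

# A lab with a 2e26-FLOPS model and $1 billion in revenue would be in scope
print(is_covered_frontier_developer(2e26, 1_000_000_000))  # True
```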
“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” Thomas Woodside, co-founder of the Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and was a sponsor of the bill, said over email. “This light touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”
Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The law authorizes the California attorney general to change the definition of a large developer after January 1, 2027, in response to AI advances.
Proponents of the bill were optimistic about its chances of being signed by the governor. On the same day that Gov. Newsom vetoed SB 1047, he commissioned a working group focused solely on frontier models. The group’s resulting report offered the foundation for SB 53. “I would guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” Dean Ball, a former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter, told Transformer. He was right.
But several industry organizations rallied in opposition, arguing that additional compliance regulation would be expensive, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI lobbied against it, and the technology trade group Chamber of Progress argued that the bill would require companies to file unnecessary paperwork and needlessly stifle innovation.
“These compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email before SB 53 became law. “The bill, if passed, would feed California regulators truckloads of company information that they’ll use to design a compliance industrial complex.”
By contrast, Anthropic enthusiastically endorsed the bill in early September. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a strong path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)
The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the overwhelming majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.
“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with one another, Cato Institute technology policy fellow Matthew Mittelsteadt said in an email. But “I like that the bill has a provision that would allow companies to defer to a future alternative federal standard.”
“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my opinion, the jury is out on that, but the possibility is far more likely than some suggest. It’s been less than 3 years since ChatGPT was released. That’s hardly a lifetime in public policy.”
But in a time of federal gridlock, frontier AI developments won’t wait for Washington.
The catastrophic risk divide
The law’s focus on, and framing of, catastrophic risks is not without controversy.
The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of existential risks, which threaten humanity’s very survival or else permanently curtail our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented, or at least mitigated.
But if existential risks are clear enough (the end of the world, or at least the world as we know it), what falls under the catastrophic risk umbrella, and how best to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival. They are often most concerned with risks from rogue AI or extremely deadly pandemics. Neartermists are more preoccupied with present-day risks, like climate change, mosquito-borne disease, or algorithmic bias. The camps blend into each other (neartermists would also like to avoid an asteroid strike that could wipe out a city, and longtermists don’t dismiss risks like climate change), and the best way to think of them is as two ends of a spectrum rather than a strict binary.
You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of the ways the technology is deployed in the present, including issues like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, interpersonal conflicts have led these two factions to work against each other, and much of the disagreement comes down to emphasis. (AI ethics people argue that catastrophic risk concerns overhype AI’s capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t have ways to mitigate larger-scale problems down the line.)
But behind the question of near- versus long-term risks lies another one: What, exactly, constitutes a catastrophic risk?
SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties, similar to New York’s RAISE Act, before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside the law’s scope. (In September, the California State Assembly passed a separate bill to regulate AI companion chatbots by preventing them from engaging in discussions about suicidal ideation or sexually explicit material.)
SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in creating or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a deadly new virus that infects millions and kneecaps the global economy.
“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report on Frontier AI Policy, which helped inform the basis of the bill. “We do look at, like, AI-enabled or AI potentially [caused] or correlated suicide. I think that’s, like, a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”
Transparency is useful in preventing such catastrophes because it can help raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is responsible for a specific outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.
“These risks are coming and we need to be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when these things aren’t happening at a large scale, it makes sense to be kind of focused on transparency.”
However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus needs to be on mitigating it.
“Maybe four years ago, if we had passed some kind of transparency legislation like SB 53 but focused on those harms, we would have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to kind of correct that mistake on these issues and get some kind of forward-facing information about what’s happening before things get crazy, basically.”
SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don’t know what we don’t know.
It’s also certainly possible that models trained below 10^26 FLOPS, which aren’t covered by SB 53, have the potential to cause catastrophic harm under the bill’s definition. The EU AI Act sets its threshold for “systemic risk” at 10^25 FLOPS, an order of magnitude lower, and there’s disagreement about the usefulness of computational power as a regulatory standard at all, especially as models become more efficient.
As it stands right now, SB 53 occupies a different niche from bills and laws focused on regulating AI use in mental health care or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from AI’s real near-term benefits and concerns, like its potential to accelerate the pace of scientific research or to create nonconsensual deepfake imagery, respectively.
That said, deepfakes could certainly cause catastrophic harm. Imagine, for instance, a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that’s not necessarily a bad thing,” he told me.
It may be that the ideological debate over what qualifies as a catastrophic risk, and whether that’s worthy of our legislative attention, is just noise. The law is meant to regulate AI before the proverbial horse is out of the barn, and it’s now one of the strongest US AI regulations on the books. The average person isn’t going to worry about the likelihood of AI sparking nuclear war or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” nearer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.
Now that Gov. Newsom has signed SB 53 into law, it could inspire other states to attempt AI regulation through a similar framework, and ultimately encourage federal AI safety legislation to move forward.
How we think about risk matters because it determines where we focus our prevention efforts. I’m a firm believer in the value of defining your terms, in law and in debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
Update, September 30, 2025, 4:55 pm ET: This story was originally published on September 12 and has been updated multiple times, most recently to reflect the California governor signing the bill into law.
