Why Governor Newsom’s Veto of SB 1047 Leaves Californians Vulnerable to AI Risks

On September 29, 2024, Governor Gavin Newsom returned Senate Bill 1047 without his signature, effectively halting a progressive piece of legislation aimed at addressing the rapidly evolving and potentially dangerous field of artificial intelligence (AI). SB 1047, officially known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," was designed to place essential safeguards around the development of the most powerful AI systems. By vetoing the bill, Newsom missed an opportunity to protect Californians from the unprecedented risks posed by AI.

Here’s why this decision is deeply concerning.

Governor Newsom’s Statement: A Summary

In his veto message, Governor Newsom acknowledged that AI poses real and evolving risks, from threats to democratic processes and misinformation to privacy violations and infrastructure vulnerabilities. He commended the intent behind SB 1047, recognizing the importance of taking action before a major AI-driven catastrophe occurs.

However, he expressed concerns that the bill focused too narrowly on large AI models, those costing $100 million or more in computing resources to train, arguing that smaller models could present equally dangerous risks. He believes SB 1047 could create a false sense of security by not accounting for these smaller, potentially harmful models. Furthermore, Newsom emphasized the importance of flexibility and adaptability in regulating AI, given the speed at which the technology is evolving. He criticized the bill for imposing stringent standards even on basic AI systems, without considering whether they are deployed in high-risk environments or involve sensitive data.

The governor’s alternative approach, as outlined in his statement, calls for AI regulation to be informed by empirical evidence, adaptability to changing technology, and ongoing collaboration with federal partners and experts. He also highlighted several actions his administration is already undertaking to manage AI risks, including risk assessments for California’s infrastructure under an executive order issued in 2023.

What Governor Newsom Failed to Address

While Governor Newsom’s points about the rapid evolution of AI and the importance of flexibility in regulation are valid, there are several critical elements of SB 1047 that his veto message overlooked:

  1. Concrete Risk Management Procedures: SB 1047 provided actionable steps for developers to assess and mitigate the risks posed by advanced AI models. These include rigorous risk assessments before the release of AI models, mandatory third-party audits, and the implementation of a “full shutdown” capability to halt dangerous AI models in real time. These are essential safeguards against worst-case scenarios, something Newsom’s veto does not adequately address.

  2. Whistleblower Protections and Accountability: The bill proposed strong whistleblower protections, ensuring that employees of AI companies could safely report ethical or safety violations. This would have fostered transparency and accountability, a necessary check on an industry often driven by profit and speed rather than public safety. Newsom’s statement made no mention of how this essential safeguard would be replaced or preserved under his alternative approach.

  3. CalCompute Public Cluster: SB 1047 proposed the creation of CalCompute, a public cloud computing cluster that would have democratized access to AI development resources. This would allow researchers, startups, and community organizations to participate in AI development, ensuring that innovation isn’t monopolized by a few large corporations. Newsom’s failure to address this important element misses a critical opportunity for California to lead not only in AI regulation but in equitable AI development.

  4. Proactive Measures to Prevent Catastrophic Harm: SB 1047 was built around the concept of preventing “critical harms,” such as AI systems causing mass casualties, massive economic damage, or other severe societal disruptions. These catastrophic risks were central to the bill's intent. Newsom’s statement largely ignored the specific measures SB 1047 would have implemented to mitigate these risks, opting instead to emphasize the need for adaptability. However, adaptable regulation without concrete guardrails leaves the state exposed to exactly the types of disasters the bill sought to prevent.

Why SB 1047’s Safeguards Are Needed

AI experts, including employees from frontier AI companies like OpenAI, have openly acknowledged the serious risks posed by AI. Their warnings range from the dangers of misinformation and loss of control over AI systems to the potential for autonomous AI to cause human extinction. SB 1047 aimed to put necessary controls in place before these risks manifest at scale, particularly in large AI models with the capacity to cause unprecedented harm.

The bill’s requirements for risk assessments, shutdown capabilities, third-party audits, and whistleblower protections weren’t just regulatory hurdles; they were essential tools to prevent AI models from spiraling out of control. While Newsom’s veto statement suggests that SB 1047’s approach could stifle innovation, the reality is that these safeguards would have allowed for innovation while ensuring public safety.

By framing his veto around adaptability and the need to evaluate risks as they emerge, Newsom effectively deferred urgently needed regulation. His decision leaves California vulnerable to AI-related risks without a clear, structured framework for oversight.

The Path Forward

Governor Newsom’s decision to veto SB 1047 is a missed opportunity for California to lead the nation in AI safety and governance. While he calls for collaboration and adaptability, those efforts must be paired with concrete, enforceable regulations. The risks of AI are not hypothetical — they are here today. Without strong protections in place, we may find ourselves reacting to the very disasters SB 1047 sought to prevent.

California has always been at the forefront of technological innovation. It must now take the lead in ensuring that this innovation is safe, equitable, and beneficial for all. SB 1047 provided a roadmap for achieving that goal, and while it may have been vetoed, the fight for responsible AI governance is far from over.

As we move forward, let’s demand that our leaders prioritize public safety over corporate interests. Let’s push for a balanced approach that encourages innovation while safeguarding society from the potentially catastrophic consequences of unregulated AI.

The stakes are too high to wait.
