A Missed Opportunity for AI Safety: Newsom’s Veto of SB 1047

The recent veto of California’s SB 1047 by Governor Gavin Newsom is a stark reminder of the ongoing struggle between public safety and corporate interests. The decision has deeply disappointed advocates of AI safety and ethical governance, and it marks a missed opportunity to place necessary regulations on the development of artificial intelligence. As highlighted in the Future of Life Institute’s newsletter and official statement, this outcome reinforces the belief that Big Tech remains largely unaccountable to the public and continues to play by its own rules.

The Future of Life Institute’s newsletter, “On SB 1047, Gov. Newsom Caves to Big Tech,” summarized the news as follows:

“A disappointing outcome for the AI safety bill, updates from UNGA, our $1.5 million grant for global risk convergence research, and more.”

Anthony Aguirre, Executive Director of the Future of Life Institute, captured the frustration of many who supported the bill. He argued that the aggressive lobbying against SB 1047 only strengthens the perception that these companies believe they should operate without oversight, and he stressed the urgent need for legislation at the state, federal, and global levels to hold Big Tech accountable:

“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one. This veto only reinforces that belief. Now is the time for legislation at the state, federal, and global levels to hold Big Tech to their commitments.”

The Potential of SB 1047

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was a significant step toward ensuring that AI development prioritizes safety and responsible innovation. It proposed safety testing requirements for the most powerful AI models, intended to ensure that these transformative technologies do not pose serious risks to society. The bill gained support from a broad range of stakeholders, including labor unions, advocacy groups, and an estimated 80% of the public across the United States. Had it been signed into law, SB 1047 would have positioned California as a leader in AI regulation, setting a precedent for other states and nations to follow.

The rejection of this bill, however, indicates a disturbing trend: a reluctance to challenge the power of Big Tech. In the context of the rapid expansion of AI technologies, the absence of regulation leaves us vulnerable to the dangers these technologies can pose, including misinformation, privacy violations, and job displacement.

What Comes Next?

While the veto of SB 1047 is undeniably disappointing, it does not signify the end of efforts to regulate AI development. The significant momentum generated by this bill, and the public awareness surrounding it, gives hope for future initiatives. Public concern about AI risks is growing, and the conversation has only just begun.

As the Future of Life Institute notes, progress in this area takes time. The incredible support for SB 1047, from advocacy organizations to everyday citizens, shows that people are ready to hold Big Tech accountable for the impact of its technologies on society. This advocacy will continue, and with it, the push for responsible AI governance will gain even more strength.

A Call for Continued Advocacy

We cannot afford to be passive spectators in the rise of artificial intelligence. The risks posed by unregulated AI are too great to ignore. We must demand that our leaders—at every level—take these risks seriously and act accordingly. Although Governor Newsom’s veto is a setback, it has also highlighted the growing public consensus that AI safety is crucial for the future of our society.

As Aguirre emphasized, the veto is not the end of this fight. In fact, it is a call to action. We must continue to push for legislation that ensures AI development is safe, ethical, and beneficial for everyone—not just the companies that stand to profit from it.

Let’s not wait for the dangers of unregulated AI to become a reality. The time for action is now.

Kevin Bihan-Poudec
Advocate for Ethical AI and Workforce Preservation
Founder, Voice for Change Foundation

#DemandAIRegulation #SB1047 #AISafety #ProtectOurFuture #VoiceForChange #BigTech

From @GuardiansOfAI, the official X/Twitter account of Voice for Change: “Grateful to have the support of @futureoflifeinstitute on this critical issue. AI safety is more important than ever—together we can ensure a safer future for all. Let’s keep the conversation going! #AISafety #FutureOfLife #EthicalAI #SB1047”
