The World, at the Mercy of California

The decision by Governor Gavin Newsom to veto Senate Bill 1047 (SB 1047), a landmark AI safety regulation, has not only left Californians vulnerable but also put the entire world at risk. This bill, known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," was set to provide essential guardrails to ensure the responsible development of advanced artificial intelligence systems. Its veto is a warning sign that we can no longer afford to ignore.

Artificial intelligence is evolving at an alarming pace, and without meaningful regulations in place, the consequences could be catastrophic. SB 1047 sought to mitigate the dangers posed by powerful AI systems by enforcing risk assessments, third-party audits, and the ability to shut down dangerous AI models in real time before they wreak havoc. With the stroke of a pen, Governor Newsom missed an opportunity to protect not only his constituents but the global population.

Why This Decision Matters Beyond California

California, a global leader in technological innovation, has long been a pioneer in setting trends that the rest of the world follows. From privacy laws to environmental standards, California has shaped the direction of global governance. That is why the veto of SB 1047 is not just a local issue—it has far-reaching global implications.

AI, as we know it today, is not limited by borders. The risks posed by unregulated AI development—mass disinformation, loss of democratic integrity, breaches in privacy, and even potential physical harm—are not confined to California. These threats can, and will, affect every corner of the world. The technology is too powerful and too ubiquitous for us to remain complacent.

The Global Stakes of AI Development

We are already seeing the warning signs. Major AI labs have openly acknowledged the real dangers that AI poses, from misinformation campaigns to potential threats to public safety. Yet, despite these warnings, the veto of SB 1047 leaves advanced AI systems in the hands of corporations driven by profit, free from meaningful oversight or enforceable regulations.

Without the protections outlined in SB 1047, AI systems could spiral out of control, resulting in consequences that could be irreversible. Imagine autonomous AI being used for malicious purposes, or self-learning systems making decisions beyond human intervention. The absence of regulatory measures could lead to massive disinformation campaigns capable of undermining democracies, or worse, AI systems so powerful they operate without human understanding or input, posing a direct threat to public safety.

Governor Newsom’s Missed Opportunity

Governor Newsom’s veto message acknowledged the real and evolving risks of AI but critiqued SB 1047 for focusing too narrowly on large-scale models. While he argued that smaller models may present similar dangers, this reasoning neglects the fact that SB 1047 was designed to address the most immediate threats from the largest, most powerful AI models. His suggestion of flexibility and adaptability in regulation without enforcing strict guidelines leaves us dangerously exposed.

The fact is, the voluntary measures currently adopted by some major AI companies are simply not enough. Without enforceable regulations, these companies remain free to prioritize profit over public safety. Governor Newsom’s veto has delayed meaningful action, and in doing so, has left the world vulnerable.

The Need for an International Coalition

AI safety is not just a state issue or even a national issue—it is a global issue. California’s failure to act on AI regulation now requires an international coalition to step in. Countries across the globe must unite to ensure that the future of AI is governed by principles that prioritize human safety and security over corporate interests.

The United Nations, global tech leaders, and forward-thinking governments must collaborate to fill the regulatory vacuum left by Governor Newsom’s veto. We cannot wait for the next crisis to force our hand. We must act now, with urgency, to establish a global framework for AI governance that includes:

  • Internationally enforceable AI regulations: We need a comprehensive framework that can be adopted globally to ensure that AI systems, especially the most powerful ones, are subject to strict oversight, risk assessments, and fail-safe mechanisms to prevent harm.

  • Whistleblower protections: Employees in the AI industry must be able to report ethical or safety violations without fear of retaliation, ensuring transparency and accountability in AI development.

  • Equitable access to AI resources: Public AI research clusters, like the proposed CalCompute, must be supported to prevent innovation from being monopolized by a few corporations. This would democratize AI development and allow for broader participation in ensuring AI is developed responsibly.

Senator Scott Wiener’s Full Official Statement

Senator Scott Wiener, the author of SB 1047, released a statement following Governor Newsom’s veto. His words capture the urgency and danger this decision poses, not just for Californians but for the world. Below is his full official statement:

"This veto is a setback for everyone who believes in oversight of big corporations making critical decisions affecting public safety, well-being, and the future of the planet. The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While major AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary industry commitments are not enforceable and rarely work well for the public. This veto confronts us with the troubling reality that companies seeking to create extremely powerful technology are subject to no binding restrictions from U.S. policymakers, particularly given the continued paralysis of Congress in meaningfully regulating the tech industry.

This veto is a missed opportunity for California to once again become a leader in innovative tech regulation—just as we were for data privacy and net neutrality—and we are all less safe because of it.

At the same time, the debate around SB 1047 has significantly advanced the issue of AI safety on the international stage. Major AI labs have been forced to clarify the protections they can offer the public through policies and controls. Leaders from civil society, from Hollywood to women’s groups to youth activists, have found their voices in advocating for proactive, common-sense technology safety measures to protect society from foreseeable risks. The work of this incredible coalition will continue to bear fruit as the international community reflects on the best ways to protect the public from the risks presented by AI.

California will continue to be a leader in this conversation—we are not stopping."

-Senator Scott Wiener

Escalating the Fight for AI Safety: My Appeal to President Macron and the Call for International Intervention

Subject: Urgent Follow-up on Case Number I017807: Global Implications of Governor Newsom's Veto of SB 1047 and the Need for International Action

Dear President Macron,

I am writing to follow up on my phone call regarding case number I017807, which is currently being escalated. In light of recent developments, I feel compelled to provide additional details concerning a critical decision made by Governor Gavin Newsom of California, which has profound implications for both the safety of the global population and the future of artificial intelligence (AI) governance.

On September 29, 2024, Governor Newsom vetoed Senate Bill 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” This decision represents a grave risk not only for the residents of California but also for the world at large, as the unchecked development of advanced AI systems continues to accelerate without meaningful regulatory safeguards.

Senator Scott Wiener, the legislator responsible for SB 1047, captured the alarming reality in his statement following the veto:

"This veto is a setback for everyone who believes in oversight of big corporations making critical decisions affecting public safety, well-being, and the future of the planet. The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. (...) This veto confronts us with the troubling reality that companies seeking to create extremely powerful technology are subject to no binding restrictions from U.S. policymakers...”

SB 1047 passed both chambers of the California Legislature with broad support, which demonstrates the consensus on the need for immediate action. By vetoing this bill, Governor Newsom has left California, and by extension the world, vulnerable to the unchecked risks posed by advanced AI systems.

Why the veto endangers Californians and the world:

  • Unchecked AI risks: Without SB 1047’s regulatory safeguards, there is no legal framework requiring AI developers to assess, mitigate, or control the risks posed by their models. The veto ignores critical procedures such as third-party audits, risk assessments, and the ability to stop dangerous AI models in real time.

  • No protection for whistleblowers: SB 1047 would have provided strong protections for whistleblowers in the AI industry, ensuring employees could report unethical practices or safety violations without fear of retaliation.

  • Missed opportunity for public collaboration (CalCompute): The bill proposed a public AI development cloud, democratizing access to AI resources. The veto ensures that powerful AI tools remain concentrated in the hands of a few corporations.

  • Preventing catastrophic AI failures: The bill aimed to prevent large-scale societal harm, such as AI models that could result in significant economic damage, mass casualties, or disruptions to critical infrastructure.

  • The international dimension of AI safety: SB 1047 would have established California as a global leader in AI governance, setting a precedent for other countries. The veto delays this necessary progress, leaving the global community exposed to the risks of unregulated AI.

The urgency of this matter cannot be overstated. The time to act is now, before AI technologies evolve beyond our ability to control them. I urge your office to consider escalating this issue to the United Nations and seeking international intervention to ensure that responsible AI governance is enforced globally. Given the leadership that France has shown in matters of international policy, your involvement could help reverse this dangerous decision or at least ensure that global pressure is applied to prevent further missteps.

For more details, please refer to my latest article on the subject: The World, at the Mercy of California. The page is translatable into French for ease of reference.

I trust that with your influence and leadership, we can secure the international collaboration necessary to ensure AI is developed in a way that prioritizes human safety and security over corporate interests. The future of technological innovation must be both safe and equitable for all, and we cannot afford to delay any longer.

Sincerely,
Kevin Bihan-Poudec
Advocate for Ethical AI and Workforce Preservation
Voice For Change Foundation

The Future Is Now

The risks posed by unregulated AI are not hypothetical—they are happening now. The veto of SB 1047 is a missed opportunity for California to lead the world in AI governance, but it’s not too late for the global community to step in. The time for international intervention is now.

If we fail to act, we risk waiting until the very systems we’ve built to enhance our lives end up controlling them, or worse, destroying them. We must demand that our leaders, both local and global, prioritize AI safety over corporate interests. This is not just a fight for Californians—it’s a fight for the future of humanity.

The world is at the mercy of California’s decision, but together, we can ensure that the future of AI is one that protects and uplifts humanity rather than endangers it.

#DemandAIRegulation
