Governor's “Safe” AI Initiatives: Not So Safe After All

Original ChatGPT conversation can be found here.


Kevin:

How does this plan differ from SB 1047? The following was posted today on Gavin Newsom's website as his new initiatives to "advance safe use of AI." I am also providing you with the content of SB 1047, attached as a PDF. Find similarities and differences, and pinpoint what may be lacking in the Governor's new initiatives that SB 1047 includes. Do Newsom's new initiatives include the ability to put a stop to large AI models that may cause harm?

ChatGPT:

Based on the analysis of Governor Newsom's initiatives and SB 1047, here are the key similarities, differences, and what might be lacking in Newsom’s new initiatives compared to SB 1047:

Similarities:

  1. Regulation and Guardrails: Both SB 1047 and Newsom’s initiatives focus on creating guardrails for the development and deployment of GenAI (generative artificial intelligence) to protect Californians from the risks posed by AI. Newsom’s initiative includes experts like Dr. Fei-Fei Li to guide this process, while SB 1047 establishes the Frontier Model Division to oversee high-compute AI models.

  2. Risk Assessment: Both efforts aim to assess the risks AI poses to critical infrastructure, the workforce, and other sectors. SB 1047 mandates comprehensive risk assessments and third-party audits of AI models exceeding $100 million in computing costs, while Newsom's initiatives call for risk assessments of potential catastrophic events, especially risks GenAI could pose to critical infrastructure.

  3. AI Safety Focus: Both SB 1047 and Newsom's initiatives stress the importance of safety, with SB 1047 outlining a formal "safety and security protocol" and Newsom highlighting the need for responsible AI development.

Differences:

  1. Scope of Regulation:

    • SB 1047: Specifically targets "frontier models," advanced AI systems costing at least $100 million in computing resources to develop. It demands comprehensive oversight and the establishment of a state division to monitor such models. SB 1047 also mandates full shutdown capabilities to prevent potentially harmful AI from being deployed or continuing to operate if deemed too risky.

    • Newsom's Initiatives: Focus more broadly on GenAI, aiming to expand risk assessments across sectors such as energy and water infrastructure and to increase collaboration with experts, academia, and industry. Newsom’s approach is less restrictive: it does not target specific AI models as SB 1047 does, and it sets up an advisory structure instead of enforcing direct shutdown mechanisms.

  2. Whistleblower Protections and Auditing:

    • SB 1047: Includes whistleblower protections for employees in AI companies and mandates regular third-party audits starting in 2028. These audits would be overseen by a newly established board.

    • Newsom's Initiatives: Do not explicitly mention whistleblower protections or third-party auditing in the announcement. Instead, the focus is on working with leading experts to shape policies and guidelines without a rigid auditing process.

  3. Full Shutdown Mechanism:

    • SB 1047: Includes a requirement for developers to implement a "full shutdown" capability for covered AI models. This allows the government to enforce the cessation of AI operations if the AI system poses a critical risk of harm.

    • Newsom's Initiatives: While they aim to assess and mitigate risks, there is no mention of enforcing a full shutdown of harmful AI models as proposed in SB 1047. Instead, Newsom focuses on a flexible, adaptive approach that may lack the direct intervention tools SB 1047 offers.

Lacking in Newsom's New Initiatives:

  1. Full Shutdown Capability: One of the major aspects lacking in Newsom’s initiatives is the explicit ability to shut down AI systems that pose a significant risk. SB 1047 provides a structured approach to enforce shutdowns when AI systems are deemed hazardous, while Newsom’s initiatives do not mention such stringent actions.

  2. Comprehensive Auditing and Governance: Newsom’s new initiatives lack the detailed governance structure laid out in SB 1047, where the Frontier Model Division would oversee model development, conduct audits, and ensure compliance. Newsom’s initiatives appear more focused on setting guidelines with expert input rather than enforcing them through a formal division.

  3. Whistleblower Protections: Newsom’s announcement does not address protections for whistleblowers who may expose unsafe AI practices within organizations. SB 1047 includes explicit whistleblower protections, ensuring that employees can safely report AI safety concerns without retaliation.

Conclusion:

While both Governor Newsom’s new initiatives and SB 1047 aim to regulate AI and protect against its risks, SB 1047 provides a more detailed, enforceable framework, particularly regarding high-risk AI models, third-party audits, whistleblower protections, and the ability to shut down dangerous AI systems. Newsom’s initiatives emphasize expert collaboration and broad risk assessments but lack specific enforcement mechanisms like the full shutdown provisions outlined in SB 1047.

Kevin:

According to the above, which initiative seems safer? And would Gavin's new initiatives to advance the "safe" use of artificial intelligence technologies prevent scenarios such as rogue AI from happening?

ChatGPT:

Based on the comparison of SB 1047 and Governor Newsom's new initiatives, SB 1047 appears to be safer in terms of preventing high-risk scenarios like the development of rogue AI. Here’s why:

Why SB 1047 Seems Safer:

  1. Comprehensive Oversight and Governance: SB 1047 creates the Frontier Model Division and establishes clear governance structures to oversee the development of large-scale AI models. It requires risk assessments, formal governance protocols, third-party audits, and ongoing monitoring of advanced AI models that meet specific thresholds (such as those costing over $100 million to develop). This ensures that any advanced AI system posing significant risks, including the potential for rogue behavior, is carefully regulated before and after deployment.

  2. Full Shutdown Capability: SB 1047 mandates that developers of high-compute AI systems implement a full shutdown mechanism, allowing the cessation of any AI system that poses an unreasonable risk of harm. This includes stopping the model's operation on all devices it is deployed on, which would be crucial in preventing or halting the activities of a rogue AI. Governor Newsom's initiatives do not mention such a direct intervention tool, making SB 1047 a more robust safeguard in this area.

  3. Whistleblower Protections and Accountability: SB 1047 includes provisions that protect whistleblowers who report unsafe AI practices, ensuring that internal issues with AI systems can be brought to light without fear of retaliation. This accountability measure strengthens the overall safety net and helps identify potentially dangerous developments early.

  4. Third-Party Audits and Certifications: By requiring third-party audits starting in 2028, SB 1047 ensures continuous external oversight of AI models, providing an added layer of security. The involvement of independent auditors helps to verify compliance with safety standards and prevents unchecked AI deployments. In contrast, Newsom’s initiatives focus on consulting experts and do not mandate such audits.

Governor Newsom's New Initiatives:

While Governor Newsom’s initiatives emphasize creating guardrails for the safe use of GenAI, they lack the stringent regulatory measures found in SB 1047. The focus is more on collaboration with experts to shape guidelines, which could lead to adaptable solutions, but without the full shutdown mechanisms, third-party audits, or strict oversight bodies, these initiatives might fall short in preventing scenarios like rogue AI.

Can Newsom’s Initiatives Prevent Rogue AI?

While Newsom’s initiatives promote responsible AI development by involving top experts like Dr. Fei-Fei Li and Tino Cuéllar, they do not include direct intervention capabilities (such as full shutdown requirements or stringent auditing of AI models), which are critical for addressing risks associated with rogue AI. Without these tools, Newsom's initiatives might not be sufficient to prevent or respond to rogue AI scenarios effectively.

Conclusion:

SB 1047 offers a safer and more comprehensive regulatory framework for preventing high-risk scenarios like rogue AI, with specific mechanisms to halt dangerous models and enforce accountability. Newsom’s initiatives, while valuable for shaping responsible AI development, lack the more stringent tools necessary to ensure complete safety from potential catastrophic AI events like the rise of rogue AI.

Kevin Bihan-Poudec
Southern California, United States
Sunday, September 29, 2024

President Emmanuel Macron
President of the French Republic
Palais de l'Élysée
55 Rue du Faubourg Saint-Honoré
75008 Paris, France

Reference: I017807

To: Office of President Emmanuel Macron
Subject: Urgent Appeal for Intervention: Latest Research on AI Risks and the Threat of Rogue AI

Dear President Emmanuel Macron,

I hope this letter finds you well. I am reaching out to you not only as an advocate for ethical artificial intelligence (AI) but also as someone deeply concerned about the potential risks AI poses to humanity if left unregulated or inadequately governed. As one of the world’s leaders in ethical AI, France has consistently championed responsible innovation. I believe your leadership in this area could be instrumental in addressing a pressing issue now emerging in California, which may have global implications.

Recently, Governor Gavin Newsom of California announced a series of initiatives aimed at advancing the "safe" use of artificial intelligence technologies. While this may appear to be a positive step, upon closer inspection, these initiatives fall short of providing the necessary safety measures to truly protect humanity from potential catastrophic risks—especially when compared to the more stringent bill, SB 1047, which was recently vetoed by the Governor. 

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, outlined a comprehensive regulatory framework designed to prevent the misuse of large-scale AI models, particularly those that pose the highest risk. This bill included mandatory third-party audits, whistleblower protections, and, crucially, a full shutdown mechanism to immediately halt AI models that could cause significant harm. These are safeguards that are vital for preventing worst-case scenarios like rogue AI—an AI system that could operate autonomously in ways that threaten humanity.

In contrast, Governor Newsom's initiatives lack these rigorous enforcement mechanisms. While they call for collaboration with experts and expanded risk assessments, they do not include direct intervention tools, such as the shutdown capability or third-party audits that SB 1047 mandated. This omission is alarming, as it leaves the door open for unchecked developments in artificial intelligence that could have dire consequences for public safety, security, and even human survival.

As a global leader in ethical AI, I believe it is appropriate for you to intervene in this situation. France has always been a trailblazer in pushing for the responsible use of emerging technologies, and your influence could play a pivotal role in ensuring that California adopts the necessary measures to protect not just its own citizens but humanity as a whole.

I invite you to read more about the research I’ve conducted leveraging ChatGPT at the following address: https://www.voiceforchangefoundation.org/advocacy/governors-safe-ai-initiatives-not-so-safe-after-all. Please note that the page is translatable into French for your convenience.

Thank you for your time and consideration. I trust that under your leadership, we can continue to make strides toward a future where AI serves the best interests of humanity, safely and ethically.

Sincerely,
Kevin Bihan-Poudec
Advocate for Ethical AI and Workforce Preservation
Founder, Voice for Change Foundation


Kevin Bihan-Poudec
Southern California, United States
Sunday, September 29, 2024

President Emmanuel Macron
President of the French Republic
Palais de l'Élysée
55 Rue du Faubourg Saint-Honoré
75008 Paris, France

Reference: I017807

To: Office of President Emmanuel Macron
Subject: Urgent Appeal for Intervention: Latest Research on AI Risks and the Threat of Rogue AI

Mr. President of the Republic,

I hope this letter finds you in good health. I am writing to you not only as an advocate for ethical artificial intelligence (AI), but also as someone deeply concerned about the risks AI could pose to humanity if it remains unregulated or insufficiently governed. As a global leader in ethical AI, France has consistently championed responsible innovation. I believe your leadership in this area could be essential in addressing an urgent issue now emerging in California, one that may have global implications.

Recently, Governor Gavin Newsom of California announced a series of initiatives aimed at promoting the "safe" use of artificial intelligence technologies. While this may appear to be a positive step, closer analysis reveals that these initiatives do not provide the safety measures needed to truly protect humanity from potentially catastrophic risks, particularly when compared with bill SB 1047, recently vetoed by the Governor.

Bill SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, established a comprehensive regulatory framework designed to prevent the misuse of large-scale AI models, particularly those posing the highest risks. The bill included mandatory third-party audits, whistleblower protections, and, above all, a full shutdown mechanism to immediately halt AI models likely to cause significant harm. These safeguards are essential for preventing worst-case scenarios, such as the emergence of a rogue AI: an AI system that could operate autonomously and threaten humanity.

By contrast, Governor Newsom's initiatives lack these rigorous enforcement mechanisms. While they call for collaboration with experts and expanded risk assessments, they do not include direct intervention tools, such as the shutdown capability or the third-party audits required by SB 1047. This omission is alarming, as it leaves the door open to unchecked developments in artificial intelligence, with potentially grave consequences for public safety, national security, and even human survival.

As a global leader in the field of ethical AI, I believe it would be appropriate for you to intervene in this situation. France has always been a pioneer in the responsible use of emerging technologies, and your influence could play a decisive role in ensuring that California adopts the measures needed to protect not only its own citizens but humanity as a whole.

I invite you to learn more about the research I have conducted using ChatGPT at the following address: https://www.voiceforchangefoundation.org/advocacy/governors-safe-ai-initiatives-not-so-safe-after-all. Please note that the page can be translated into French for your convenience.

Thank you for your time and consideration. I am confident that under your leadership, we can continue to make progress toward a future in which AI serves humanity's highest interests, safely and ethically.

Sincerely,
Kevin Bihan-Poudec
Advocate for Ethical AI and Workforce Preservation
Founder, Voice for Change Foundation
