Determined to Sound the Alarm: Ensuring President Macron Acts on the Rogue AI Threat


A Call to Action: Reaching Out to President Macron About AI Safety

In an urgent effort to bring attention to the rising risks associated with unregulated artificial intelligence (AI), I recently took the step of contacting President Emmanuel Macron’s office directly. My outreach was not just another advocacy effort but a call to protect humanity from the potential catastrophic dangers of rogue AI.

The letter I sent (featured below) addressed a crucial topic: the need for immediate action on AI regulation, especially given California's recent failure to implement the stringent safeguards necessary to curb the most dangerous AI systems. After Governor Gavin Newsom’s veto of SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, I knew it was imperative to bring international attention to the issue. The bill, had it been signed into law, would have implemented rigorous measures, such as full shutdown mechanisms for AI models that pose significant risks. Unfortunately, without it, we face an uncertain future where unchecked AI systems could have dire consequences for security, privacy, and humanity as a whole.

Eager to convey this urgency, I followed up with l’Élysée by phone after submitting my letter. However, the pace of the conversation made it difficult to express the gravity of the situation. The agent on the other end pressed for quick answers, leaving me neither the time nor the space to elaborate on why President Macron, specifically, is so well positioned to step in and lead this effort. Instead, I had to state the subject of my request and give only a brief summary of its urgency. The conversation felt rushed, and the importance of the issue I was raising got lost in the rapid exchange.

That said, the outcome of the call was somewhat positive. After some insistence on my part, I was told my case would be “escalated.” Initially, the representative said it might take a week or two to receive a reply, but I underscored the pressing nature of this issue and the global risks it poses. I plan to follow up in about three business days, giving the French administration ample time to address this urgent concern.

I remain hopeful that, as the message reaches the higher offices, the urgency and potential global impact of this situation will become clearer to those in power. France, as a global leader in ethical AI, has an opportunity to play a crucial role in shaping a safer future for humanity, and I firmly believe President Macron’s intervention could be instrumental.

The experience left me reflecting on the gap between what is truly at stake and how it's being perceived by those in positions of power. As I wait for a further response, I remain determined to keep pushing for awareness and action on this critical issue. Whether or not this call leads to a direct intervention from President Macron, I believe we must continue to demand accountability from world leaders to ensure AI's development and deployment remains within the bounds of safety, ethics, and humanity’s best interests.

I invite you to read the letter I sent and join me in this crucial conversation about the future of artificial intelligence. You can also explore the research I’ve conducted comparing Governor Gavin Newsom’s new initiatives to advance “safe” and responsible AI and protect Californians against SB 1047, examining how well those initiatives would hold up in the event of rogue AI. Together, we can advocate for a future where technological advancement doesn’t come at the expense of humanity’s safety.

Stay tuned for updates as I continue this dialogue with the leaders of the world.

Sincerely,

Kevin Bihan-Poudec

Advocate for Ethical AI and Workforce Preservation

Founder, Voice for Change Foundation
