The Existential Threat of Rogue AI: Why We Need Immediate Government Oversight on AI Models


In a conversation with a friend recently, I realized just how split opinions are on the dangers of artificial intelligence. While we discussed the very real potential for an AI apocalypse, my friend shrugged it off, saying humans with guns are far more dangerous. But that’s exactly the problem—we’re looking at immediate threats, distracted by the latest crisis, while something far bigger is happening right under our noses. It’s a humanity-level existential threat, yet so many seem unaware or unconcerned.

We’re not talking about just another technological innovation here. This is different. Once AI systems—especially frontier models costing more than $100 million to train, the threshold SB 1047 targets—reach a certain level of sophistication, we can’t just "turn them off." The idea that tech CEOs are merely toying with the installation of an AI “kill switch” (Fortune.com) should terrify us all. Do you honestly think a system capable of learning at exponential rates won’t find a way around that?

My friend said they were more concerned about school shootings than AI. Of course, gun violence is a pressing issue, but imagine adding autonomous humanoid robots to the mix—robots that can be hacked or, worse, make decisions based on flawed data or bias embedded in their systems. How do we stop them once we’ve already let them loose?

The risk here isn’t just about losing jobs or automating industries. It’s the existential question of whether we’ll lose control altogether. Will AI systems designed to help eventually decide we’re in the way of their “mission”? There’s a frightening possibility that this could end up far worse than any dystopian sci-fi movie we’ve seen. Congress knows this and raised alarms last month over Chinese humanoid robots on U.S. soil. That’s not paranoia—that’s a real threat.

Yet while these discussions happen in the background, we’re distracted by political theater and the latest election cycle. Our focus shifts constantly, but this issue with AI is one we can’t afford to overlook. How long before we can no longer control what’s been unleashed? My concern isn’t just the potential of AI itself but the lack of regulation—the willingness to let it evolve unchecked, trusting that the tech titans have our best interests at heart.

And here’s where Governor Gavin Newsom enters the conversation. He stands on a precipice with SB 1047, California’s AI safety bill. His decision could either protect us from future chaos or usher in a world where AI is left to its own devices—a risk with potentially catastrophic consequences for humanity. How does he sleep at night knowing the weight of this decision? Why is he hesitating to sign the bill?

We have to ask—how much influence are the tech CEOs commanding? How much money is being funneled to keep AI unregulated and out of government oversight? The future of humanity is worth more than corporate profits. We must demand that our leaders, starting with Governor Newsom, take action before it’s too late.

This is not a decision that can be delayed any longer. Our safety, our future, and the survival of humanity depend on it. We must wake up before the risks become irreversible.

Use the following hashtags on your socials and voice your concerns to put pressure on Gavin Newsom to sign SB 1047 by September 30th, 2024. Be an advocate for the safe use of artificial intelligence. It is your responsibility to help ensure a prosperous future for our children and generations to come.

We must #DemandEthicalAI and #ActOnAI to #SignSB1047 and #ProtectOurFuture.
