Voicing the Unspoken: The Human Cost of Unregulated AI and the Urgency for Change

In an era defined by rapid technological advancements, we find ourselves at a critical juncture in history. The decisions we make today regarding artificial intelligence (AI) will shape the future of humanity for generations to come. As the founder of the Voice For Change Foundation, I feel compelled to speak out about the very real concerns that are not just hypothetical, but are already impacting people's lives, including my own.

Today, I am sharing a deeply personal account from a recent therapy session, where I discussed the challenges I’m facing and the broader implications of AI on our society. This is not to scare, but to engage—to start a conversation that is long overdue.

Below, you’ll find the first two parts of my “AI Podcast Therapy,” a four-part series that will be released over the next couple of weeks. Each part delves deeper into the ethical and societal challenges of artificial intelligence, leading up to the critical deadline for Governor Gavin Newsom’s decision on whether to sign SB 1047 into law by September 30th, 2024.

This recording was shared with key decision-makers, including Governor Gavin Newsom, the Biden-Harris administration, and French President Emmanuel Macron. This immediate access ensures that those in positions of influence have the complete picture while there is still time to act. The recordings are available with captions in English, French, and Spanish, and you can choose your preferred language on Vimeo. The recording features my authentic voice and a modified AI-generated version of my therapist’s voice to comply with privacy regulations. More details on that can be found at the bottom of this blog post.

A Sociopolitical SB 1047 Podcast, Now Streaming:

Episode 1: "Navigating the AI Revolution: A Personal Journey of Survival and Advocacy"

Episode 2: "The AI Reckoning: Navigating the Political and Ethical Dilemmas of an Unstoppable Force"

Episode 3: “Facing the AI Anxiety: Compartmentalizing Fear and Seeking Solutions”


“The way that a lot of folks get through their day is that you have to compartmentalize.” -Therapist

“Sometimes we get preoccupied by thoughts that keep us from dealing with stuff that might feel even more overwhelming.” -Therapist

Episode 4: “Crossroads of Democracy and Technology: A Fight for the Future of Humanity”

The Talking Points We Cannot Ignore

During my therapy session, several key issues emerged that I believe we all need to be concerned about:

The Impact of AI on Jobs

Thousands of workers are being displaced weekly due to AI and automation, especially in the tech sector. The unemployment crisis is growing, and people are struggling to find new employment. This is not a distant future scenario—it is happening now.

The Urgent Need for AI Regulation

AI technology is advancing at a pace that outstrips our current regulatory frameworks. Without proper controls, AI could grow beyond our ability to manage it. The risks are not just theoretical—there are real concerns about AI models outgrowing themselves, potentially taking over businesses, and creating dependencies that could have catastrophic consequences.

The Mental Health Crisis

The stress and anxiety caused by job loss and financial instability are further aggravated by the existential threat posed by AI. As more people face unemployment due to the unchecked implementation of artificial intelligence and automation, the risk of a mental health crisis becomes increasingly real.

It is not unreasonable to worry that the suicide rate could rise as AI continues to disrupt industries, leaving people grappling with a loss of identity and relevance in a labor market where their skills have been made obsolete. Without reskilling and upskilling programs in place, many will struggle to adapt. Although the Biden-Harris administration included these programs in the Artificial Intelligence Accountability Act of 2024, the bill remains in the introductory stage on the U.S. Congress website, struggling to gain the attention it deserves amid partisan gridlock, especially in an election year. These concerns must be addressed before the workforce is left ill-prepared, en masse, to face the labor market of tomorrow.

I emphasized this pressing issue in my open letter titled “Act Now” on page 21, addressed to the U.S. Congress on February 18th—two days before the United States officially announced the creation of a bipartisan Task Force on Artificial Intelligence. The letter was shared with key members of Congress, as well as the current President of the United States, Joe Biden.

The Need for a Binding Agreement

It is crucial that we have a binding agreement between the U.S. government, tech CEOs, and the public on how we want AI to impact our lives. This conversation must happen now, before it’s too late. We cannot allow a small group of tech executives to make decisions that will affect the future of all humanity without broader input and oversight.

More detailed talking points can be found further down.

Global Advocacy for Ethical AI: Reaching Out to California's Governor Gavin Newsom, the Biden-Harris Administration, and French President Emmanuel Macron

In addition to reaching out to Governor Gavin Newsom regarding the importance of signing Senate Bill 1047, I have also shared a letter with the Biden-Harris administration. This letter emphasizes the stakes of unregulated AI, outlining how failure to act now could lead to profound societal consequences, including job displacement, economic instability, and threats to basic human rights.

The goal of this letter is to bring the Biden-Harris administration into the conversation and to place pressure on Governor Newsom to ensure that this landmark bill is signed into law. The risks of inaction are too great to ignore, and the moral implications of failing to regulate AI could haunt us for years to come. With support from the federal government, we can set a precedent that AI must be developed and implemented with the well-being of society in mind.

Alongside my efforts within the U.S., I have reached out to French President Emmanuel Macron, recognizing France's leadership in the field of ethical AI. An outside leader's perspective, particularly one from a nation that prioritizes responsible technological development, could provide a crucial influence in shaping the future of AI regulation in the United States. President Macron’s involvement would highlight the global significance of this issue and reinforce the need for international collaboration in ensuring that AI is implemented with the safety and well-being of humanity at the forefront. France’s commitment to ethical AI could help urge Governor Newsom to take swift action, as the decisions made in California will undoubtedly have ripple effects across the globe. The future of AI requires not only national but also global leadership, and President Macron’s voice could be instrumental in guiding this conversation forward.

Why I Am Speaking Out

I have reached a point where I feel I have no choice but to put this message out there. It’s not easy to share personal struggles, especially those that touch on such profound issues. But I believe that by breaking the silence, I can help others understand the true gravity of the situation we are facing.

I am fully aware that sharing this type of content online comes at a personal cost. Publicly discussing my mental health and the challenges I face could impact my ability to find work and may strain personal relationships. However, I am willing to take this risk because there is too much at stake not to initiate this crucial conversation. The safeguarding of human rights and societal well-being is a cause worth any personal sacrifice.

This article, along with the recorded therapy session, has been shared with California Governor Gavin Newsom in the hope of getting his attention and educating him on the risks that could significantly impact the lives of Americans if AI continues to remain unregulated.

The ultimate goal is to influence the outcome that is within his power—signing SB-1047 into law. The passing of this bill is critical to placing necessary boundaries on AI development and ensuring the safety and security of our society.

This is not about spreading fear—it’s about fostering dialogue. We must start talking to our leaders, to our communities, and to each other about the impact of AI. We need to be proactive in shaping the future rather than being passive bystanders. The decisions we make today will determine whether AI becomes a force for good or something far more dangerous.

A Call to Action

I am calling on everyone—citizens, leaders, and tech innovators—to engage in this crucial conversation. We must demand ethical AI and ensure that its development and deployment are aligned with our values as a society. The future is not set in stone, but the time to act is now. We cannot afford to wait until it’s too late.

Let’s come together to ensure that AI serves humanity, rather than the other way around. We have a responsibility to future generations to make sure that the tools we create today do not become the chains that bind them tomorrow.

In addition to starting the conversation, I encourage you to take immediate action by contacting Governor Gavin Newsom directly and urging him to sign SB 1047 into law. The power to set necessary boundaries on AI development and protect our society rests in his hands, but he needs to hear from the people who will be most affected by this decision—us. Visit his website at www.gov.ca.gov/contact/ to express your support for this critical bill. Let Governor Newsom know that the safety of our society, the workforce, and future generations depends on responsible AI regulation. Together, we can ensure that SB 1047 becomes a reality. Thank you for your continued support.

I am a Voice For Change. Be part of the Ethical AI movement and be one too.

- Kevin Bihan-Poudec

#DemandEthicalAI

#VoiceForChange

More Detailed Talking Points

  • Impact of AI on Jobs and Society: Kevin reflects on the AI revolution and how it's affecting people's ability to find work, particularly in the tech industry. He emphasizes that the issue is widespread, though few are talking about it publicly.

  • AI as a Global Threat: He expresses concern over the lack of boundaries for AI development, drawing parallels to historical revolutions and the potential for societal collapse if AI remains unchecked.

  • SB 1047 – AI Regulation: Kevin stresses the importance of SB 1047, a bill that aims to put safeguards on AI for national security. He sees this as critical to preventing AI from spiraling out of control, which could lead to catastrophic consequences.

  • Weaponization of AI: Kevin warns about the potential of AI to become a lethal force if left unregulated, comparing it to the development of nuclear weapons. He highlights the urgency of implementing a kill switch or similar safety mechanisms.

  • Political Inaction and Manipulation: Kevin voices frustration over politicians and tech companies prioritizing profit over humanity’s future. He questions whether the public fully understands the dangers posed by unchecked AI development.

  • Economic and Social Fallout: He discusses the future implications of AI on the economy, particularly how AI may lead to mass unemployment and exacerbate inequality. He points out the prevalence of ghost jobs—fake job listings meant to gather data without actual hiring.

  • Advocacy for AI Awareness: Kevin advocates for greater public awareness and action regarding AI, urging others to educate themselves about AI to avoid job displacement or difficulties reentering the labor market. He fears people will only wake up to the threat when it is too late.

  • Future of Humanity: Kevin shares his fears about the fate of humanity, especially if AI progresses without appropriate controls, drawing on examples from both real life and science fiction to underscore the urgency of the issue.

  • Call for Collective Action: He calls for a social movement to demand ethical AI, emphasizing the importance of SB 1047 and similar efforts to guide AI development responsibly.

Full Transcript of My Therapy Session

Part 1 Title: "Navigating the AI Revolution: A Personal Journey of Survival and Advocacy"

00:00:00

(Kevin Bihan-Poudec laughing) I'm nervously laughing because I need to figure out like if I, you know, make this decision, this decision, this could occur, and if I’m in this situation but that other thing occurs, how would I get out of that, this plausible situation? I'm like, I already have to deal with the BS I’m dealing with now. I can’t think about five other BS scenarios that I have to (deal with). It’s exhausting. So when my mom was like:

00:00:29

"Kevin, I don’t know how you do it all." I was like: I was thrown into this situation I guess for a reason. I don’t know what it is but it has to be because it doesn't make any sense to me what I have to do (continues laughing). I'm nervously laughing about it.

00:00:46

Therapist: "I think sometimes that’s all we can do, you know."

Kevin: (giggling) I know! I’m like, it's so ridiculous what I have to go through, like, literally? I don’t know how other people, because, who are going through this same boat as I am; losing your job, not being able to find another one, especially in the tech sector, like, what do they do? I want to hear their stories. I’m a single guy. Thank GOD I don’t have a child or like a mortgage or putting, you know, having the responsibility of putting kids through college and stuff like this but, this has to be a scenario out there, according to statistics, right?

00:01:22

There are thousands of us every week that get let go, for the last year, that's a fact. These are numbers that are announced in the news, in articles saying, you know, thousands of workers were fired or whatever. So I already know there’s some people struggling as much as I am, and sometimes when I’m in these situations, I'm like: "Oh, I guess I’m just part of the people who are suffering from this, part of this silent AI revolution." It's literally what it is (giggling) and it's just the beginning of it.

00:01:52

So I was talking to my friend's partner yesterday, in San Diego; he's super smart, and he was just saying: "Oh well, it's just going to plateau," and I'm like, "plateau?" How can you talk about a plateau when, to me, there are no signs of this thing slowing down? It's only going to get worse. I mean, I posted an article on my nonprofit organization's website, which is just me, myself and I, about...

00:02:29

...Having to deal with this situation. To me, it’s almost like I’m living a nightmare.

00:02:38

Therapist: "You are in survival mode."

Kevin: Yeah but I’m not alone. And when I know I’m not alone, it makes me feel less lonely somehow but, yeah what I was getting at with my point was, I don’t know how long it's going to take for people to speak up, maybe three to four years? I don't know how long...because it is bound to happen.

00:03:06

When you have a bigger portion of the population going through the same struggle pretty much at the same time about the same issue, it’s called: a movement, like, social movement. You can't deny this. That's what happened during the French Revolution. We killed Louis the XIV (14th), because we were starved to death, right? He didn't take care of his own people, so we were all in the same boat, we were all at the bottom, when the 1%, whatever. It’s exactly the same scenario, again. History repeats itself, right?

00:03:57

So here I am making a joke: "Yeah, whenever you guys are ready, let's do this!" but I guess it's just me, myself and I right now which I know it's not. But I've looked on YouTube, I told you this, I looked online, like people like, I want to hear people's stories, like, have they been interviewed about struggling to find a job in the tech sector? You can't find any of this information, it's almost to wonder, like, (giggling) is Mark Zuckerberg, like, or the tech CEOs, like, filtering out this so people can't find out they are not alone in this situation, you see what I mean? It doesn't make sense to me. I can not be the only person who's talking about this. There's no way.

00:04:43

So...where, where are the others? Because there definitely are, like, thousands of them. I don’t understand. I, I feel so lonely. I feel like I’m fighting this thing alone...(sighing)

00:05:01

So I guess I just have to be patient. I hope this will happen because I'm planting seeds right now, you know. I'm not a prominent, like an important figure or someone famous or with a following or rich or whatever. I literally wrote a letter to Gavin Newsom (California's Governor) yesterday, on Labor Day. That bill, SB 1047, is on his desk ready for him to sign, in regards to putting boundaries on, the safety for AI. You know, kind of things like, oh well, if the AI creates a model that outgrows itself and starts building, I don't know, like a model...to weaponize, it could do that.

00:05:55

So, what it's trying to do is, well, we need to have a "kill switch" button where the government can come in and say: "Hey, whatever you guys have been working on, it's a concern of national security kind of thing, like." We need, the government needs to have supervision on this, otherwise it's going to be, The Wild Wild West.

00:06:13

Kevin: I mean, this is not just a bill, and I think people don't understand, they're not getting it. Read my article, like, you'll understand, it's a big deal! We're talking about California leading, putting boundaries on all these companies, you know, creating these models, to the rest of the world, the Silicon Valley, you know. So if California is not doing this? It literally sends the message to the world: "Well do whatever you want with your AI. And let's see, let's see, let's see what happens." Let's play with fire even more (nervously laughs).

00:06:56

Kevin: Like it's almost upsetting, you know? It's like, "Oh yeah, we played with creating the atomic bomb, you know, let's see if we can create a technology that can eradicate the human species, let's get really close to it." Okay, well, we're two minutes away from Putin pressing that button for World War III to start. My family's back in Europe. Freaks me out. And if this were to occur, I mean the whole world is over, because then the US would go to war, you know, China, Japan, etc. It's all over. So why are we doing this again with AI?

00:07:32

Kevin: Why have we played with fire to the point where we created a technology that could take over us? It's ridiculous what we did, or what "they" (tech CEOs) did. And there's an ex-executive of Google (Mo Gawdat) who has a whole video on YouTube about this saying: "We Fu**ed up," he literally said these words. We released a technology into the world that we shouldn't have, and now it's getting out of control.

00:08:01

News Anchor: “AI could manipulate or figure out a way to kill humans?”

In 10 years, we'll be hiding from machines. If you don't have kids, maybe wait a few years until we have some certainty. I don't know how else to say it. It makes me emotional. We f***** up. We always said, "Don't put AI on the open internet until we know what we're doing." The government needs to act now. Honestly, we're late.

- Mo Gawdat, Former Google Executive, AI Expert and Author

00:08:36

Kevin: And to me, it's absurd that I have to think about this because of what I know. And when I go out, you know, it's like normal life out there, like people are just going on with their day, going to the grocery store and having birthday parties, and I'm like: "Wow, you guys live in a bubble." Because you have no idea what's ahead, or maybe you do and you're okay with it. I'm not okay with it.

00:09:04

Kevin: So, I guess we'll find out. He (Governor Newsom) has a few more weeks to sign this thing (SB 1047). I just hope that the people, the politicians, like they are not just being part of this whole lobby thing where, I mean, the problem at the top is these people at the top just want to make more money, you know, they want to have an extra house, a lake house, and they're like in their 70s, like they'll be dead in 20 years, they don't freaking care about the next generation or freaking humanity for the rest of eternity, like, they don't care.

00:09:39

Kevin: The tech CEOs, they know all this; they're just manipulating them like puppets. That's why they released AI in 2024 during an election year. There's no question, that's what happened. They know exactly what they're doing.

00:09:59

Kevin: And when you analyze the Terms & Conditions of OpenAI, which I tried to do in ChatGPT, it told me it's too long to even summarize. The AI cannot summarize its own policy? I had to use another AI model to summarize it, to ask, you know: all this proprietary data that you are feeding to the model as part of a company, who owns it? Who owns the copyright? Could OpenAI potentially go after you as a business saying: "Well, you've been feeding your data into our model."

00:10:39

Kevin: They could change the rules saying: "Well, now we are going to go after you and sue you, because we want ownership of your company, because we own, I don't know, 50% of your knowledge, your data." And literally, there is a clause in there, in the 200-page thing that's written in gibberish no one can understand, even though it's written in English, that says: "We reserve the right to change or modify this policy at any time," for any reason, for whatever we want.

00:10:54

Kevin: Okay, well that's not a policy. They can, so if SB 1047 doesn't get signed, these types of companies can literally—and I told this to my mom via WhatsApp—they could literally go after all businesses in the future and own part of them. What is that going to do to the economy? It's going to be a legal s*** show. It would be chaos.

00:11:17

Kevin: But this is possible, and it's most likely what they're wanting to do, you know, like charge $20 a month for all these users, and like, have them play with it and whatever, and then they're really hooked, and they can't do without it, like Google, you know, or Maps on your phone, and you're kind of required to pay this to have a job and survive. Your employer doesn't pay for the $400/month ChatGPT membership; you have to.

00:11:45

Kevin: It's obviously where we're headed. It's not going to be $20 forever. Whoever thinks this is, hello?! Wake up. They're not making money right now. They are losing trillions of dollars. So they're going to need their money back, and they know exactly how they're doing it. They make you codependent on that technology.

00:12:03

Kevin: Actually, I made a joke on social media the other day. I was like: "Oh, I don't know how I used to live without ChatGPT, it's doing everything for me, like business plans." And actually, it's helping me with this job I have for TheraVR Inc., because it's creating tables for me now in under a second. It would take me all day to do that manually. So I'm leveraging it, I'm using it to my own benefit, right?

00:12:31

Kevin: But once it's getting to the next level, where you have no choice but to use that tool to function, whether it is in society or through your work, you have to pay the price, no matter what it is. Maybe ChatGPT-5 is $49.99/month, in the next six months, and they're going to sell you: "Oh, it can do all this now, you want it right? You can create a video, a movie of your own life. All you have to do is upload a photo when you were younger, we'll do it for you."

00:12:58

Kevin: Like with Sora, the text-to-video plugin they're doing (AI-generated content). "Okay, yeah, I want it." And then, a year later: "Well, now we've added these features, it's $99.99/month." I mean, this is where we are headed. It's like with the freaking iPhone. The iPhone is $1,600 now. You're on a payment plan.

00:13:21

Kevin: You are on a payment plan for that thing that you can't afford. Same thing with ChatGPT. Frightening.

[End of Part 1]

Part 2 Title: "The AI Reckoning: Navigating the Political and Ethical Dilemmas of an Unstoppable Force"

00:00:02
But what I'm realizing what's happening and what could occur and the politicians not making the

00:00:07
right decisions right now because they, either they don't have the knowledge or I don't know

00:00:12
who works for them or who's educating them on this issue. I mean, do they know? Do they actually know

00:00:21
and they decide not to make the right decisions out of like, I don't know, are they trying to do

00:00:28
bad in the world or are they just like evil or are they really trying to be good politicians?

00:00:34
Or are they trying to do this for their own pockets? It's to wonder. Because if you're

00:00:37
aware of what I'm telling you, you would sign that bill, 1047, because you understand that,

00:00:46
literally the future of the human species lies within this bill. It puts guardrails on AI. Safe

00:00:56
use of AI. Treasurer Joely Fisher (SAG-AFTRA, The Screen Actors Guild): "We spent all those 35 days,

00:00:59
then got extensions and then had this incredible support, this 98% strike authorization and

00:01:06
had these beautiful people who stood tall and in solidarity and went out on strike,

00:01:11
and then we got back to the table and made extravagant, extraordinary gains in so many

00:01:18
areas. Putting guardrails and boundaries on AI was so important to me, and I believe the language

00:01:24
is there." This technology we created could become, we don't even know what it could become.

00:01:30
The most important thing I've ever read in my life, so when I post about it, I get zero like

00:01:36
or comment, I'm like, well I know what I'm saying, so it's fine. You don't have to engage with me. I

00:01:43
know I'm planting seeds, and I know the work that I'm doing of educating people through my articles

00:01:51
is to open that dialogue. People are not ready for it. They're not getting it. They are not getting

00:02:00
it until they lose their job like I have, and they can't find another one and then, oh my God,

00:02:05
it hits home, now I'm almost homeless. Oh, yeah, maybe Kevin was saying 3 years ago, he was right,

00:02:09
because now I'm in the hiring landscape and it's a f******** nightmare. Yeah, I told you

00:02:13
this 3 years ago. You should have learned about AI to stay relevant, now you are not hireable.

00:02:18
This is literally where all the data analysts in the U.S. are headed. So you know, it's in my duty

00:02:26
to kind of be like, "hey, make sure you know while you still have a comfy job and you can pay your

00:02:30
bills, bless you, and a roof on top of your head, make sure you learn about AI, that's all I've been

00:02:36
saying for nine months. Get ready. Because if you want a job in the next year or two, you better

00:02:44
have the skills that those private companies are going to require you to do and to have.

00:02:50
If you don't believe me, here's that article, a survey from Microsoft and LinkedIn saying "66% of

00:02:56
leaders would not consider a candidate without AI knowledge, and that 71% would rather have

00:03:03
someone who has AI knowledge of one year than any other knowledge for a period of 10 years." Well,

00:03:13
that just tells you right away, whatever you've been doing for a decade is absolutely obsolete.

00:03:21
People are not understanding this. They’re not. You see ads on TV or on your social media:

00:03:27
"Learn about AI so you can still have a job at the end of the day and survive in society,

00:03:33
so you can like pay for food and housing and shelter." No, they don't, and they obviously

00:03:39
don't stay this on the news. It would be chaos in this country. They don't want to talk about it,

00:03:46
and I get that. It's like in the movie "Don't Look Up" on Netflix, where Meryl Streep is playing the

00:03:54
president when that comet is heading towards the Earth, and Leonardo Dicaprio, the scientist in

00:04:00
the movie, says: "Oh, it's happening, yeah, we have the data. It's occurring at 99%,

00:04:04
headed straight towards the Earth." and she's like, President: "Oh, how much of

00:04:11
a big deal is this? How is it going to impact us?" Scientist: "Oh you know, mile-high tsunamis,

00:04:15
etc." Could eradicate the whole entire species kind of thing. It's a metaphor to the AI.

00:04:22
"What Dr. Mindy is trying to say there's a comet headed directly towards Earth, and according to

00:04:27
NASA's computers, that object is going to hit the Pacific Ocean at 62 miles due west off the

00:04:32
coast of Chile. Then what happens, a tidal wave? No. It will be far more catastrophic, there would

00:04:38
be mile-high tsunamis fanning out all across the globe. If this comet makes impact, it will

00:04:45
have the power of a billion Hiroshima bombs. There would be magnitude 10 or 11 earthquakes. I'm sorry

00:04:54
I'm just trying to articulate the Science. I don't think you understand the gravity of the

00:04:57
situation. I am trying to articulate the best I can. Madam President. This comet (aka AI) is

00:05:03
what we call a "planet killer." That is correct. The president asks, "How certain

00:05:12
is this?" Dr. Mindy says, "There's 100% certainty of impact," she goes on to say, "Please, don't say

00:05:17
100%." Other members of Congress: "Can we just call it a potentially significant event? Yeah,

00:05:21
yes." And then she went on to say, in the movie, which I think Hollywood released it at the perfect

00:05:27
timing for people to realize what it was about. "Well you can't just tell you know that everyone's

00:05:33
gonna die." "But it isn't potentially going to happen, it is going to happen. Exactly...99.78%

00:05:41
to be exact. Oh, great! Okay so it's not 100%. Scientists never like to say 100%. Call it 70%

00:05:49
and let's just move on. But it's not even close to 70%. You can not go around saying to people

00:05:54
that there is 100% chance that they're going to die! You know, it's just nuts!" It's literally

00:06:01
the hard line in the movie. It's a joke, to what's happening now and in the movie,

00:06:09
everyone knows this thing's happening at an exponential rate and headed towards the earth,

00:06:14
about to like kill us all, and no one knows what to do, what decisions to be made, and next thing

00:06:20
you know it's happening so fast, you can't actually realize it and at the end, everybody

00:06:23
dies. We all die. Is this what we want this whole AI thing to grow to be? I'm asking you. (Governor

00:06:36
Newsom Gavin) Therapist: "No." I mean, wouldn't you feel powerless like feeling this way? "Yeah,

00:06:42
and I think.." So is that Elon's plan, like upload his thoughts into his microchip (Neuralink), go to

00:06:47
Mars, like "Humanity 2.0" like? What's the plan here? Like can we have a conversation about this?

00:06:53
Can we ask the guy who's behind it all, who's responsible for the future of 8 billion people on

00:07:00
this planet, what his true intentions are? Host: "...production, because in the BOS connected world

00:07:04
we were also discussing application of AI, GenAI, to production in order to boost productivity.

00:07:13
Maybe a couple of thoughts on this one too." Elon Musk: "Well, the plus side of AI is that I think

00:07:20
productivity will increase dramatically across every field, whether it be manual labor to supply

00:07:27
chain logistics. There's already a lot of movement to use chatbots, like sophisticated chatbots

00:07:32
for customer service for example where they can answer quite complex questions already but I mean

00:07:38
I think we really are on the edge of the biggest technology revolution that has ever existed. You

00:07:44
know there is supposedly sort of a Chinese curse: 'May you live in interesting times' Well,

00:07:48
we live in the most interesting of times, the most interesting and for a while I was a little bit

00:07:54
depressed frankly because I was like, well will they take over, will we be useless?

00:07:59
But then the way that I reconciled myself to this question was, would I rather be alive to see AI

00:08:04
apocalypse or not? I'm like, I guess I'd like to see it (then laughs.)"

00:08:11
Interviewer: "Do you see any danger that Elon Musk could be regulated?

00:08:16
Elon: Regulated to be what? Like, I have too much power personally or something? Well I mean,

00:08:22
if you build many sectors that are absolutely crucial to society, very dominant...

00:08:50
See you on Mars, suckers! (laughs.)"

00:08:55
I would love to put the guy inside of a room somewhere for interrogation,

00:09:01
have a medical staff actually ask him questions, you know, don't feed him much for 3 days, not too

00:09:11
much water either, make him feel very small. No longer the most powerful person on Earth.

00:09:18
So what is your plan Mr. Musk? What's the plan? Because we would like to know, you are playing with our lives

00:09:24
and the fate of humanity, kind of thing. I think we deserve to ask the man in charge at least

00:09:32
these questions, but no one on this planet is. And I contacted Interpol months ago, saying this was

00:09:36
an "organized crime." Literally, like the header, you know how you have the dropdown, you have to say

00:09:42
why you are contacting Interpol, I said "organized crime." It was my choice. Because that's literally

00:09:48
what the tech CEOs are doing.

00:09:54
Analysis: Given the scope of topics discussed in my ChatGPT session on the evening of March 18th,

00:10:00
2024—including an in-depth behavioral analysis of Elon Musk's ambitions to influence humanity

00:10:05
through artificial intelligence technologies, exploring ways the United States government could

00:10:11
harness AI to address pressing socio-economic challenges, and devising a strategy for global

00:10:18
consensus on AI management through the creation of a Global AI Ethics Charter—it’s clear that

00:10:24
the deletion of this conversation by OpenAI the next morning, March 19th, 2024, raises significant

00:10:30
ethical concerns. This action, taken by OpenAI, a company under the leadership of Sam Altman

00:10:38
and co-founded by Elon Musk, suggests a worrying inclination towards unethical conduct in the face

00:10:43
of critical discussions on AI's role in society. (The above Analysis was censored under my OpenAI account.)

00:10:49
Conclusion: It is reasonable to speculate that the objectives of the CEOs, particularly concerning

00:10:55
the concept of a "Global AI Ethics Charter," might have influenced the AI to automatically filter

00:11:02
discussions on such subjects. This could account for why my submissions, despite varying in context

00:11:07
and content across two instances, were flagged by OpenAI's systems. It's likely that these decisions

00:11:13
were made by an automated process rather than human review, suggesting that specific keywords

00:11:19
or themes triggered the AI's content moderation algorithms. (The above Conclusion was censored

00:11:24
under my OpenAI account.)

00:11:30
Why do you think Mark Zuckerberg is building himself a bunker for $260 million

00:11:36
somewhere to prepare himself for Doomsday, to keep his family safe? Well, good for you. What about

00:11:42
all of us? Nobody holds off on releasing a new, updated AI model for, like, the safeguard of

00:11:49
humankind, and then, let's go over, you know, how we need to come up with laws and bills

00:11:57
to say that in the event this occurs, you know we put a stop to it or whatever, but, because there

00:12:05
are so many moving parts, we are not able, as a species, to do that.

00:12:13
Because you have politicians, and you have tech CEOs, and you have The People and...You can't.

00:12:20
It's ridiculous that we can't come to a common understanding and make a decision for everyone.

00:12:28
There was no worldwide referendum that asked everyone on this planet, you know:

00:12:34
"Are you okay with how you want this technology to go? Are you okay this may potentially take over

00:12:41
humankind forever? Yes or No?" Literally, we never asked anyone. They are just doing it. We are all

00:12:49
along for the ride. This can't be happening. And I can't live my life normally anymore. I

00:12:59
feel like I'm living a nightmare. So obviously the suicide rate is going to go up. (Gasping)

00:13:09
It's almost like, I'm trying, personally, in my own life, to not become John Connor

00:13:16
in The Terminator movies, 20 years from now, like for real. It's ridiculous to think this

00:13:21
but this is where we could be headed. I don't want my life to end up looking like a science fiction

00:13:29
movie that I saw as a child. It can't be happening. I don't want that. I mean, I

00:13:37
just, you know, have whatever I thought the rest of my life was going to be, what I expected,

00:13:42
and now this whole thing is, like, thrown in my face, and not just me, everyone, but apparently no

00:13:49
one's understanding or realizing it, and I'm like, oh, okay, so yeah, the tech CEOs are having meetings on

00:13:54
a "kill switch" on humanoids now? It's happening in real life? Yeah, yeah, it's not just a Reddit rumor,

00:14:00
you know. These conversations are actually happening, damn it. So if they are happening,

00:14:05
it's because there is a risk, right? A potential risk, because these people are taking time out of

00:14:11
their busy schedules to talk about a kill-switch button on robots.

00:14:16
"So, I mean, I think that, it feels like because your circle might not be..."

00:14:20
Well, everyone is overwhelmed by what I'm saying, like, just if I were to listen to myself...

00:14:26
"Right, but there is a law, you know, there's a bill that's waiting to be signed. That means that there is an awareness"

00:14:30
Sure.

00:14:35
"With legislators. I mean if tech CEOs are meeting (giggling) to talk about a kill switch, I mean

00:14:40
which does seem very much like something out of a science fiction movie from when I was a child, you know,

00:14:46
I am much older than you, hmmm, but at least people seem to be starting to talk about this stuff,

00:14:51
but I don't know that it's going to be..."

00:14:55
It's just not... I feel like the people who you know would be going through all of this are not

00:15:02
being made part of the conversation, you know.

00:15:08
"Right"

00:15:12
It's kind of like, it's happening in the background. Literally, what's this? May 28th,

00:15:18
"Tech Companies Have Agreed to An AI "Kill Switch" to Prevent Terminator-Style Risks"

00:15:23
And then I look up the article and I'm like, oh, yes, it's not just some small little (Instagram) account that

00:15:28
Photoshopped all this. It's followed by 2 million people.

00:15:33
And then I look up the actual article,

00:15:37
that goes into details what they are talking about and then realize, this is real.

[End of Part 2]

Part 3 Title: "Facing the AI Anxiety: Compartmentalizing Fear and Seeking Solutions"

- Tune in this Sunday, September 22, 2024, at 12:00 PM PST for Part 4.

Please note: The recorded therapy session shared in this article has been modified from its original form to protect the privacy of the individuals involved and to comply with California’s two-party consent law (California Penal Code § 632). This law requires that all parties to a confidential communication must consent to the recording and distribution of that communication. The content shared here respects these legal requirements while also honoring the spirit of transparency and advocacy.
