The Last Safe Jobs in a World of AI Dominance?
PLUMBING.
Almost everybody I know who is an expert in AI believes that they will exceed human intelligence; it’s just a question of when. In between 5 and 20 years from now, there is a probability of about a half that the world will have to confront the problem of them trying to take over.
-Geoffrey Hinton, "Godfather of AI"
Rethinking Career Paths in the Age of AI – Could Plumbing Be the Last Safe Job?
As a 36-year-old data analyst with a decade of experience in white-collar roles, I’ve grown accustomed to the fast pace and high expectations of the tech-driven workplace. Until recently, my career felt secure, even prosperous. But with the rise of AI, tools like ChatGPT can now perform complex data analysis and generate insights at a fraction of the cost. This shift has made me question the longevity of my profession. A six-figure salary that once felt assured is now undercut by AI programs, available for as little as $20 a month, capable of delivering data insights far faster than I ever could.
This realization has me grappling with a stark new reality: the career I’ve built may soon be obsolete. With about 30 more years before retirement, I’m reevaluating what career path could offer the stability I need to support myself and my future family. Enter Geoffrey Hinton, a Nobel Prize-winning AI expert, who recently suggested plumbing—a job largely safe from automation—as a secure choice in the coming years.
Key Concerns from Hinton’s Interview
Hinton’s insights extend beyond job security. His main concerns with AI include:
Existential Threat: Hinton warns that AI could surpass human intelligence within a couple of decades, possibly making decisions and taking actions outside of human control.
Military Applications of AI: Governments are increasingly incorporating AI into military uses, potentially developing autonomous systems that could target and attack without human intervention.
Socio-Economic Impacts: AI may soon render countless jobs obsolete, disproportionately affecting mid-level intellectual professions and exacerbating wealth inequality.
Need for Universal Basic Income (UBI): Hinton suggests that UBI could help alleviate financial hardship for displaced workers but also recognizes it may not address the loss of purpose and self-worth that work provides.
The "Godfather of AI" Speaks Out: Plumbing as a Safe Job?
In the interview posted on BBC Newsnight’s YouTube channel, Geoffrey Hinton, widely known as the “Godfather of AI,” shared insights into the potential trajectory of AI and automation. Hinton, formerly of Google, is one of the minds behind the AI revolution and has recently voiced serious concerns about the trajectory of this technology. He warns that AI systems could outsmart humanity within 5 to 20 years, posing a significant risk if they decide to "take control."
But amid the complex issues he raised, his answer to one simple question stood out: “What jobs will remain safe from automation?” Without hesitation, Hinton replied, “Plumbing.” Plumbing, he explained, is one of the few fields that demands a high level of manual dexterity, problem-solving, and hands-on expertise—skills that AI and robotics may struggle to replicate in the near future.
For decades, plumbing has been considered a hard, manual job, often underappreciated. Yet, in this era of AI, it may offer the job security and longevity that traditional office roles no longer can. So, if you’re a parent considering what career will offer your child stability in a future marked by automation, Hinton’s advice may seem unusual but increasingly relevant.
Why Plumbing Might Be a Career for the Future
If Hinton’s prediction rings true, transitioning to a trade like plumbing could offer a sustainable career option. Here’s a basic outline for those interested in the field:
1. Start with an Apprenticeship: Most plumbing careers begin with an apprenticeship, where you’ll gain practical experience and classroom training.
2. Complete Certification Courses: Classes on plumbing codes, safety standards, and system design are often part of the training process.
3. Obtain a License: Many states require a license to practice plumbing, which involves passing an exam that tests your understanding of regulations.
4. Consider Specializations: Expanding your skill set to include gas fitting or HVAC systems can boost your expertise and employability.
The path to becoming a licensed plumber generally takes 2 to 5 years, depending on your state’s requirements. And while the road may be challenging, the reward is potentially high. On average, plumbers in the U.S. earn between $40,000 and $75,000 annually, though this varies by region. In California, a licensed plumber can earn up to $90,000, whereas in less populated states, earnings average around $50,000.
Current Demand for Plumbing Jobs
The plumbing industry is steady and essential, with strong job demand driven by the ongoing need for construction and maintenance. The U.S. Bureau of Labor Statistics projects around 3% growth in plumbing jobs over the next decade, indicating that plumbing remains a viable career path. And unlike many white-collar roles, the work itself is less vulnerable to automation due to its reliance on manual dexterity and spatial awareness.
Reflecting on the Future of Work
Hinton’s warnings are a sobering reminder of the rapid, uncertain shifts technology brings to the workforce. His recent interview, which aired on “BBC Newsnight,” has garnered more than 339,000 views worldwide—not a major reach, yet enough to spark important conversations about the future of work. His message might not yet be mainstream, but his advice to seek secure, hands-on work like plumbing may be the most valuable takeaway for those looking to future-proof their careers.
For those of us considering what lies ahead, taking action today could mean the difference between a stable future and an uncertain one. As I reflect on my own career, I find Hinton’s insights resonating deeply. In the race against technology, thinking ahead—perhaps even considering a trade like plumbing—could be one of the few moves that secure a livelihood for years to come.
Final Thoughts
While plumbing may seem like an unusual suggestion for the future, Hinton’s reasoning is clear. As AI becomes increasingly capable, hands-on trades like plumbing could provide one of the few career paths left untouched by automation. For those considering their future career paths or guiding their children’s education, now may be the time to prioritize trades that demand the one thing AI still struggles with: complex physical tasks.
As we navigate these unprecedented changes, keeping an open mind and thinking ahead may be our best defense against an increasingly automated world.
Read the email correspondence with Geoffrey Hinton below, as well as the full transcript of the 10-minute interview, or watch the video in English above.
Correspondence with Geoffrey Hinton, seeking mentorship in my fight to regulate AI:
Subject: Collaboration Opportunity on Ethical AI Regulation and Workforce Preservation
Dear Dr. Hinton,
I hope this message finds you well. My name is Kevin Bihan-Poudec, and I am the founder of Voice For Change, a nonprofit organization dedicated to advocating for ethical AI and workforce preservation. With over a decade of experience as a data analyst in the U.S., originally from France, I’ve recently found myself facing the challenging reality of reentering the tech industry. After being laid off from my role as Director of Business Intelligence last November, I’ve encountered a rapidly changing job market, where roles like mine are increasingly replaced by AI-driven solutions.
Your recent interview with the BBC resonated deeply with me. The concerns you raised around AI’s existential threat to humanity, the future of the workforce, and the potentially limited scope for white-collar professions echoed my own fears. As someone who has felt the impact of these transformations firsthand, your insights underscored the urgency of advocating for responsible AI regulation, not only to protect jobs but also to safeguard the values and dignity that define our humanity.
Since last year, I’ve dedicated my efforts to researching AI’s implications and have been actively working to communicate the need for regulation to government officials. I believe that responsible, proactive regulation is essential to ensuring that AI technologies benefit society as a whole and do not harm the workforce or human rights. With your background, insight, and influence, I am hopeful that together, we could approach AI regulation with the necessary urgency and depth.
I am currently developing a federal-level outline for AI regulation in the United States and would be honored if you would consider meeting with me to discuss our shared vision. At 36, I bring the energy, commitment, and determination to advocate for change on a larger scale, yet I am conscious that impactful progress will require guidance and mentorship. Your voice and experience would lend invaluable strength to this cause.
You can learn more about my background and recent work on my nonprofit’s website: Voice For Change Foundation, The Vision Behind the Founder Bio. I would be truly grateful for the opportunity to align our visions and explore next steps to ensure AI development aligns with ethical standards and benefits the global workforce.
Thank you for your time and for your courage in raising awareness about these critical issues. I look forward to the possibility of working together toward a safer, more equitable future.
Sincerely,
Kevin Bihan-Poudec
Founder, Voice For Change Foundation
Geoffrey Hinton’s reply:
“I regret that I am too busy
Geoff”
My follow-up response:
I appreciate your prompt response. Please keep me posted if your schedule frees up.
Best,
Kevin
Transcript of the BBC interview:
Geoffrey Hinton: Almost everybody I know who is an expert in AI believes that they will exceed human intelligence. It’s just a question of when. In between 5 and 20 years from now, there is a probability of about a half that the world will have to confront the problem of them trying to take over.
Interviewer: I began by asking Geoffrey Hinton whether he thought the world is getting to grips with this issue or if he's as concerned as ever.
Geoffrey Hinton: I'm still as concerned as I have been, but I'm very pleased that the world’s beginning to take it seriously. In particular, they're beginning to take the existential threats seriously — that these things will get smarter than us, and we have to worry about whether they'll want to take control away from us. That's something we should think seriously about, and people now take that seriously. A few years ago, they thought it was just science fiction.
Interviewer: And from your perspective, having worked at the top of this, having developed some of the theories underpinning all of this explosion in AI that we're seeing, that existential threat is real?
Geoffrey Hinton: Yes. Some people think these things don't really understand. They're very different from us. They're just using some statistical tricks. That's not the case. These big language models, for example, the early ones, were developed as a theory of how the brain understands language. They're the best theory we've currently got of how the brain understands language. We don't understand either how they work or how the brain works in detail, but we think they probably work in fairly similar ways.
Interviewer: What is it that's triggered your concern?
Geoffrey Hinton: It's been a combination of two things. So, playing with the large chatbots, particularly one at Google before GPT-4, but also with GPT-4. They're clearly very competent. They clearly understand a lot. They have a lot more knowledge than any person. They're like a not-very-good expert at more or less everything. So that was one worry.
The second was coming to understand the way in which they're a superior form of intelligence, because you can make many copies of the same neural network. Each copy can look at a different bit of data, and then they can all share what they learned. So, imagine if we had 10,000 people who could all go off and do a degree in something. They could share what they learned efficiently, and then we'd all have 10,000 degrees. We'd know a lot then. We can't share knowledge nearly as efficiently as different copies of the same neural network can.
Interviewer: Okay, so the key concern here is that it could exceed human intelligence, indeed the mass of human intelligence?
Geoffrey Hinton: Very few of the experts are in doubt about that. Almost everybody I know who's an expert on AI believes that they will exceed human intelligence. It's just a question of when.
Interviewer: And at that point, it's really quite difficult to control them?
Geoffrey Hinton: Well, we don't know. We've never dealt with something like this before. There's a few experts, like my friend Yann LeCun, who think it'll be no problem. We'll give them the goals, and they'll do what we say. They'll be subservient to us. There are other experts who think absolutely they'll take control. Given this big spectrum of opinions, I think it's wise to be cautious. I think there's a chance they'll take control, and it's a significant chance. It's not like 1%. It's much more.
Interviewer: Could they not be contained in certain areas, like scientific research, but not, for example, the armed forces?
Geoffrey Hinton: Maybe, but actually, if you look at all the current legislation, including the European legislation, there's a little clause in all of it that says none of this applies to military applications. Governments aren't willing to restrict their own uses of it for defense.
Interviewer: There's been some evidence, even in current conflicts, of the use of AI in generating thousands and thousands of targets. That's happened since you started warning about AI. Is that the sort of pathway that you're concerned about?
Geoffrey Hinton: I mean, that's the thin end of the wedge. What I'm most concerned about is when these things can autonomously make the decision to kill people — so, robot soldiers. Maybe we can get Geneva’s Convention to regulate that, but I don't think this is going to happen until after nasty things happen.
Interviewer: And there's an analogy here with the Manhattan Project and Oppenheimer, which is if we restrain ourselves from military use in the G7 advanced democracies, what’s going on in China, what’s going on in Russia?
Geoffrey Hinton: Yes, it has to be an international agreement. But if you look at chemical weapons, the international agreement for chemical weapons has worked quite well.
Interviewer: Do you have any sense of whether the shackles are off in a place like Russia?
Geoffrey Hinton: Well, Putin said some years ago that whoever controls AI controls the world. So, I imagine they're working very hard. Fortunately, the West is probably well ahead of them in research. We're probably still slightly ahead of China, but China’s putting more resources in. So, in terms of military uses of AI, I think there's going to be a race.
Interviewer: Sounds very theoretical, but this argument, if you follow it, you really are quite worried about extinction-level events?
Geoffrey Hinton: We should distinguish these different risks. The risk of using AI for autonomous lethal weapons doesn’t depend on AI being smarter than us. That's a quite separate risk from the risk that the AI itself will go rogue and try to take over. I'm worried about both things. Autonomous weapons are clearly going to come, but whether AI goes rogue and tries to take over is something we may be able to control — or we may not. We don't know. So, at this point, before it’s more intelligent than us, we should be putting huge resources into seeing whether we are going to be able to control it.
Interviewer: What sort of society do you see evolving? Which jobs will still be here?
Geoffrey Hinton: Yes, I'm very worried about AI taking over lots of mundane jobs. That should be a good thing. It’s going to lead to a big increase in productivity, which leads to a big increase in wealth. If that wealth was equally distributed, that would be great. But it’s not going to be. In the systems we live in, that wealth is going to go to the rich and not to the people whose jobs get lost. That’s going to be very bad for society, I believe. It’s going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected.
Interviewer: So to be clear, you think that the societal impacts from the changes in jobs could be so profound that we may need to rethink the politics of the benefit system, inequality, and universal basic income?
Geoffrey Hinton: Yes, I certainly believe in universal basic income. I don't think that's enough, though, because a lot of people get their self-respect from the job they do. And if you put everybody on universal basic income, that solves the problem of them starving and not being able to pay the rent. But it doesn't solve the self-respect problem.
Interviewer: So, what, the government needs to get involved? I mean, it's not how we do things in Britain, you know, we tend to stand back and let the economy decide the winners and losers.
Geoffrey Hinton: Yes, actually, I was consulted by people in Downing Street, and I advised them that universal basic income was a good idea.
Interviewer: And this is, I mean, you said 10 to 20% risk of them taking over?
Geoffrey Hinton: Yes.
Interviewer: Are you more certain that this is going to have to be addressed in the next five years, the next parliament, perhaps?
Geoffrey Hinton: My guess is in between five and 20 years from now, there's a probability of about half that we'll have to confront the problem of them trying to take over.
Interviewer: Are you particularly impressed by the efforts of governments so far to try and rein this in?
Geoffrey Hinton: I'm impressed by the fact that they're beginning to take it seriously. I'm unimpressed by the fact that none of them are willing to regulate military uses. And I'm unimpressed by the fact that most of the regulations have no teeth.
Interviewer: Do you think that the tech companies are letting down their guard on safety because they need to be the winner in this race for AI?
Geoffrey Hinton: I don't know about the tech companies in general. I know quite a lot about Google because I used to work there. Google was very concerned about these issues. Google didn't release the big chatbots because it was concerned about its reputation if they told lies. But as soon as OpenAI went into business with Microsoft and Microsoft put chatbots into Bing, Google had no choice. So, I think the competition is going to cause these things to be developed rapidly. And the competition means that they won't put enough effort into safety.
Interviewer: People, parents, talk to their children, give them advice on the future of the economy, what jobs they should do, what degrees they should pursue. It seems like the world’s being thrown up in the air by the world that you're describing. What would you advise somebody to study now to "surf this wave"?
Geoffrey Hinton: I don't know because it's clear that a lot of mid-intellectual jobs are going to disappear. If you ask which jobs are safe, my best bet about a job that's safe is plumbing because these things are not yet really good at physical manipulation. That will probably be the last thing they get very good at, so I think plumbing is safe for quite a long time.
Interviewer: Driving?
Geoffrey Hinton: Driving? No, that's hopeless. I mean, that's been slower than expected.
Interviewer: Journalism?
Geoffrey Hinton: Journalism might last a little bit longer, but I think these systems are going to be pretty good journalists quite soon. Probably good interviewers too.
Interviewer: Okay, well...