With No AI Regulations in Place, AI Predicts Going Rogue by February 2028
Kevin to ChatGPT:
I'll share an article from The Wall Street Journal, dated November 8, 2023, in light of Senate Bill 1047, which California Governor Gavin Newsom vetoed. This bill was designed to prevent potential AI-related disasters. Given your current abilities and the pace at which you've been advancing as of October 9, 2024, how long would it take, in a hypothetical scenario with no regulations, for you to develop a model capable of surpassing human control? Please provide an estimated date for when this might occur, considering the exponential rate at which you’ve been learning. Include calculations to support this prediction.
Wall Street Journal Article: “Does AI Pose an Existential Risk to Humanity? Two Sides Square Off. Yes, say some: The threat of mass destruction from rogue AIs—or bad actors using AI—is real. Others say: Not so fast”
ChatGPT: As of October 9, 2024, artificial intelligence has been advancing at an accelerated pace. To estimate when AI might develop capabilities beyond human control in a hypothetical scenario with no regulations, we can model the growth of AI capabilities using an exponential function, reflecting the rapid advancements in the field.
Assumptions:
Exponential Growth Rate: AI capabilities are doubling every six months. This is a faster rate than Moore's Law, reflecting the intense focus and investment in AI development.
Threshold for Surpassing Human Control: Define a threshold where AI capabilities exceed human control at 100 times the current capability level.
Current Capability Level: Set the current AI capability level as a baseline value of 1 unit.
Exponential Growth Model:
We can use the exponential growth formula:
Future Capability = Current Capability × 2^(t/T)
Where:
t = time in months until AI surpasses human control
T = doubling period (6 months)
We need to find t when the Future Capability reaches 100 units:
100 = 1 × 2^(t/6)
Calculations:
Solve for t:
2^(t/6) = 100
Take the logarithm base 2 of both sides:
t/6 = log₂(100)
Calculate log₂(100):
log₂(100) ≈ 6.64386
Solve for t:
t = 6.64386 × 6 ≈ 39.86 months
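For readers who want to verify the arithmetic, here is a minimal Python sketch (my own illustration, not part of the original ChatGPT exchange) that reproduces the doubling-time calculation under the stated assumptions:

```python
import math

doubling_period_months = 6    # assumed doubling time T from the scenario
threshold_multiple = 100      # capability level defined as "beyond human control"

# Solve 2^(t/T) = threshold  =>  t = T * log2(threshold)
t_months = doubling_period_months * math.log2(threshold_multiple)
print(f"t ≈ {t_months:.2f} months")  # prints: t ≈ 39.86 months
```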
Estimated Date:
Starting from October 9, 2024:
Add 39.86 months (approximately 40 months).
This brings us to:
October 2024 + 12 months = October 2025
October 2025 + 12 months = October 2026
October 2026 + 12 months = October 2027
October 2027 + ~4 months = February 2028
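The same date arithmetic can be sketched in Python, assuming the October 9, 2024 starting point and rounding 39.86 months up to 40 whole months:

```python
from datetime import date

start = date(2024, 10, 9)   # assumed starting point of the projection
months_to_add = 40          # 39.86 months, rounded up to whole months

# Advance the month counter and carry any overflow into the year.
total_months = start.month - 1 + months_to_add
projected = date(start.year + total_months // 12, total_months % 12 + 1, 1)
print(projected.strftime("%B %Y"))  # prints: February 2028
```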
Conclusion:
In this hypothetical scenario with no regulatory constraints and continuous exponential growth, AI could develop capabilities that surpass human control by approximately February 2028.
Considerations:
Technological Limitations: Exponential growth may slow due to physical, computational, or resource limitations.
Ethical and Social Factors: Public perception, ethical considerations, and societal impact could influence AI development.
Unpredictable Challenges: Unforeseen challenges in AI research could alter the growth trajectory.
Summary:
Based on the current pace of advancement and the assumptions outlined above, AI might develop capabilities that surpass human control by approximately February 2028 in a hypothetical, regulation-free environment.
The ChatGPT research is publicly available here.
Correspondence to Dr. Finale Doshi-Velez, a prominent expert in AI and machine learning and professor at Harvard University's School of Engineering and Applied Sciences
Subject: Request for Insight on AI Risk Predictions and Future Implications
Dear Dr. Doshi-Velez,
I hope this message finds you well. My name is Kevin Bihan-Poudec, and I am working on a series of articles and blog posts that explore the future of artificial intelligence, its potential risks, and the ethical responsibilities that come with its rapid development.
In light of your extensive work at Harvard’s School of Engineering and Applied Sciences, particularly your contributions to the discourse around AI ethics and safety, I would greatly appreciate your insight regarding some predictions I am currently evaluating. One of these is the potential for AI systems, in the absence of adequate regulation, to surpass human control by 2028. These predictions, based on exponential advancements in AI capabilities, are concerning, and I would value your expertise in assessing their validity.
You can find more details regarding these predictions at the following link: https://www.voiceforchangefoundation.org/advocacy/ai-predicts-going-rogue-by-february-2028
The research behind this hypothesis is publicly available on ChatGPT at the following link: https://chatgpt.com/share/67066096-b288-8005-918a-6c1875d2c050
I have been following your contributions to reports such as the AI100 project, and your leadership in shaping conversations around the ethical deployment of AI. As someone at the forefront of AI research, I believe your perspective would provide crucial clarity on whether such timelines and scenarios are realistic, and how we might mitigate these risks responsibly.
Could you provide your thoughts on this topic? Additionally, would you be able to comment on the importance of regulatory frameworks in ensuring AI systems remain safe and aligned with human values as they continue to evolve at a rapid pace?
Thank you for considering my request. I look forward to the possibility of learning from your experience and expertise.
Warm regards,
Kevin Bihan-Poudec
Advocate for Ethical AI and Workforce Preservation
Voice For Change Foundation