OpenAI: Leader of AI Tools Like ChatGPT Accelerates Profit, but at What Cost?


In recent years, OpenAI has emerged as a dominant force in artificial intelligence, driven by the groundbreaking success of its language model, ChatGPT. The company has now introduced its latest version, OpenAI o1-preview, released in September 2024. This tool represents a significant leap forward from earlier iterations, combining conversational AI with new capabilities in reasoning, problem-solving, and coding. To put this into perspective, OpenAI's models have evolved at an exponential rate since GPT-3's release in 2020, going from basic text generation to advanced functions such as assisting with complex mathematics, coding tasks, and even mimicking human-like reasoning.

Are We Approaching AI's Tipping Point? The Unchecked Acceleration and Its Potential Consequences

Exponential Growth: The Evolution of ChatGPT

The chart above represents the acceleration of ChatGPT's capabilities over time, showing the advancements from GPT-2 in 2019 to the latest OpenAI o1-preview in 2024. Each version demonstrates significant growth in AI capabilities, highlighting the rapid development of OpenAI's language models.

To appreciate how far ChatGPT has come, let’s compare its evolution over time:

  • GPT-2 (2019): Capable of generating coherent text from a prompt but still limited in handling complex conversations or tasks.

  • GPT-3 (June 2020): Marked a significant leap in language modeling, with 175 billion parameters, allowing it to handle more nuanced text generation, translate languages, and generate creative writing.

  • ChatGPT (November 2022): The first public version, based on GPT-3.5, introduced a conversational interface that could handle more sophisticated questions, making it useful for customer support, research assistance, and even rudimentary code generation.

  • GPT-4 (March 2023): An even more refined model capable of handling multi-modal inputs (images and text), solving more complex problems, providing deeper analysis, and assisting with more advanced tasks such as generating Python code or writing essays.

  • OpenAI o1-preview (September 2024): A new frontier, o1-preview introduces the ability to reason through complex problems, simulate human-like decision-making, and work with logic-based tasks like solving intricate coding issues or even mathematical proofs. It’s not just a text generator; it can now perform tasks akin to what many knowledge workers do on a daily basis.

With each new version, the rate of progress has accelerated. What ChatGPT was capable of doing at the beginning—basic text generation—has now evolved into handling detailed coding queries, solving high-level math problems, and engaging in logical reasoning. This exponential growth is startling. For example, tasks that required significant human input, such as generating functional code or performing data analysis, can now be done in a fraction of the time, thanks to these advancements.

AI on the Brink: Will ChatGPT’s Capabilities Reach a Point of No Return in the Next Three Years?

The chart above illustrates the projected exponential growth of ChatGPT's capabilities over the next three years, assuming the current acceleration rate continues. Starting with the OpenAI o1-preview capability level in 2024, the model suggests a steep increase, with potential capabilities reaching unprecedented levels by 2027. This highlights the rapid pace of AI advancements and raises important questions about the implications of such growth for the future of work and society.

The Next Decade of AI: Will ChatGPT's Explosive Growth Push Us Beyond Control by 2033?

This chart, on the other hand, projects the exponential growth of ChatGPT's capabilities over the next 10 years, extending through 2033. The model predicts an astonishing increase in AI capabilities, with rapid advancements each year, assuming the same growth rate continues. This projection underscores the transformative potential of AI, as it continues to evolve beyond today's cutting-edge capabilities, raising crucial questions about its impact on society, employment, and human-machine interactions in the future.
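The projections in the two charts above follow a simple compound-growth model. A minimal sketch is below; the starting "capability index" of 100 and the yearly doubling rate are illustrative assumptions chosen for the sketch, not measurements of any real model.

```python
# Illustrative compound-growth projection of an abstract "capability index".
# The base level (100) and the yearly multiplier (2x) are assumptions for
# this sketch, not measured properties of ChatGPT.

def project_capability(base: float, yearly_multiplier: float, years: int) -> list[float]:
    """Return capability levels for each year, starting from the base year."""
    return [base * yearly_multiplier ** y for y in range(years + 1)]

# Three-year horizon (2024 -> 2027), as in the first chart.
three_year = project_capability(base=100, yearly_multiplier=2.0, years=3)
# Ten-year horizon (2024 -> 2033), as in the second chart.
ten_year = project_capability(base=100, yearly_multiplier=2.0, years=9)

print(three_year[-1])  # 800.0
print(ten_year[-1])    # 51200.0
```

Even at a modest doubling rate, the ten-year figure dwarfs the three-year one, which is the essence of the "point of no return" argument: small changes in the assumed growth rate produce enormous differences in the projected outcome.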

The exponential growth of AI, as seen in the chart above, mirrors a warning that Al Gore famously gave in his documentary An Inconvenient Truth, where he illustrated the alarming rate of global warming. Gore emphasized how environmental change, if left unchecked, could reach a tipping point where the damage becomes irreversible.

See above a still from the 2006 documentary An Inconvenient Truth, where Al Gore stepped onto an elevator to illustrate the alarming rise in global temperatures projected over the next 50 years, warning of a point of no return. In the documentary, you can hear people laughing in the background. I experienced this moment firsthand at a 2017 conference in Houston, TX, where I had the opportunity to attend Al Gore’s presentation while helping victims of Hurricane Harvey for three months. I also had the privilege of meeting Al Gore in person backstage.

Shared Visions, Personal Connections

On October 30th, 2017, at a conference in Houston, TX, Kevin Bihan-Poudec had the opportunity to meet former Vice President Al Gore during his mission to combat climate change and reverse carbon emissions, with a focus on preventing the environmental crisis from reaching the point of no return. Seven years later, Kevin, now a data analyst, continues his own journey by leveraging his expertise in artificial intelligence and data analytics to tackle the pressing challenges of automation, job displacement, and the urgent need for workforce reskilling in the face of rapid technological advancements.

Gore's predictions, based on data analytics, highlighted the devastating consequences of carbon emissions—consequences we are now fully aware of, as companies continue to pour billions into fossil fuels. As we witness the exponential growth of AI today, one has to ask: Who’s laughing now?

Today, we face a similar situation with artificial intelligence. Just as we saw with climate change, AI's rapid advancements, if not regulated and carefully managed, could lead us to a point of no return. With each breakthrough, the capabilities of AI accelerate at an unprecedented pace—raising crucial questions: Are we heading toward an irreversible tipping point in AI's growth? What potential impacts could this have on humankind, and are we prepared to manage them before it’s too late? The time to act responsibly is now, before we find ourselves facing unforeseen consequences.

So What Can ChatGPT Do Now?

With OpenAI o1-preview, the model goes beyond simple question-answering. It can:

  • Solve complex math and coding problems, reason like a human, and adapt to new types of queries with higher levels of abstraction.

  • Generate detailed creative outputs, such as technical reports, essays, and even simulated dialogues.

  • Assist in logical reasoning, helping users build business strategies, analyze trends, or simulate real-world scenarios based on data.

  • Integrate with tools to perform tasks like managing spreadsheets, analyzing large datasets, or generating reports autonomously.

This trajectory brings us to an important question: What could ChatGPT achieve in the next six months to a year? Given its rapid evolution, it’s plausible that AI could soon reach the point where it can fully replace various office jobs. But how close are we to this reality?

The Rise of AI Agents: The Next Frontier

Adding to this conversation is Sam Altman’s recent announcement that “AI agents,” the next iteration of AI technology, are coming next year. These agents could execute tasks autonomously, such as opening software applications, performing actions within those apps, and even navigating a web browser or organizing information across platforms. Imagine telling ChatGPT to open a browser, search for research articles, summarize them, and insert the summaries into a Word document, all without lifting a finger. This level of automation is already in development and will soon be available to the masses.

The video above displays the capabilities of an “AI agent” today, which are undoubtedly remarkable—but at what cost when it comes to the impact of such technologies on the workforce?

As a data analyst with a decade of experience, I can attest that ChatGPT can already create bar charts on your behalf and provide data analytics insights; the charts in this very blog post were created in ChatGPT. This makes me wonder every day what the future holds for my field. If my skills are no longer required, will I become an implementer or user of AI plugins that could perform my data analytics tasks much faster, executing 25 tasks in an 8-hour workday instead of 4 to 6? Could my role eventually become obsolete if an AI agent can replicate those skills, replacing me as a human worker? Could this evolution render both data analytics and the implementation of AI tools completely obsolete in the long term? Only the future of AI regulation will tell.
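The throughput comparison above can be made concrete with back-of-the-envelope arithmetic. The per-task durations below are illustrative assumptions picked to match the 4-to-6 versus 25 tasks-per-day figures, not measured timings.

```python
# Back-of-the-envelope throughput comparison between a human analyst and an
# AI-assisted workflow. The per-task durations are illustrative assumptions.

WORKDAY_HOURS = 8

def tasks_per_day(minutes_per_task: float) -> int:
    """How many tasks fit in one workday at a given per-task duration."""
    return int(WORKDAY_HOURS * 60 // minutes_per_task)

human_tasks = tasks_per_day(minutes_per_task=96)  # ~1.6 hours per analysis
ai_tasks = tasks_per_day(minutes_per_task=19)     # ~19 minutes per analysis

print(human_tasks, ai_tasks)  # 5 25
```

A five-fold throughput gap per worker is the kind of number that drives the staffing question raised later in this post: why keep five analysts when one, equipped with AI tooling, can match their combined output?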

The video above displays ChatGPT's current capabilities in replicating tasks typically performed by a data analyst, such as creating charts and analyzing data, which the AI can now do without requiring any specialized skills from the user.

If this technology, dubbed “AI agents,” becomes widely accessible, it could revolutionize the workplace—but potentially not for the better if misused. I raised this concern with the U.S. Congress earlier this year in my open letter titled “Act Now,” as well as in an explanatory video seen below. Both aimed to explain what AI technology is, what it can do, and more specifically, the concept of AI agents and their potential drawbacks for the workforce if overutilized by companies. The video, titled “What is Artificial Intelligence (AI) and AI Agents? And Their Negative Impacts on the Worldwide Workforce,” was shared with several members of Congress back in April of this year, but I have yet to hear back.

I also raised the concern of AI agents in a letter to California’s Governor Gavin Newsom on September 19, 2024, ahead of his decision on Senate Bill 1047, which aimed to regulate AI models costing more than $100 million to train in order to prevent AI disasters. Unfortunately, he eventually vetoed the bill.

So the questions remain: Why would employers maintain large teams of office workers when AI agents could replicate their tasks at a fraction of the cost and with far greater efficiency? The implications for administrative, technical, and even creative jobs are enormous. The potential for automation is unprecedented, but so are the risks.

What Are the Stakes for Workers?

Without regulation, we may face a labor crisis unlike anything seen before. Entire sectors, from customer support to marketing, legal, and even programming, could be drastically reduced as companies seek to maximize profits by replacing human workers with AI. As AI agents become more capable, there is a growing concern that corporations will rely more on this technology, leaving millions of jobs vulnerable to automation.

The question then becomes: Are we prepared for the fallout of such rapid AI advancement? Without clear regulatory frameworks, we risk a future where AI dominates the workplace, creating widespread unemployment, income inequality, and social unrest.

It’s not a distant possibility—it’s a reality we could face within the next few years. OpenAI’s accelerationist approach, combined with the capabilities of tools like o1-preview and the imminent AI agents, could push us to a tipping point. Are we ready to deal with the consequences of unchecked AI development, or are we heading toward a labor crisis catastrophe beyond anyone’s comprehension today?

The future of work is changing faster than we think. The only question that remains is: Will we act in time to ensure that AI benefits everyone—not just the few at the top?

The Rise of OpenAI

Founded in 2015 as a nonprofit with a mission to safely guide AI development, OpenAI originally aimed to democratize artificial intelligence. With collaboration, safety, and transparency as its core values, it was seen as a pillar of responsible AI development. In 2019, however, OpenAI pivoted, forming a for-profit subsidiary and securing massive investments from companies like Microsoft, which is now reportedly entitled to nearly half of the profits of that for-profit arm.

This change has fueled an explosion of innovation, with OpenAI releasing AI tools that have transformed industries, such as ChatGPT. The ability of these models to generate human-like text, solve problems, and even reason in complex scenarios has captivated businesses and individuals alike, creating massive demand.

The Cost of Accelerationism

While the company’s success has been undeniable, it comes with significant risks. OpenAI's CEO, Sam Altman, has embraced a philosophy known as accelerationism—the idea of pushing AI development forward at all costs, in the belief that its benefits will ultimately outweigh any risks. This drive for rapid progress has alarmed many within and outside the company.

In recent months, several key figures, including Chief Technology Officer Mira Murati and Chief Research Officer Bob McGrew, have resigned from OpenAI, citing concerns over safety. These departures reflect deeper philosophical divisions over whether the company is prioritizing speed and profit over responsible development. Some, like Ilya Sutskever, a co-founder who also recently left, have raised alarms about the dangers of pursuing advanced AI models without rigorous safeguards.

One of the most concerning aspects of OpenAI’s recent behavior is the push toward artificial general intelligence (AGI)—machines capable of human-level problem-solving. Experts worry that if AGI is reached too quickly without adequate control measures, it could lead to unintended consequences, including autonomous cyberattacks, economic upheavals, and even the development of new weapons.

Profit Over Safety?

The shift to a for-profit model has brought OpenAI substantial financial backing, but it has also intensified pressure to release products faster, with fewer checks and balances. OpenAI’s latest release, a project known as o1, has sparked concerns for its ability to reason like a human. While revolutionary, experts have urged for more rigorous testing before such powerful tools are made available to the public.

Moreover, the company’s financial motivations have added to the debate. With projected losses of $5 billion this year, OpenAI has secured billions in capital from partners like Microsoft and Nvidia. The need to turn a profit, combined with the steep cost of running AI models, has raised questions about whether safety is taking a back seat to revenue generation.

What’s at Stake?

The departure of safety-focused employees, including those working on the now-dissolved “superalignment” team, has shifted attention to the very real dangers of unregulated AI. Former OpenAI staff members, like William Saunders, have even testified before the U.S. Senate, warning that the unchecked acceleration of AI could lead to catastrophic risks for society. These include economic disruptions as AI displaces millions of workers, as well as the potential for systems to autonomously launch cyberattacks or create biological weapons.

In short, as OpenAI continues to push boundaries in the AI space, it must balance innovation with responsibility. While technological breakthroughs like ChatGPT and o1 represent the future, the company’s accelerationist approach threatens to outpace the necessary safety measures needed to ensure these tools benefit society without causing harm.

A Call for Responsible Innovation

As we stand on the precipice of AGI, it’s critical that companies like OpenAI take a step back and consider the broader impact of their actions. Profit and progress should not come at the expense of safety, transparency, and ethical oversight. The decisions made today will shape the future of AI, and if caution is not exercised, we may face unintended consequences with no easy solutions.

In the race to build the most advanced AI, we must ask ourselves: Are we prepared for the consequences? OpenAI’s success is undeniable, but the cost of this success may be too high if safety and ethical considerations are not given the attention they deserve.

The future of AI holds immense promise, but it also carries risks. It’s time for OpenAI—and the industry as a whole—to ensure that the pursuit of innovation is matched with a commitment to responsible development. Otherwise, we may be building a future that we cannot control.
