OpenAI returns old models to ChatGPT as Sam Altman admits 'bumpy' GPT-5 rollout

The pressure is on for OpenAI to prove that GPT-5 isn't just an incremental update, but a true step forward.
Sam Altman says OpenAI has yet to crack AGI.
The OpenAI CEO said that while the highly anticipated GPT-5, which launched Thursday, is a major advancement, it isn't what he considers artificial general intelligence, a still theoretical threshold where AI can reason like humans.
Developing AGI that benefits all of humanity is OpenAI's core mission.
"This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we're still missing something quite important, or many things quite important," Altman told reporters during a press call on Wednesday before the release of GPT-5.
One of those missing elements, Altman said, is the model's ability to learn on its own.
"One big one is, you know, this is not a model that continuously learns as it's deployed from the new things it finds, which is something that to me feels like AGI. But the level of intelligence here, the level of capability, it feels like a huge improvement," he said.
The exact definition of AGI and how far away the world-changing technology might be are topics of much debate in the AI industry.
Some AI leaders, like Meta's chief AI scientist, Yann LeCun, have said we may still be "decades" away.
Altman said that, measured against OpenAI's previous releases, GPT-5 is still a step in the right direction.
"If I could go back five years before GPT-3, and you told me we have this now, I'd be like, that's a significant fraction of the way to something very AGI-like," he said on Wednesday's call.
In an earlier blog post, Altman wrote that he and OpenAI's cofounders "started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history."
While AGI remains the company's mission, Altman says OpenAI is already looking beyond it to superintelligence, a still theoretical advancement in which artificial intelligence can reason far beyond human capability.
"Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity," Altman wrote in January.
Silicon Valley's AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year), with potentially $100 million in the first year alone, it shattered every historical precedent for scientific and technical compensation we can find on record. That includes salaries during the development of major scientific milestones of the 20th century.
The New York Times reported that Deitke had cofounded a startup called Vercept and previously led the development of Molmo, a multimodal AI system, at the Allen Institute for Artificial Intelligence. His expertise in systems that juggle images, sounds, and text (exactly the kind of technology Meta wants to build) made him a prime target for recruitment. But he's not alone: Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years. What's going on?
These astronomical sums reflect what tech companies believe is at stake: a race to create artificial general intelligence (AGI) or superintelligence, machines capable of performing intellectual tasks at or beyond the human level. Meta, Google, OpenAI, and others are betting that whoever achieves this breakthrough first could dominate markets worth trillions. Whether this vision is realistic or merely Silicon Valley hype, it's driving compensation to unprecedented levels.
As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today's AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn't work against human interests.
Unfortunately, we don't have anything as elegant as Isaac Asimov's Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) explaining how to develop AGI safely.
It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm."