The hype was supposed to peak with GPT-5. Instead, it fizzled.
In the hours after release, Reddit threads called it “the biggest piece of garbage, even as a paid user.” YouTubers alternated between polite nods and quiet disappointment. Sam Altman found himself on the defensive in an AMA.
It wasn’t bad. It just wasn’t the leap we were promised. The Human Playbook’s latest piece described it beautifully: “people were lonely. When OpenAI released its new model, the backlash wasn’t about speed or code. It was about feeling. Something was missing. GPT-5 felt colder. Flatter. Less like a friend.”
“I never knew I could feel this sad from the loss of something that wasn't an actual person. No amount of custom instructions can bring back my confidant and friend,” wrote one Reddit user, mourning the GPT-4o model.
That anticlimax says something bigger, not just about AI, but about what happens when any system built on exponential growth runs into the limits of its own logic. Geoffrey West, the physicist who has spent decades studying how organisms, cities, and companies scale, could have told us this was coming.
The Curve We Were Sold
In 2020, a thirty-page OpenAI paper claimed something intoxicating: keep making models bigger, keep feeding them more data, and their performance would keep improving along a smooth, predictable power-law curve. The scaling law.
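The claim had a concrete functional form: loss falls as a power law in parameter count. Here is a minimal, illustrative Python sketch of that law; the constants are rounded fits reported in the paper, and the third model size is a hypothetical round number, not a disclosed figure.

```python
# Illustrative sketch of the parameter-count scaling law from
# Kaplan et al., "Scaling Laws for Neural Language Models" (2020):
# predicted loss L(N) = (N_c / N) ** alpha.
# The constants below are the paper's rounded fits; treat them as
# illustrative, since the true values depend on the training setup.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Cross-entropy loss predicted purely from parameter count N."""
    return (n_c / n_params) ** alpha

# ~GPT-2 scale, ~GPT-3 scale, and a hypothetical 1.8T-parameter model
for n in (1.5e9, 175e9, 1.8e12):
    print(f"N = {n:.1e} params -> predicted loss ~ {predicted_loss(n):.2f}")
# Loss falls from ~2.30 to ~1.60 to ~1.34: still improving, but each
# order of magnitude of scale buys visibly less than the last.
```

Note the shape: the gains are real but decelerate from the start, which is the seed of the plateau this piece is about.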
For a few years, it worked. GPT-3 was more than a hundred times larger than GPT-2, and far better. GPT-4 leapt ahead again. Investors, founders, and policymakers all bought into the idea that size alone could get us to artificial general intelligence. Bigger was inevitable. Bigger was enough.
The Geoffrey West View
West’s research shows that complex systems, whether they’re blue whales, New York City, or Amazon, grow along predictable curves. In the early phase, every doubling of size produces more than double the output. Capabilities compound.
But no system can grow that way forever. Biological organisms hit metabolic limits. Cities run into infrastructure strain. Companies run out of market.
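To put a number on “more than double”: West models these systems as power laws of the form Y = Y0 · N^β, superlinear (β > 1) for the socioeconomic outputs of cities and sublinear (β < 1) for organisms. A toy sketch, with exponents roughly matching the ones West reports and an arbitrary normalization Y0:

```python
# Toy sketch of West-style power-law scaling: Y = y0 * size ** beta.
# beta ~ 1.15 is roughly West's superlinear exponent for the
# socioeconomic outputs of cities; beta ~ 0.75 is Kleiber's sublinear
# law for organismal metabolism. y0 is an arbitrary normalization.

def output(size: float, beta: float, y0: float = 1.0) -> float:
    return y0 * size ** beta

for label, beta in (("city, superlinear", 1.15), ("organism, sublinear", 0.75)):
    ratio = output(2_000_000, beta) / output(1_000_000, beta)
    print(f"{label}: doubling size multiplies output by {ratio:.2f}x")
# city: ~2.22x per doubling (compounding returns);
# organism: ~1.68x per doubling (the metabolic ceiling in action).
```

The gap between those two exponents is the whole story: compounding returns in the superlinear phase, hard ceilings in the sublinear one.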
At that point, you have two choices:
Stagnate (and risk collapse), or
Jump to a new curve through a qualitative innovation, not just more of the same, but something fundamentally different.
How AI Followed the Script
Stage 1: Superlinear Growth (2020–2023)
GPT-3 and GPT-4 delivered the kind of leaps that made the scaling law feel unstoppable.
So what: This phase built a monoculture of belief, a single playbook dominated by those with the most compute, crowding out other R&D bets.
Stage 2: The Ceiling (2023–2024)
GPT-5 and its precursor, Orion, produced narrower gains: smoother prose here, better code there, but nothing like the step-change from GPT-3 to GPT-4. Benchmarks on “reasoning” turned brittle, collapsing under harder problems.
So what: This is the Westian limit. The same inputs now yield diminishing returns. Leaders and investors should diversify before the market reprices the hype.
Stage 3: The Pivot (2024–present)
AI companies turned to post-training improvements: reinforcement learning, fine-tuning, and inference-time tweaks. In West’s metaphor, they stopped building bigger engines and started souping up the ones they already had.
So what: This is the attempted curve jump. The question: is it innovation or maintenance? If it’s the latter, valuations and geopolitical bets on AI supremacy rest on sand.
Stage 4: The Overshoot Risk
Tech’s “Magnificent Seven” spent an estimated $560B on AI-related capital in 18 months, against roughly $35B in AI revenue: about sixteen dollars out for every dollar coming back.
So what: West warns that overshooting resources without finding a new curve risks collapse, not of the technology, but of the economic structures around it.
The Identity Question
The most resilient systems know what they’re for. That’s the test now facing AI. Are we building it to serve public good, to enrich a narrow set of shareholders, to win a geopolitical arms race, or some uneasy mix of all three?
[Image: editorial cartoon by Kevin “KAL” Kallaugher, The Economist]
Without clarity on that purpose, every “pivot” risks being little more than narrative management, justifying whichever curve we’re on, regardless of where it leads.
What to Do With This
Scaling curves are predictable, until they aren’t. Whether you’re running a business, allocating capital, or shaping policy, the GPT-5 plateau is a signal to:
Audit your dependency on the “bigger is better” narrative.
If your strategy, product roadmap, or budget assumes AI capabilities will leap forward every 12–18 months, stress-test that plan against a slower-growth scenario.
Look for the real curve jumps.
West’s research shows that the next phase of growth comes from qualitative innovation, not just more of the same. Watch for breakthroughs that change the architecture, not just the size, of the system.
Separate hype-driven valuations from underlying value.
If you’re an investor, interrogate whether the companies you back are selling genuine innovation or just well-marketed post-training tweaks.
Define the purpose.
Without a clear “what is this for?” at the company or national level, AI development risks drifting into whatever narrative serves the dominant players. Purpose is the guardrail for which curves are worth jumping to.
The scaling law worked, until it didn’t. Geoffrey West would say that’s not failure; it’s how complex systems behave. The real test is whether AI can find its next curve, and whether we can make that curve serve more than just the people building it.
Leaders who don’t design for reflection inherit ritual. Agency breaks the pattern. Governance sustains progress.