Is this the moment when the Generative AI bubble finally deflates?
Hard to say, but definitely not out of the question
For the last few years, I have been warning you that the mania for LLMs would break eventually.
The technical foundation was, as I argued all the way back in the early days of GPT-2, never strong enough to support the hype. Since August 2023 (in essays like What if generative AI turns out to be a dud? and What exactly are the economics of AI?), I have repeatedly warned that the economics don’t make much sense either.
But until recently enthusiasm kept rising and rising, no matter what I said.
Then we all saw what happened with GPT-5; Altman spent years promising the moon, and in the end the long-overdue release didn’t even come close to delivering. A lot of people took note.
Could things be changing? Is reality at least settling in? One never knows, but here are a few potentially significant signs from just the last few days:
Even influencers who have nothing to do with tech are starting to see it.
And for that matter, even Sam Altman himself seems to see it:
OK, that’s a faked image that someone sent me, which was too funny not to share.
But … this (which inspired the faked image) is actually real:
§
Not that Sam is deterred; in his mind, a fuckup is just an excuse to ask for money:
§
Without claiming to know for sure what will happen next (after all, things like stock prices and the valuations of AI startups are as much a matter of unpredictable crowd psychology as of shaky technical and economic fundamentals), I will leave this here: a cartoon I made (with ChatGPT’s help!) in mid-July.
Once the markets really understand this, enthusiasm may indeed collapse fairly quickly.
Very plausible! But there will be two enormous obstacles: the vanity of people defending their reputations as gurus, and the sheer scale of investment in LLMs and LRMs. So no doubt some value-adding use will be found for LLMs and the like, though I can’t imagine it ever producing an acceptable ROI.
Part of the problem is that AI isn’t useless; it is delivering real results to real people. My wife uses it to phrase texts and think through decisions, and I use it all the time for programming. Even if LLMs never get one iota better than they currently are, they are still a product that delivers genuine value in genuine use cases.
So it’s easy to point to the current capabilities and rate of growth and hype up superintelligence, and much harder to listen to “this won’t get there this way.”