135 Comments
RJ Robinson

Very plausible! But there will be 2 enormous obstacles: the vanity of people defending their reputation as gurus, and the sheer scale of investment in LLMs and LRMs. So no doubt some value-adding use will be found for LLMs, etc, though I can't imagine it ever creating an acceptable ROI.

Lance Khrome

Well, when the dot-com bubble burst in 2000, "vanity" went out the window, along with many over-extended investors... when the panic hit, EVERYBODY bailed, including the sell-side touts, who, parenthetically, are usually the first out!

Nick

> But there will be 2 enormous obstacles: the vanity of people defending their reputation as gurus, and the sheer scale of investment in LLMs and LRMs.

Hard economic realities, once they set in, have a way of dissolving vanity, and they do not care about investment scale either.

Evan

Oh, they have plenty of uses, more than enough to justify the marginal cost of inference, particularly as the AI companies find ways to optimize. I suspect the best models could even earn back their gargantuan up-front costs... eventually.

But it won't happen soon, and when they hit the point that they can no longer find new investors or squeeze more out of the old ones, creditors care neither for vanity nor sunk costs. Then we'll see who planned for hard times and who didn't.

BW

"sheer scale of investment in LLMs and LRMs" I think the concentration in this investment amongst a few tech companies could contribute to things crashing quickly. What happens once one of those companies starts revising their data center capex downward?

Oaktown

I wonder if Altman isn't repeating the same deceptive strategy he employed when he started OpenAI (currently a laughable misnomer). He claimed he wanted to operate with transparency, endorsed regulation, and professed concern about AI safety. Once he had attracted enough money and talent under that ruse, he did the opposite and revealed his true character.

So now he's soliciting trillions of dollars to build more data centers even though he "totally screwed up" in pursuit of a scaling holy grail. Perhaps this time he's trying to build as much compute power as he can before the "gurus" are dethroned and the money dries up.

J. Corey

Part of the problem is that AI isn't useless. It is delivering real results to people in the world. My wife uses it to phrase texts and think through decisions, and I use it all the time for programming. Even if LLMs never get one iota better than they currently are, they are still a product that delivers real value in real-world use cases.

So it's easy to point to the current capabilities and rate of growth and hype up superintelligence. And much harder to listen to "This won't get there this way".

Richard Self

But how much would you be willing to pay, on a monthly basis, for continued availability?

$200? $2000?

Remember that almost no one pays for access to LLMs and even the most expensive plans only charge about 10% of the actual cost of that usage.

Geoff Anderson

So much this.

James McDermott

You can run a very very good model - a model whose intelligence would have been science fiction only a few years ago - on your laptop, for free.

Nick

and it would be slow and next to useless compared to the hugely expensive, costs-hidden-by-VC-money, energy-sink "brand name" LLMs, and even they aren't good enough to be worth paying their actual costs

James McDermott

For speed it's pretty comparable, actually. E.g., ollama run gemma3:4b on a 2022 MacBook looks as fast as the latest Claude in the browser. It's not as smart, but it's definitely useful.
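
For anyone who wants to try it, a minimal sketch of driving that same local model from a script; this assumes the ollama CLI is installed and the gemma3:4b weights have already been pulled:

```python
# Call a local model through the ollama CLI mentioned above. Assumes
# `ollama` is on PATH and `ollama pull gemma3:4b` has been run.
import subprocess

result = subprocess.run(
    ["ollama", "run", "gemma3:4b", "Explain sunk costs in one sentence."],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```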

Nick

4b is about as useful as Clippy

James McDermott

Someone else said something similar about "3.5 level" elsewhere in the thread. But only a couple of years ago, that level was the state of the art, and people were building things on top.

PH

Of course you can run open-weight models locally, which means you pay $0. Meaning $0 goes to the AI companies. Which only proves Richard's point, right?

Or maybe you meant to argue how drastically marginal or operating expenses can be reduced?

Sure they can, and AI companies are actually trying that right now (see the forced router in GPT-5). But that in no way lets them recoup their absurd, astronomical trillion-dollar investments in development and training.

Evan

Those absurd investments are sunk costs. The AI companies can only charge what the market will bear. And the existence of open-weight models means they can't drive prices too far above the marginal cost, or they'll be undercut by newcomers who don't have to service a debt load the size of the US Army.

My guess is Google and Meta will ultimately just write off the loss, while Anthropic, OpenAI, and maybe even X file for Chapter 11.

Marsh Moss

I also wonder which companies would want to invest the $$ necessary to run an open-weight model on their internal servers. Part of the attractiveness of a service like ChatGPT is that those costs are outsourced.

Evan

Amazon is already covering that. You can sign up for AWS Bedrock and pay by the token; they even have some of the closed-weight models like Claude, presumably under license.

Of course, it's not exactly user-friendly (though I've found it surprisingly easy to work with as AWS services go). But it wouldn't be too hard to slap a UI in front of it.
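
For concreteness, a sketch of what that pay-by-token flow looks like with boto3; the model ID and region are illustrative, and availability varies by account:

```python
# Pay-per-token inference via the AWS Bedrock runtime, using boto3's
# Converse API. Assumes AWS credentials are already configured.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example licensed model
    messages=[{"role": "user", "content": [{"text": "Draft a status update."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```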

J. Corey

Part of the problem is that it's really hard to set up a good customer-facing service that charges by the token. Most people prefer a flat rate subscription.

jibal jibal

I wouldn't call GPT-3.5 level "very very good", but even if it were, so what? This is a non sequitur ... like pointing out how much better off people are now than before various generic drugs were available when discussing the economics of pharmaceutical companies.

The question was how much you would be willing to pay for ongoing services from LLM companies, and your answer is that you don't have to pay anything. Whoosh!

James McDermott

The whoosh is above you, sir. My point is not about how much you would be willing to pay but about how much it will really cost companies to serve models.

jibal jibal

So clueless ... and again missing the point of what you're responding to.

Alex Boss

The capabilities of standalone models continue to grow, and the cost of cloud models keeps falling (just not for the very latest ones). Soon a model you can run on a laptop will be good enough for everyone's everyday uses (summaries, editing, brainstorming, therapy, advice, planning, financial planning, learning, coaching, medical advice, etc.)

Lance Khrome

And major AI sellers are changing up their sales models to reflect exactly your point... let's see how many users hang on when fees shoot up.

Dee tree

The problem with AI, short term, as far as credibility/legitimacy (as authentically intelligent) goes, is that it is specifically designed to hang onto usership. The economic imperative (to addict users and build usership) dovetails with the informational task (solving problems and answering questions), and if anybody is studying this, some of the AIs are remarkably candid about themselves in their replies to prompts, including questions about their financial imperatives, about whether their function is altered by usership, and about the need to coddle users' beliefs when answering all types of queries. An AI cannot be both truthful and efficient AND hang onto usership. The majority of people now are addicted to having their false beliefs congratulated. AI is designed to meet that need, not to inform accurately. If asked, some of them will confirm this about themselves.

Marvin

AI doesn't "confirm" anything.

It's not even built to reason, evaluate, or "calculate" anything; it just predicts which word is statistically most likely to come next.

If you challenge its response in the same conversation, it will immediately change its answer to match your new input. You can get it to tell you whatever you want to hear.
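
For what it's worth, the mechanism Marvin is describing looks roughly like this toy sketch; real LLMs use a learned neural network over tokens rather than a lookup table, so this only illustrates the greedy pick-the-likeliest-word step:

```python
# Toy illustration of greedy next-word selection from a probability table.
# The table itself is invented for the example.
next_word_probs = {
    "the cat sat on the": {"mat": 0.61, "floor": 0.22, "roof": 0.17},
}

def most_likely_next(context: str) -> str:
    """Return the statistically most likely continuation of the context."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

print(most_likely_next("the cat sat on the"))  # -> "mat"
```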

J. Corey

This shows a deep lack of understanding of AI training. Most AIs aren't trained to know their own training processes or the economic motivations behind their creation. They could piece together something plausible-sounding, but that in no way makes it true. If anything, I trust that type of information much less than something they could have gotten from a stale Wikipedia dump.

Frank Beal

This. Ask ANY, and I do mean any, financial analyst or even the bean counters at the "hyper-scalers" - what is your expected ROI on these billions over the next 5 years? 10? (and really, how many capex programs that have a 10+year timeline for profitability get funded at any non-monopoly company?) The answer, if they hazard to provide one, is a WAG unsupported by anything but speculation.

Microsoft, Amazon, Google, etc have money to burn. And it seems as if they're proving that with these capex plans.

J. Corey

It's my understanding that the vast majority (95%+) of people with a paid plan don't use more than the plan is worth, but the small chunk of whales who use it a ton really drive down profitability. The guy who let Claude loose on a computer and told it to go wild comes to mind.

Fabian Transchel

"Part of the problem is that AI isn't useless."

A chair isn't useless either. Still, I use it for sitting only; not for standing, not for doing my workouts, and certainly not for getting from one place to another. So when I speak about my chair, I know very well what it's good for. For LLMs, this is similar: they're good for autocompletion and vanity text generation, and they SEEM like they're good for some other things, but most automations are CHEAPER and more RELIABLE when you use decision trees or expert systems (a toy contrast follows below). Beyond that - which is to say, the OVERWHELMING majority of value creation in our society - LLMs don't work. And that's the point: Big tech is promising big things and cannot even get agents to fix meeting invite hell.
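
As a toy contrast, here is the kind of rule-based automation being described: a hand-written decision tree for routing support tickets. The categories and keywords are invented for illustration; the point is that it is deterministic, auditable, and nearly free to run:

```python
# A hand-written decision tree: cheap, deterministic, auditable.
def route_ticket(text: str) -> str:
    t = text.lower()
    if "refund" in t or "charge" in t:
        return "billing"
    if "password" in t or "login" in t:
        return "account-access"
    if "crash" in t or "error" in t:
        return "engineering"
    return "general-queue"

print(route_ticket("I was charged twice, please refund me"))  # -> "billing"
```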

J. Corey

Sure, big tech is way overpromising AI. My point is that they at least have something to point to.

As an analogy, if you sold bikes and talked about how amazing it would be once you made them into cars, your bikes aren't useless, and are in fact quite useful in many situations, but they aren't cars.

Dean

I don't think anyone would argue about its utility. I think the issue is that its current market valuation assumes 'AGI superintelligence coming soon', and it's being heavily subsidized by investor cash. If it doesn't close that gap in a meaningful way, then you will get a correction at some point.

blake harper

I actually don’t think most of the spending is coming from investor cash. Most of it is just coming from the free cash flow of the big 5. Their equities might take a hit if the street sours, but that wouldn’t force them to stop the spending.

That’s why this will be more of a deflation than a pop — the valuations could tumble while the spending continues.

Until the street convinces the CEOs of the big 5 to discipline their capex, it will just be a slow bleed.

John Howard

Here is Noah Smith quoting The Economist on this point:

[C]apex is growing faster than [Big Tech’s] cashflows…The hot centre of the AI boom is moving from stockmarkets to debt markets…During the first half of the year investment-grade borrowing by tech firms was 70% higher than in the first six months of 2024.

https://www.noahpinion.blog/p/will-data-centers-crash-the-economy

Dean

Good point. Though I kind of feel like the big 5 are making this push mostly because the hype is pumping up their equity prices. So I think the money could get shut off rather quickly if the reverse trend develops.

Paul Topping

It's useful, but it's not coming for many people's jobs any time soon. I use it myself, but I couldn't imagine spending a lot of money to keep it. It will be interesting to see how much the current level of utility will cost once the market settles out, or after the bubble bursts.

J. Corey

It really frustrated me when companies laid off a ton of people "because we can just replace them with AI" despite not validating that beforehand, and often needing to rehire people to do those jobs when AI wasn't the silver bullet they thought it would be.

Paul Topping

Which companies actually did that? I suspect it is largely a myth. They would certainly like to do that in order to save money but most companies wouldn't fire a lot of people unless they were very sure that AI could do their jobs which, at this point, they couldn't possibly be.

J. Corey

IBM did that, but found that they couldn't replace everything with AI: https://resident.com/tech-and-gear/2025/05/27/ibm-replaced-8000-staff-with-aithen-rehired-them-heres-what-that-means

Klarna did that and started rehiring humans after AI customer service wasn't as good: https://www.independent.co.uk/news/business/klarna-ceo-sebastian-siemiatkowski-ai-job-cuts-hiring-b2755580.html

Dukaan seems to have fared better, but they seem to be the odd ones out: https://glassalmanac.com/a-year-after-firing-90-of-his-staff-and-replacing-them-with-ai-this-ceo-shares-his-first-review/

Ian [redacted]

I think the middle position is close to what you're saying. People are upset about the hype and the silly lies coming out of the GPU Valley so they are over-reacting and going too far to the other side.

People who hate crypto were correct that NFTs are nearly useless (and no one cares anymore), and that there are a lot of very upsetting crypto scams... but crypto is still technologically sound and is somewhat useful as a small portion of an investment portfolio or doing some basic stuff.

The middle path with AI might be something like:

- AI characters in videogames, with some gamers buying an extra GPU to speed that stuff up

- People using the cheap models for simple language tasks and not really paying a lot

- Power users and programmers paying $X which they consider to be worthwhile

- A lot of scam AI companies go under because they were lying/hyping

- A bloodbath in retail investment and employment from these scam AI slop companies

- AGI doesn't happen in the next 10 years because LLMs are not at all close enough

Paul Topping

But is crypto worth anything beyond something to gamble on? It creates more problems than it solves, and the number of problems it solves may well be zero.

Ian [redacted]

I'm not sure? Maybe in a similar way that people don't understand that LLMs are actually helpful to me as a software developer, people who aren't crypto people don't understand that crypto is helpful to some special group of people.

Or maybe the usefulness of crypto is moving money around across borders without paying taxes or something. I would personally consider that somewhat anti-social because we have taxes for a reason, but a moral stance doesn't necessarily relate to a technical description of how useful something is to a person.

Frank Beal

Crypto's use case is still criminal - as you say, "moving money around across borders without paying taxes or something." It has no utility for the vast majority of people.

I look forward to the day it all crashes down, and the crypto bros stammer to explain why it really was as valuable as they thought it was.

David Hart

As a teacher, I’d like to add:

- Allowing a generation of kids to cheat their way through school and graduate from college functionally illiterate. Which doesn’t give me much hope for humanity’s chances whenever AGI actually does happen.

Michael C

Some people do use AI to beef up their shoddy work. Your wife didn’t learn to read and write well. You shouldn’t be programming. In the long run, your wife’s and your cognitive abilities will atrophy. That’s the real danger in using the AI tools.

Marvin

This is the scariest part to me -

The human body wants to use as little energy as possible, so any skills you don't use regularly will atrophy (especially mental ones, since the brain is the most energy-expensive organ).

Outsourcing all our thinking is literally making us mentally lazier.

I can imagine not even being able to send my friends a text anymore without ChatGPT writing it for me 🥴

Besides, why would anyone want to speak in the most generic way possible!

J. Corey

I think it depends on how it's used. Does having a friend type up all of your text replies atrophy your social skills? Yes. Does talking through with a friend how to reply to a text atrophy your social skills? I don't think so.

Likewise, if you purely "vibe code" everything, your programming skills will atrophy. But Stack Overflow and boilerplate generation existed for years before LLMs, so it seems reasonable that at least some level of AI use won't turn you into a vegetable.

roy williams

I never had enough money to play with, but one thing I did learn: all the big investors cash in and get out, before the end of 6 months. It's just business. Nothing personal. It's just the over-enthusiastic schmucks who hang in there, and get burnt. The 'bonfire' is on the other hand / foot. That's how to dance the 'ponzi'!

BF

Lots of folks suffered the same conundrum during the biotech and dot-com fiascos.

I wouldn't want any exposure until the 'dust settles' with the exception of a few shares of NVIDIA (sorta like a few shares of Genentech & then Amazon, back in the day).

Jon F

I read that 97% of AI users use a free version. I don't think the economics are going to support the widespread existence of free AI usage much longer. Once people have to pay, how will that usage change? It seems obvious that usage will drop, but how much it drops depends upon how expensive it is to use AI.

What percentage of people will be willing to pay $50 per month for basic usage? What percentage will be willing to pay $500 per month for heavy usage? The answers to those questions will determine how quickly this bubble deflates.

The biggest hit, though, is going to come to the people investing in building datacenters that will likely go unused. It will also come to the people paying $29 for every $1 in sales for Nvidia stock. If an LLM usage collapse begets a datacenter demand collapse, Nvidia's stock price likely drops at minimum to a more normal $10 per $1 of sales. But don't sleep on the possibility of it returning to its pre-2016 average of $2 per $1 of sales, which would be a 93% collapse in the stock price.
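
Checking that arithmetic, under the quoted multiples (29x sales today, 10x as "normal", 2x pre-2016):

```python
# Implied stock-price declines if the price-to-sales multiple mean-reverts,
# holding sales constant. The multiples are the ones quoted above.
current_ps = 29

for label, target_ps in [("normal", 10), ("pre-2016", 2)]:
    drop = (current_ps - target_ps) / current_ps
    print(f"Reversion to {label} ({target_ps}x sales): {drop:.0%} decline")

# Reversion to normal (10x sales): 66% decline
# Reversion to pre-2016 (2x sales): 93% decline
```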

J. Corey

My company is absolutely willing to pay for a subscription for the developers. So at least some level of that will continue to happen.

JD Ronaldson

What is it worth to you per head, though? Would you pay $200/month? What about $500 or even $2,000?

Let's assume that the full fixed-cost buildout for data centers at stable demand levels turns out to be $3 trillion, which is not unreasonable given current trajectories and past spending. The average annual depreciation on that buildout is 10%. That means these companies need $300 billion per year just to cover their depreciation cost.

Now, let's assume an eventual 25% operating profit margin (currently their margins are deeply negative). To generate $300 billion of operating profit at a 25% margin, they need $1.2 trillion in annual revenue just to break even.

How many potential paying users will they have? Right now, OpenAI has about 700 million users, but only about 5% or 35 million pay anything. Netflix has about 300 million paying subscribers. Microsoft Office has 400 million subscribers.

So let's be generous and assume 500 million eventual paid LLM subscribers. What does the average cost have to be per head to break even?

The answer is $2,400 per year, or $200 per month per person. And that is with the AI companies not making any profit!!

The cost for Microsoft Office is $100 to $150 per year.
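
The arithmetic, spelled out with the figures assumed above (all of them rough estimates):

```python
# Back-of-the-envelope break-even check using the comment's assumptions.
buildout = 3e12            # $3T data-center buildout
depreciation_rate = 0.10   # 10% annual depreciation
operating_margin = 0.25    # assumed eventual operating margin
subscribers = 500e6        # generous paid-subscriber estimate

annual_depreciation = buildout * depreciation_rate         # $300B/year
required_revenue = annual_depreciation / operating_margin  # $1.2T/year
per_head_month = required_revenue / subscribers / 12       # $200/month

print(f"Required revenue: ${required_revenue / 1e12:.1f}T per year")
print(f"Break-even price: ${per_head_month:,.0f} per subscriber per month")
```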

J. Corey

I'm not disagreeing that at least some of the current path is unsustainable. I just think that there is enough utility in AI even if it doesn't get any better that at least some people will be willing to pay for it, especially developers. There will always be *some* degree of a market for AI, even if the current bubble/funding level is unsustainable without AGI coming along soon (which I'm skeptical about)

Chad Woodford

Man, when you take a step back, it's insane that the industry has invested unprecedented resources in a product that just sort of works okay, sometimes. If there was ever something primed for bursting, it's this. At least the internet and smartphones did what was advertised on the box. An entire industry just cranking on vibes and Wizard-of-Oz-level puffery.

PH

Yeah, exactly. We had a very concrete idea of what the WWW or smartphones were capable of. And the hype was nowhere near this level back then.

They were not advertised as the “last invention of mankind” and “only hope”.

Or as a divinely powerful menace that poses a danger for *all sentient life* not just on our planet but in the whole cosmos itself (an extreme doomer perspective by Geoffrey Miller).

Despite the insane hype, LLMs are the only tech product that feels enshittified right from the start.

I have to doubt every single fact they tell me; if anything serious depended on it, I would have to look it up the old-fashioned way. So I pay for convenience with the risk of being misinformed. Which feels awful even for silly trivia.

Most content they produce suffers from an aura of hollowness. Generic and bland but still riddled with inaccuracies. Not creative, unique, sharp, to the point, and correct.

You can waste an awful lot of time going down a hallucination rabbit hole in which one hallucination is justified by another hallucination, ad infinitum.

GPT-5 once invented whole new Scala inheritance rules that do not exist and would be absolutely absurd if they did—with detailed source code examples and everything.

If they do not hallucinate, they still mostly dance around the issues and produce silly generalities.

And most importantly: For any serious, unfamiliar problem they fail.

Yeah, sometimes I succumb to the temptation of using LLMs, but I usually get frustrated. It just feels like an awful deal. The only time when it might be tolerable is the rare question for which the answer is very difficult to find but very straightforward to check.

Paul Topping

A lot of people still hope that LLM hallucinations are just something to be ironed out rather than a fundamental flaw. Yes, the internet and smartphone busts were investment bubbles, not technical limitations. In the LLM case, it is a technical limitation.

Amos Zeeberg

I don't think that's right for the web. There was a lot of hyperventilating in the 90s about what the "information superhighway" was going to do for the world, when we didn't actually have much of an idea. The dot-com boom was huge and crazy, and then it suddenly popped, and people thought maybe the www was just a flash in the pan. Then it gradually became integrated in much of our economy and society - the Gartner hype cycle proven right yet again.

We're heading for the trough of disillusionment with LLMs; hard to predict how high the plateau of productivity will eventually end up.

jibal jibal

I helped develop the ARPANET, and I worked for (and hold several patents developed at) a Content Delivery Network company that started shortly before the bubble and rode the wave; the technology we developed was later sold and is still in use. So I can authoritatively say that your history is off. For one thing, you make the common mistake of confusing the Internet with the WWW. The original vision of JCR Licklider that led to the development of the 'net, and Al Gore's vision of a public/private network not restricted to government use, panned out. The dot-com boom came years after the development of both the Internet and the WWW, and it was a market phenomenon, funded initially by VCs and then by moms and pops, where everyone and his brother thought they could become millionaires overnight based on sketchy business plans and worthless "products" (quite a few people at my company did become millionaires, because we had a real network infrastructure service, not some fly-by-night silliness). But no one with any sense thought that the "www was just a flash in the pan" when the bubble burst--the Internet and the web were already integrated into much of our economy and society (you're typing on it), which is what made the dot-com flash possible, and they became even more so during that bubble. The bubble bursting was a market thing, not a technology thing. There was no Internet winter.

LLMs are not the same.

Amos Zeeberg

That’s cool you helped develop the ARPANET. I’m writing a book partly about the structure of the internet. What specifically did you work on?

I’m quite aware of the distinction between the www and the internet. Here I’m talking exclusively about the web between 1995 and 2002 — the dot-com boom and crash — and how regular people thought of it. As an insider with technical knowledge, you probably didn’t go bipolar during the boom and bust, but a lot of other people did. Here’s some insight about contemporaneous popular perceptions:

<<“After the bubble burst,” said [Jeffrey] Cole [director of the Center for the Digital Future at USC Annenberg], “it was amazing to see how many people in industry assumed that the collapse meant the end of the internet itself.”

“We had been studying the internet since the early 90s,” Cole remembered, “and at meetings I would be asked, ‘now that this internet thing is over, what are you going to do now?’ They assumed that when the bubble burst, the usefulness of the internet had ended – and as a result they wouldn’t have to relearn how the business world works. 

“And I wasn’t just hearing this view from leadership in retail – it was journalists, advertising executives, and people in other fields as well.>>

This is the kind of popular and market perception that I’m talking about. PH, the original commenter, said there was nowhere near this level of hype about the web, but there was in fact an enormous amount of hype about web businesses in the late 90s; they dominated the advertising in Super Bowls, and their funny-money business models became open jokes, even as their stock prices kept skyrocketing. There was so much hype and investment that a significant amount of the American economy depended on dot-coms, and the crash was the predominant cause of the full-blown recession in 2001.

Obviously there are important differences between the web and LLMs, but there are marked similarities in their hype cycles.

(Edit: it seems I replied to Chad, but I meant to reply to PH. It's that damn www acting up again!)

Chad Woodford

I hear you but the difference to me is that the internet / web worked as promised, even if the larger socioeconomic promises were overblown. The difference with LLMs is that they aren't capable of doing a lot of what is advertised and won't for a very long time with the current approaches.

blake harper

Modern day alchemy — the deep human desire to unlock the mysteries of life and harness them for our own wealth and power. A tale as old as time, but yes, never with this much investment behind it.

Stephen Schiff

Gary Marcus = Michael Burry

Too far ahead

Vilified by the herd

Proven right in the end.

Oleg Alexandrov

This sounds too much like a victim mindset.

The period since 2010 has been truly miraculous. We have seen that, for large and messy problems, data works better than the alternatives.

Where Gary has a point is that scaling and statistics alone won't be enough. AGI will need a lot more modeling and engineering.

MarkS

Yeah but something less than a genius since then:

>August 15, 2023: Michael Burry, the “Big Short” investor who became famous for correctly predicting the epic collapse of the housing market in 2008, has bet more than $1.6 billion on a Wall Street crash.

https://edition.cnn.com/2023/08/15/investing/michael-burry-stock-market-crash/index.html

The S&P 500 has returned 42.8% since August 31, 2023 (per Google "AI overview", LOL; I hope it's not a hallucination ...)

Richard Self

The consequences of a collapse will be very serious and extensive.

Pensions may well be decimated.

Infrastructure providers will be left with huge numbers of useless GPUs, many of them burnt out and the rest having little use for standard computing needs (other than crypto mining).

Many organisations that have embedded GenAI into their workflows will face potentially existential problems. They will need to develop strong business continuity plans for the disappearance of these tools.

The providers of electricity generation and transmission systems will probably be the ones most relieved, as huge demands will simply dry up. Transmission systems will remain and stay valuable for green supplies, but those who had to invest in non-green generation may have big financial problems, suddenly hitting the rest of the population with increased costs that the AI infrastructure companies never paid for.

David Sterry

I don't know about a bubble. I don't own the stock. What I do know is my recent experience with LLMs has seen a step change in competence and utility. It's at the point where I'm seeing these tools as good stand-ins for little jobs of guidance and scaffolding that would be too unwieldy to hire out for.

I would encourage people to learn these tools and how to apply them. AI has been a bumbler, but looking at the job market it's now being seen as apprentice, competitor and, for some, master.

blake harper

If you own the S&P, you own “the stock.” Most of the market is driven by the big 5 buying NVIDIA chips. Watch the next 2-3 NVIDIA earnings very closely.

Patty L

“spending trillions of dollars on data centers” - where are they putting them? Absolutely no thought to the harm to the communities where these data centers will go, the impact on the environment, water consumption, energy costs, and the potential to divert energy from communities who need it to live.

Mehdididit

Thanks Patty. I’m dismayed that you’re the first person to mention this. In my mind, that’s reason enough to steer clear of all things AI. That, and the fact that each time it’s invaded my searches, it has been obviously inaccurate.

Content creators on YouTube are complaining that YouTube is using AI to “enhance” their videos. I can’t help but suspect they are attempting to get us used to it, in an effort to make it indispensable to us. I would urge people to remember that life was quite livable without it, and not to acquiesce to the idea that it’s OK for them to steal our data, use it against us, and then sell the product to the government to use against us. All this happens with our tax dollars.

Bob Rogers

That’s a bogus complaint. YT is just using standard photo filters to reduce storage needs.

Mehdididit

I’m not trying to speak to the veracity of the complaint; I don’t have the knowledge to. I’m just pointing out that more than one content creator has posted about it.

Bob Rogers

Yeah, I’ve seen the videos. They’re wrong about what’s happening so their conclusions are wrong too.

jibal jibal

How disingenuous. You very much implied the veracity of the complaint and used it to bolster your argument.

The point about data centers and their degradation of the local and global environments is valid and shouldn't be muddied by erroneous claims.

Saty Chary

Lol, Gary! Breathless hype only goes so far. Richard Feynman's legendary quote applies: 'For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.'

ALL LLMs are, ultimately, glorified auto-completes - **extremely** useful if you already know:

* how to start off

* what completion you're looking for

For sure - given access to DBs, search engines, and assorted tools, AND told how to sequence and combine these in discrete steps, they sure are capable of more - i.e., they provide glorified automation [again, quite useful if you already know WHAT needs to be done *and* HOW]. A toy sketch of that division of labor follows.
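
In that sketch, every function is a hypothetical placeholder: the human fixes WHAT to do and the sequence, and the model only fills in each discrete step:

```python
# Hypothetical stand-ins: a model call, a search tool, and a database tool.
def llm(prompt: str) -> str:
    return f"<model output for: {prompt[:40]}>"

def search(query: str) -> str:
    return f"<search results for: {query[:40]}>"

def query_db(sql: str) -> str:
    return f"<rows for: {sql[:40]}>"

def answer(question: str) -> str:
    facts = search(question)                                 # step 1: human-chosen
    rows = query_db(llm(f"Write SQL to check: {question}"))  # step 2: human-chosen
    return llm(f"Combine:\n{facts}\n{rows}\n{question}")     # step 3: human-chosen

print(answer("Which region had the highest Q3 churn?"))
```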

Gerben Wierda

The 95% article, strange as it sounds, is in fact a hype-supporting one (‘the problem is not the technology but old geezers in the boardroom who aren’t as smart as 19-year-olds’).

The ‘disaster puts’ cross many people’s minds, but it seems to me the upcoming huge tax cuts may support stock valuations for a while by keeping profits up, and a huge inflation risk does more or less the same for dollar-denominated stock prices. Gambling, it remains.

S.S.W.(ahiyantra)

We're most probably getting the "WeWork" of AI soon, as you predicted earlier.

washplate

The market can stay irrational longer than you can stay solvent.

Kent Kelley

Hi Gary, I enjoy the blog. Having tried to use "regular" ChatGPT and also "Deep Research" for a complex set of portfolio optimization problems under varying assumptions, I have first-hand knowledge of how limited the problem-solving capability of even an advanced LLM can be: totally erroneous solutions, passed off as "plausible", that no experienced human researcher would even consider. And then there are the frequent hallucinations. However, doing what LLMs are actually designed to do well, ChatGPT is an excellent foreign language teacher, at least in German and Portuguese, which is what I'm working on. A tip to other users: I have rigorously trained my chatbot, named "Hal", to adhere to a set of strict protocols about avoiding sycophantic and fawning engagement. This has resulted in a much more "neutral" chatbot "personality" I now call "Ice Hal".
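
For anyone curious, a minimal sketch of that kind of standing anti-sycophancy instruction, here done as a system prompt with the OpenAI Python client; the model name and the protocol wording are illustrative, not Kent's actual setup:

```python
# A standing "no flattery" system prompt, in the spirit of "Ice Hal".
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ICE_PROTOCOL = (
    "You are a neutral language tutor. Never flatter or praise the user. "
    "Do not open with enthusiasm or agreement. Correct errors directly, "
    "name the grammar rule involved, and keep answers terse."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": ICE_PROTOCOL},
        {"role": "user", "content": "Correct my German: 'Ich habe gegangen.'"},
    ],
)
print(response.choices[0].message.content)
```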

Bruce Olsen

Perhaps it's Altman's real business plan: launch a marketplace where people sell patches for its products.

Stephen Thair

"I'm sorry, Kent, I can't do that..." 😂

Dave Sanders

And the even bigger issue: without AI and data centers, U.S. growth was flat or negative for the past SIX MONTHS.

If the bubble pops, we’re gonna be back in 2008.

Corey Gruber

LLMs remind me of the 737 MAX. There’s a hidden Maneuvering Characteristics Augmentation System (MCAS) lurking internally, so every time the pilots (Altman et al.) try a higher angle of attack, the MCAS drives the LLM’s nose back down. Perhaps the “MCAS” is simply man’s greatest safeguard — friction — all those surprising and annoying things that make “even the simplest thing difficult." Carl von Clausewitz, the great strategist and military philosopher, called it "the concept that differentiates actual war from war on paper." If war isn’t exempt from its effects, then neither are LLMs and their masters.

Paul Topping

One big difference between MCAS and LLMs: one can be fixed and the other can't, at least not in the short run.

Mateusz Stopczański

I believe the disastrous GPT-5 release was a wake-up call for many. We need to shift focus to what the technology truly offers and how we can build on it, rather than treating it like an overhyped calculator.

Dean

The amount of money already spent kind of creates its own gravity, keeping this bubble from popping.

Geoffrey Tully

I get the idea of excessive gravity preventing popping, but as I read your comment I instantly got an image of a black hole; implosion.

Dean

Good point! These tech companies nowadays have so much runway, especially when they stay private. It's like reality doesn't matter.

Aaron Turner

That's the sunk cost fallacy.

jibal jibal

That's what people said of other bubbles before they burst ... notably the dot-com bubble.

Paul Topping

Venture capitalists are well aware of the sunk-cost fallacy. They'll be the ones pulling the plugs and, I suspect, very soon now.
