Reality Check

Edward Zitron

I'm sick and god-damn tired of this! I have written tens of thousands of words about this and still, to this day, people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside, yet the tech media continues to tell me to get my swimming trunks and take a nice long dip in the pool.

I apologize, this is going to be a little less reserved than usual.

I don't know why I'm the one writing what I'm writing, and I frequently feel weird that I, a part-time blogger and podcaster, am writing the things that I'm writing. Since I put out OpenAI Is A Systemic Risk To The Tech Industry, I've heard nothing in response, as was the case with How Does OpenAI Survive? and OpenAI Is A Bad Business.

There seems to be little concern — or belief — that there is any kind of risk at the heart of OpenAI, a company that spent $9 billion in 2024 to lose $5 billion. While I'd love to add a "because..." here — if only because it's important to be intellectually honest and represent views that directly contrast my own, even if I do so in a somewhat sardonic fashion — nobody seems to actually have a cogent response to how OpenAI rights this ship, other than Hard Forker Casey Newton throwing a full-scale tantrum on a podcast and saying I'm wrong because "inference costs are coming down."

Newton is a nakedly-captured booster who ran an infographic from Anthropic a few weeks ago the likes of which I haven't seen since 2013, but he's far from the only one with a flimsy attachment to reality.

The Information ran a piece a couple of weeks ago that made me furious, which was a surprise because — for the most part — their coverage of tech, and especially AI, has been some of the best around, and they generally avoid the temptation to be shills for shaky and unsustainable tech companies. 

The story claimed that OpenAI was "forecasting revenue topping $125 billion in 2029" based on "selling agents" and "monetizing free users...as a driver to higher revenue." The piece, reported out based on things "...told [to] some potential and current investors," takes great pains to accept literally everything that OpenAI says as perfectly reasonable, if not gospel, even if said things make absolutely no sense.

According to The Information's reporting, OpenAI expects "agents" and "new products" to contribute tens of billions of dollars of revenue, both in the near-term (somehow contributing $3 billion in revenue this year, which I'll get to in a little bit) and in the long-term, with an egregious $25 billion in revenue in 2029 projected to come from "new products." 

If you're wondering what those new products might be, I am too, because The Information doesn't seem to know, and instead of saying "OpenAI has no idea what the fuck they're talking about and is just saying stuff," the outlet chooses instead to publish things with the kind of empty optimism that's indistinguishable from GPT-generated LinkedIn posts.

Check out this fucking chart.

[Chart from The Information: OpenAI Forecasts Revenue Topping $125 Billion in 2029 as Agents, New Products Gain]

I want to be really, really clear: we are nearly in May 2025, and I see no evidence that OpenAI even has a marketable agent product, let alone one that will make it three billion god damn dollars in the next six or seven months.

For context, that’s triple the revenue OpenAI reportedly made from selling access to its models via its APIs — essentially allowing third-party companies to use GPT in their apps — in the entirety of 2024. And those APIs and models actually exist in a meaningful sense, as opposed to whatever the fuck OpenAI’s half-baked Agents stuff is. 

In fact, no, no, I'm not going to be mean, I'm going to explain exactly what The Information is reporting in an objective way, because writing it out really shows how silly it all sounds. I am going to write "they believe" a lot because I must be clear how stupid this is:

  • According to The Information's reporting, they believe that OpenAI will make $3 billion in 2025 from selling access to its agents. This appears to come from SoftBank, which has said it will buy $3 billion worth of OpenAI products annually.
  • Earlier this year, we got a bit of extra information about how SoftBank would use those products. It plans to create a system called Cristal Intelligence that will be a kind of general-purpose AI agent platform for big enterprises. The exact specifics of what it does are vague (shocker, I know), but SoftBank intends to use the technology internally, across its various portfolio companies, as well as market it to other large enterprise companies in Japan.
  • I also want to add that The Information can't keep its story straight on this issue. Back in February, they reported that OpenAI would make $3 billion in revenue only from agents, with a big, beautiful chart that said $3 billion would come from “it,” only to add that “it” would be SoftBank "...[using] OpenAI's products across its companies."
  • Based on these numbers, it seems like SoftBank will be the only customer for OpenAI’s agents. While this won’t be the case — and isn’t, because it excludes anyone willing to pay a few bucks to test it out — it nonetheless doesn’t signal good things for Agents as a mass-market product.
    • Agents do not exist as a product that can be sold at that scale. The Information's own reporting from last week highlighted how OpenAI’s "Operator" agent "struggle[d] with comparison shopping on financial products," and how Operator and other agents are "...tripped by pop-ups or logins, as well as prompts asking for email addresses and phone numbers for marketing purposes," which I think accurately describes most of the internet.
    • To summarize, The Information is saying that the above product will make OpenAI three billion dollars by the end of the year.
  • According to The Information's reporting, they believe that OpenAI will basically double revenue every single year for the next four years and make $13 billion in revenue in 2025, more than doubling that to $29 billion in 2026, nearly doubling that to $54 billion in 2027, nearly doubling that to $86 billion in 2028, and eventually hitting $125 billion in 2029.
    • Said revenue estimates, as of 2026, include billions of dollars of "new products" that include "free user monetization."
      • If you are wondering what that means, I have no idea. The Information does not explain. They do, however, say that "OpenAI won’t start generating much revenue from free users and other products until next year. In 2029, however, it projects revenue from free users and other products will reach $25 billion, or one-fifth of all revenue," and add that "shopping is another potential avenue."

I cannot express my disgust about how willing publications are to blindly publish projections like these, especially when they're so utterly ridiculous. Check out this quote:

OpenAI has already begun experimenting with launching software features for shopping. Starting in January, some users can access web-browsing agent Operator as part of their pro ChatGPT subscription tier to order groceries from Instacart and make restaurant reservations on OpenTable.

So you're saying this experimental software, launched to an indeterminate number of people and barely functional, is going to make OpenAI $13 billion in 2025, and $29 billion in 2026, and later down the line $125 billion in 2029? How? How?

What fucking universe are we all living in? There's no proof that OpenAI can do this other than the fact that it has a lot of users and venture capital! 

In fact, I think we have reason to worry about whether OpenAI even makes its current projections. In my last piece I wrote that Bloomberg had estimated that OpenAI would triple revenue to $12.7 billion in 2025, and based on its current subscriber base, OpenAI would have to effectively double its current subscription revenue and massively increase its API revenue to hit these targets.

These projections rely on one entity (SoftBank) spending $3 billion on OpenAI's services, meaning that it’d make enough API calls to generate more revenue than OpenAI made in subscriptions in the entirety of 2024, and something else that I can only describe as “an act of God.”

That, I admit, assumes that SoftBank’s spending commitment is based on usage, and not a flat fee (where SoftBank pays $3bn and gets a set — or infinite — level of access). Assuming it’s the former, I’d be stunned if SoftBank’s consumption hits $3bn this year, even with the massive cost of the reasoning models that Cristal Intelligence will be based on. SoftBank announced its deal with OpenAI in February.

Cristal Intelligence, if it works — and that is possibly the most load-bearing “if” of all time — will be a massive, complicated, ambitious product. Details are vague, but from what I understand, SoftBank wants to create an AI that handles the infinitely varied tasks that knowledge workers perform on a daily basis. 

To be clear, OpenAI’s agents cannot consistently do, well… anything.

What I believe is happening is that reporters are taking OpenAI's rapid growth in revenue from 2023 to 2024 (from tens of millions a month at the start of 2023 to $300 million in August 2024) to mean that the company will always effectively double or triple revenue every single year forever, with their evidence being "OpenAI has projected this will be the case."

It's bullshit! I'm sorry! As I wrote before, OpenAI effectively is the generative AI industry, and nothing about the rest of the generative AI industry suggests that the revenue exists to sustain these ridiculous, obscene and fantastical projections. Believing this — and yes, reporting it objectively is both endorsing and believing these numbers — is engaging in childlike logic, where you take one event (OpenAI's revenue grew 1700% from 2023 to 2024! Wow!) to mean another will take place (OpenAI will continue to double revenue literally every year! Wow!), consciously ignoring difficult questions such as "how?" and "what's the total addressable market of Large Language Model subscriptions, exactly?" and "how does this company even survive when it 'expects the costs of inference to triple this year to $6 billion alone'?"

Wait, wait, sorry, I need to be really clear with that last one. This is a direct quote from The Information:

The company also expects growth in inference costs—the costs of running AI products such as ChatGPT and underlying models—to moderate over the next half-decade. Those costs will triple this year, to about $6 billion and rise to nearly $47 billion in 2030. Still, the annual growth rate will fall to about 30% then.

Are you fucking kidding me?

Six billion fucking dollars for inference alone? Hey Casey, I thought those costs were coming down! Casey, are you there? Casey? Casey?????

Anyway, that's not great at all! That's really bad! The Information reports that OpenAI will make "about $8 billion" from subscriptions to ChatGPT in 2025, meaning that 75% of OpenAI's largest revenue source is eaten up by the cost of providing it. This is meant to be the cheaper part! This is the one fucking thing people say is meant to come down in price!

Are we living in different dimensions? Are there large parts of the tech media that have gas leaks in their offices? What am I missing? Tell me what I'm missing!

Nerr, Ed, you haven't talked to the people building these things, you don't know what you're- shut the fuck up! Shut up! I am sick and tired of people (like Casey!) suggesting that what's missing from my analysis is that I "interview people who work at these companies and understand how this technology works." What would these people say to me, exactly? What response would they have to these numbers?

Forgive Me I'm Going To Be A Little Rude

In fact, you know what, let me just sit down and go through the critiques one-by-one. Some of you are going to say I'm being rude to these people and it weakens my analysis, to which I respond "kiss my entire ass." I can beat you to death with the truth while making fun of you for believing stupid things.

  • The costs of inference are coming down: Source? Because it sure seems like they're increasing for OpenAI, and they're effectively the entire userbase of the generative AI industry! 
    • But DeepSeek… No, my sweet idiot child. DeepSeek is not OpenAI, and OpenAI’s latest models only get more expensive as time drags on. GPT-4.5 costs $75 per million input tokens, and $150 per million output tokens. And at the risk of repeating myself, OpenAI is effectively the generative AI industry — at least, for the world outside China. 
  • This is the company at its growth stage, it can simply "hit the button" and it'll all be profitable: You have the mind of a child! If this was the case, why would both Anthropic and OpenAI be losing so much money? Why are none of the hyperscalers making profit on AI? Why does nobody want to talk about the underlying economics?
  • These are the early days of AI: Wrong! We have the entire tech industry and more money than has ever been invested into anything piled into generative AI and the result has been utterly mediocre. Nobody's making money but NVIDIA!
  • They're already showing signs that it'll be powerful: No they aren't! If they were, there'd be people doing crazy, impressive things with them!
  • But Ed, really, it's the early days, it was just like this in the early days of the internet: No it wasn't! Read Jim Covello of Goldman Sachs' note from last year, the early days of the internet were absolutely nothing like this-
    • Smartphones! YES! Got you, Ed! Smartphones! People doubted those too- I am going to drown you in an icy lake! Covello's note also included an entire thing about how smartphones were fully telegraphed to analysts in advance, with "hundreds of presentations" that accurately fit how smartphones rolled out. No such roadmap exists for AI!
  • Heh, heh, Ed, you're so boned. Check out this article from Newsweek in 1995 where a guy says that the internet won't be a big business. This somehow proves that AI is going to be big, due to the fact one guy was wrong once: Motherfucker, have you read that piece? He basically says that the internet, at that time, was pretty limited, and yes, he conflated that with the idea that it wouldn't be big in the future. Clifford Stoll's piece also — as Michael Hiltzik wrote for the LA Times — was alarmingly accurate about misinformation and sleazy companies selling computerized replacements for education.
    • In any case, one guy saying that the internet won't be big doesn't mean a fucking thing about generative AI and you are a simpleton if you think it does. One guy being wrong in some way is not a response to my work. I will crush you like a bug.
    • Stoll's analysis also isn't based on hundreds of hours of research and endless reporting. Mine is! I will grab you from the ceiling like the Wallmaster from Zelda and you will never be heard from again.
  • OpenAI and Anthropic are research entities not businesses, they aren't focused on profit: Okay so are they just going to burn money forever? No, really, is that the case? Or do you think they hit the "be profitable" button sometime?

[Record Scratch] Wait a second...

  • OpenAI has as many as 800 million weekly active users! That's proof of adoption! Hey, woah, I get that you're really horny about this number, but something don't make no sense here! On March 31, 2025, OpenAI said that it had "...500 million people who use ChatGPT every week." Two weeks later, Sam Altman claimed that "something like 10% of the world uses our systems a lot," which the media took to mean that ChatGPT has 800 million weekly active users.
  • Here are the three ways to interpret this, and you tell me which one sounds real:
    • OpenAI's userbase increased by 300 million weekly active users in two weeks.
    • OpenAI understated its userbase by 300 million users in the announcement of its funding round on OpenAI dot com.
    • Sam Altman fucking lied.

I get that some members of the media have a weird attachment to this nasty little man, but have any of you ever considered that he just fucking says things, knowing you will print them with the kindest possible interpretation?

Sam Altman is a liar! He lies! He's lied before and he'll lie again!

But wait, Ed! Google says it has 350 million monthly active users on Gemini! Eat shit, Zitron! No, you eat shit! Yes, Google Gemini has 350 million monthly active users.

And that’s because Google started replacing Google Assistant with Google Gemini in early March! You are being had! You are being swindled! If Google replaced Google Search with Google Gemini, it would have billions of monthly active users!

Anyway, back to the critiques...

  • OpenAI having hundreds of millions of free users, each losing it money, is proof that the free version of ChatGPT is popular, largely because the entirety of the media has written about AI nonstop for two straight years and mentioned ChatGPT every single fucking time. Yes, there is a degree of marketing here, of partnerships, of word of mouth, of some degree of utility, but remove the non-stop free media campaign and ChatGPT would've petered out by now, along with this stupid fucking bubble.
    • But Ed it's proof of something right- yeah! It's proof that something is broken in society. Generative AI has never had the kind of meaningful business returns or utility that actually underpins something meaningful, but it has enough to make people give it a try.

You know what? Let's talk about why this bubble actually inflated!

So, let's start simple: the term "artificial intelligence" is bastardized to the point it effectively means nothing and everything at the same time. When people hear "AI" they think of an autonomous intelligence that can do things for them, and generative AI can "do things for you" like generate an image or text "from a simple prompt." As a result, it's easy to manipulate people who don't know much about tech into believing that this will naturally progress from "it can create a bunch of text for me that I have to write for my job just by me typing in a prompt" to "it can do my job for me just by typing in a prompt."

Basically everything you read about "the future of AI" extrapolates generative AI's ability to sort of generate something a human would make and turns it into do whatever a human can do, all because tech has, in the past, been bad at the beginning and linearly improved as time drags on. 

This illogical thinking underpins the entire generative AI boom, because we've found out exactly how many people do not know what the fuck they're talking about and are willing to believe the last semi-intelligent person they talked to. Generative AI is a remarkable con — a just-good-enough simulacrum of human expression to get it past the gatekeepers in finance and the media, knowing that neither will apply a second gear of critical thinking beyond "huh guess we're doing AI now."

The expectation that generative AI will transform into something much, much more powerful requires you to first ignore the existing limitations, believing it to be more capable than it is, and also ignore the fact that these models have yet to show meaningful improvement over the past few years. They still hallucinate. They’re still ungodly expensive to run. They’re still unreliable. And they still don’t do much.  

Worse still, ChatGPT's growth has galvanized these people into believing that this is a legitimate, meaningful movement, rather than the most successful PR campaign of all time.  

Think of it like this: if almost every single media outlet talked about one thing (generative AI), and that one thing was available from one company (OpenAI), wouldn't it look exactly how things look today? You've got OpenAI with hundreds of millions of monthly active users, and then a bunch of other companies — including big tech firms with multi-trillion dollar market caps — with somewhere between 10 and 69 million monthly active users.

What we're seeing is one company taking most of the users and money available and doing so because the media fucking helped them. People aren't amazed by ChatGPT — they're curious! They're curious about why the media won't shut up about it!

Everybody I talk to who uses ChatGPT regularly uses it as either a way to generate shitty limericks or as a replacement for Google Search, a product that Google has deliberately made worse as a means of increasing profits.

ChatGPT is, if I'm honest, better at processing search strings than Google Search, which is not so much a sign that ChatGPT is good at something as it is that Google has stopped innovating in any meaningful way. Over time, Google Search should've become something that was able to interpret your searches into the perfect result, which would require the company to improve how it processes your requests. Instead, Google Search has become dramatically worse, mostly because the company's incentives changed from "help people find something on the web" to "funnel as much traffic and show as many ad impressions as possible on Google.com."

By this point, Google Search should have been more magical, more capable of taking a dimwitted question and turning it into a great answer, with said answer being a result on the internet. Note that nothing I'm writing here is actually about generating a result — it's about processing a user's query and presenting an answer, the very foundation of computing and the thing that Google, at one point, was the best in the world at doing. Thanks to Prabhakar Raghavan, the former head of ads who led a coup to become head of search, Google was pulled away from being a meaningful source of information.

And I'd argue that ChatGPT filled that void by doing the thing that people wanted Google Search to do: answer a question, even if the user isn't really sure how to ask it. Google Search has become clunky, obfuscatory, putting the burden of using the service on the user rather than helping fill the gap between query and answer in any meaningful way. Google's AI summaries don't even try to do what ChatGPT does — they generate summaries based on search results and say "okay man, uhh, is this what you want?" 

One note on Google’s AI summaries: They’re designed to answer a question, rather than provide a right answer. That’s a distinction that needs to be made, because it speaks to the underlying utility of this product. 

One good illustration of this came earlier this week, when someone noticed that you could ask Google to explain the meaning of a completely made-up phrase, and it would dutifully obey. “Two dry frogs in a situation,” Google said, referred to a group of people in an awkward or difficult social situation. 

“Not every insect has a mortgage,” Google claimed, is a humorous way of explaining that not everything is as it seems. My favorite, “big winky on the skillet bowl,” is apparently a slang term that refers to a piece of bread with an egg in the middle.

Funny? Sure. But is it useful? No. 

With all its data and all its talent, Google has put the laziest version of a Large Language Model on top of a questionably-functional search product as a means of impressing shareholders.

None of this is to say that ChatGPT is good, just that it is better at understanding a user's request than Google Search.

Yes, I fundamentally believe that 500 million people a week could be using ChatGPT as some sort of search replacement, and no, I do not believe that's a functional business model, in part because if it was, ChatGPT would've been a functional business. 

That, and it appears that Google was only able to turn search into such a big business because it held a monopoly on search, search advertising, and the entire online ads industry. If the market were truly competitive and Google weren’t allowed to be vertically integrated with the entire digital advertising apparatus of the web, it would likely be making much less revenue per user. And that’s bad if your Google replacement costs many, many times more than Google to run.

As an aside: if you're wondering, no, OpenAI cannot "just create a Google Search competitor." SearchGPT will be significantly more expensive to run at Google's scale than ChatGPT — both infrastructurally and in the cost of revenue, with OpenAI forced to create a massive advertising arm that currently doesn't exist at the company.

People love the ChatGPT interface — the box where they can type one thing and get another thing out — because it resembles how everybody has always wanted Google Search to work. Does it actually work? Who knows. But people feel like they're getting more out of it.

Let's Talk About AGI Really Quick

This newsletter has been a break from the extremely deep and onerous analysis I've been on for the last few months, in part because I needed to have a little fun writing.

It also comes from a place of frustration. None of this has ever felt substantive or real because the actual things that you can do with generative AI never seem to come close to the things that people like Sam Altman and Dario Amodei seem to be promising, nor do they come close to the bullshit that people like Casey Newton and Kevin Roose are peddling. None of this ever resembled "artificial general intelligence," and if I'm honest, very little of it seems to even suggest it's a functional industry.

When cynical plants like Roose bumble around asking theoretical questions such as "do you think that there is a 50% chance or greater that AGI, defined as an AI system that outperforms human experts at virtually all cognitive tasks, will be built before 2030," we should all be terrified, not of AGI, but that the lead tech columnist at the New York Times appears to have an undiagnosed concussion. Roose's logic (as with Newton's) is based on the idea that he's talked to a bunch of people that say "yeah dude AGI is right around the corner" rather than any kind of proof or tangible evidence, just "the curve is going up."

Roose’s most egregious example of this company-forward credulousness came last week, when he published a thinly-veiled puff piece about what to do if AI models become conscious in the near future. He interviewed two people — both employed by Anthropic, with one holding the genuinely hilarious job description of “AI welfare researcher” — who said batshit things like “there’s only a small chance (maybe 15 percent or so) that Claude or another current A.I. system is conscious” and “It seems to me that if you find yourself in the situation of bringing some new class of being into existence… then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences.”

What makes this so appalling is that Roose acknowledges that this shit is seen by most level-headed people as nothing less than utter fantasy. He describes the concept of AI consciousness as “a taboo subject” and admits that many critics will see this as “crazy talk,” but doesn’t bother to speak to any actual critics. He does, however, speculate on the motives of said critics, saying that “they might object to an A.I. company’s studying consciousness in the first place, because it might create incentives to train their systems to act more sentient than they actually are.”

Yeah Kevin, wouldn’t it be terrible if a company somehow convinced someone that their AI was more powerful than it was? Also, do you bark at the mirror every time you walk past it because you think you see another guy?

Nothing about anything that Anthropic or OpenAI is building or shipping suggests we are anywhere near any kind of autonomous computing. They've used the concept of "AI safety" — and now, AI welfare — as a marketing term to convince people that their expensive, wasteful software will somehow become conscious because they're having discussions about what to do if it does so, and anyone — literally any reporter — accepting this at face value is doing their readers a disservice and embarrassing themselves in the process.

If AI safety advocates cared about, say, safety or AI, they'd have cared about the environmental impact, or the fact these models train using stolen material, or the fact that if these models actually delivered on their promises, they would deliver a shock to the labor market that would meaningfully hurt millions — if not billions — of people, and we don’t have anywhere near the social safety net to support them.

These companies don't care about your safety and they don't have any way to get to AGI. They are full of shit and it's time to start being honest that you don't have any proof they will do anything they say they will.

Oh, By The Way, The Bubble Might Be Bursting

Hey, remember in August of last year when I talked about the pale horses of the AIpocalypse? One of the major warning signs that the bubble was bursting was big tech firms reducing their capital expenditures, a call I've made before, with a little more clarity, on April 4, 2024:

While I hope I'm wrong, the calamity I fear is one where the massive over-investment in data centers is met with a lack of meaningful growth or profit, leading to the markets turning on the major cloud players that staked their future on unproven generative AI. If businesses don't adopt AI at scale — not experimentally, but at the core of their operations — the revenue is simply not there to sustain the hype, and once the market turns, it will turn hard, demanding efficiency and cutbacks that will lead to tens of thousands of job cuts.

We're about to find out if I'm right.

Last week, Yahoo Finance reported that analyst Josh Beck said that Amazon's generative AI revenue for Amazon Web Services would be $5 billion, a remarkably small sum that is A) not profit and B) a drop in the bucket compared to Amazon's projected $105 billion in capital expenditures in 2025, its $78.2 billion in 2024, or its $48.4 billion in 2023.

Is That Really It? Are you kidding me? Amazon will only make $5 billion from AI in 2025? What?

5 billion dollars? Five billion god damn dollars? Are you fucking kidding me? You'd make more money auctioning dogs! This is a disgrace! And if you're wondering, yes! All of this is for AI:

CEO Andy Jassy said in February that the vast majority of this year’s $100 billion in capital investments from the tech giant will go toward building out artificial intelligence capacity for its cloud segment, Amazon Web Services (AWS).

Well shit, I bet investors are gonna love this! Better save some money, Andy!

What's that? You already did? How?

Oh, shit! A report from Wells Fargo analysts (called "Data Centers: AWS Goes on Pause") says that Amazon has "paused a portion of its leasing discussions on the colocation side...[and while] it's not clear the magnitude of the pause...the positioning is similar to what [analysts have] heard recently from Microsoft, [that] they are digesting aggressive recent lease-up deals...pulling back from a pipeline of LOIs or SOQs."

Some asshole is going to say "LOIs and SOQs aren't a big deal," but they are. I wrote about it here.

"Digesting" in this case refers to when hyperscalers sit with their current capacity for a minute, and Wells Fargo adds that these periods typically last 6-12 months, though can be much shorter. It's not obvious how much capacity Amazon is walking away from, but they are walking away from capacity. It's happening.

But what if it wasn't just Amazon? Friend of the newsletter (read: people I email occasionally asking for a PDF) TD Cowen put out a report last week that, while titled in a way that suggested there wasn't a pullback, actually said there was.

Let's take a look at one damning quote:

...relative to the hyperscale demand backdrop at PTC, hyperscale demand has moderated a bit (driven by the Microsoft pullback and to a lesser extent Amazon, discussed below), particularly in Europe, 2) there has been a broader moderation in the urgency and speed with which the hyperscalers are looking to take down capacity, and 3) the number of large deals (i.e. +400MW deals) in the market appears to have moderated.

In plain English, this means "demand has come down, there's less urgency in building this stuff, and the market is slowing down." Cowen also added that it "...observed a moderation in the exuberance around the outlook for hyperscale demand which characterized the market this time last year."

Brother, isn't this meant to be the next big thing? We need more exuberance! Not less!

Worse still, Microsoft appears to have pulled back even further, with TD Cowen noting that there has been a "slowdown in demand," and that it saw "very little third-party leasing from Microsoft" this quarter, and, most damningly, and I'll bold this for effect, "these deals in totality suggest Microsoft's run-rate demand has decelerated materially," which, for those of you wondering, means it’s not getting the fucking demand for generative AI.

Well, at least Meta and Oracle aren't slowing down, right?

Well...

TD Cowen reported that it received "reverse inquiries from industry participants around a potential slowdown in demand from Oracle," leading the analyst to ask around and find that "there had been a NT (near-term) slowdown in decision-making amid organizational changes at Oracle," though it adds this might not mean that this is changing its needs or the speed at which it secures capacity. If you're wondering what else this could mean, you are correct to do so, because "slowing down" traditionally refers to a change in speed.

TD Cowen also adds that Meta has continued demand "albeit with less volume of MW (Megawatt) signings quarter-over-quarter..." then adding that "Meta's data center activity has historically been characterized by short periods of strong activity followed by digestion." In essence, Meta is signing fewer megawatts of compute and has, in the past, followed periods of aggressive buildouts with, well, fewer buildouts.

If I'm Wrong, How Am I Wrong Exactly?

I dunno man, all of this sure seems like the hyperscalers are reducing their capital expenditures at a time when tariffs and economic uncertainty are making investors more critical of revenues. It sure seems like nobody outside of OpenAI is making any real revenue on generative AI, and they're certainly not making a profit.

It also, at this point, is pretty obvious that generative AI isn't going to do much more than it does today. If Amazon is only making $5 billion in revenue from literally the only shiny new thing it has, sold on the world's premier cloud platform, at a time when businesses are hungry and desperate to integrate AI, then there's little chance this suddenly turns into a remarkable revenue-driver.

Amazon made $187.79 billion in revenue in its last quarter, and if $5 billion is all it’s making at the very height of the bubble, it heavily suggests that there may not actually be that much money to make, either because it's too expensive to run these services or because these services don't have the kind of total addressable market that the rest of Amazon's businesses do.

Microsoft reported that it was making a paltry $13 billion a year — so the equivalent of $3.25 billion a quarter — selling generative AI services and model access. The Information reported that Salesforce's "Agentforce" bullshit isn't even going to boost sales growth in 2025, in part because it’s pitching it as "digital labor that can essentially replace humans for tasks" and it turns out that it doesn't do that very well at all, costs $2 a conversation, and requires paying Salesforce to use its "data cloud" product.

What, if anything, suggests that I'm wrong here? That things have worked out in the past with things like the Internet and smartphones, and so it surely must happen for generative AI and, by extension, OpenAI? That companies like Uber lost money and eventually worked out (see my response here)? That OpenAI is growing fast, and that somehow discounts the fact it burns billions of dollars and does not appear to have any path to making a profit? That agents will suddenly start working and everything will be fine?

It's a fucking joke and I'm tired of it!

Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot. Silicon Valley is dominated by management consultants that no longer know what innovation looks like, tricked by Sam Altman, a savvy con artist who took advantage of tech’s desperation for growth. 

Generative AI is the perfected nihilistic form of tech bubbles — a way for people to spend a lot of money and power on cloud compute because they don’t have anything better to do. Large Language Models are boring, unprofitable cloud software stretched to their limits — both ethically and technologically — to prop up tech’s collapsing growth era, OpenAI’s non-profit mission fattened up to make foie gras for SaaS companies to upsell their clients and cloud compute companies to sell GPUs at an hourly rate.

The Rot Economy has consumed the tech industry. Every American tech firm has become corrupted by the growth-at-all-costs mindset, and thus they no longer know how to make sustainable businesses that solve real problems, largely because the people that run them haven’t experienced them for decades. 

As a result, none of them were ready for when Sam Altman tricked them into believing he was their savior. 

Generative AI isn’t about helping you or me do things — it’s about making new SKUs, new monthly subscription costs for consumers and enterprises, new ways to convince people to pay more for the things they already use, made slightly different in a way that often ends up being worse.

Only an industry out of options would choose this bubble, and the punishment for doing so will be grim. I don’t know if you think I’m wrong or not. I don’t know if you think I’m crazy for the way I communicate about this industry. Even if you think I am, think long and hard about why it is you disagree with me, and the consequences of me being wrong. 

There is nothing else after generative AI. There are no other hypergrowth markets left in tech. SaaS companies are out of things to upsell. Google, Microsoft, Amazon and Meta do not have any other ways to continue showing growth, and when the market works that out, there will be hell to pay, hell that will reverberate through the valuations of, at the very least, every public software company, and many of the hardware ones too.

And I fear it'll go much further, too. The longer this bubble inflates — the longer everybody pretends — the worse the consequences will be.
