Soundtrack: Mack Glocky - Chasing Cars
Last week, I spent a great deal of time and words framing the generative AI industry as a cynical con where OpenAI's Sam Altman and Anthropic's Dario Amodei have used a compliant media and braindead investors to frame unprofitable, unsustainable, environmentally-damaging and mediocre cloud software as some sort of powerful, futuristic automation.
Yet as I prepared a script for Better Offline (and discussed it with my buddy Kasey, as I often do), I kept coming back to one thought: where's the money?
No, really, where is it? Where is the money that this supposedly revolutionary, world-changing industry is making, and will make?
The answer is simple: I do not believe it exists. Generative AI lacks the basic unit economics, product-market fit, or market penetration associated with any meaningful software boom, and outside of OpenAI, the industry may be pathetically, hopelessly small, all while providing few meaningful business returns and constantly losing money.
I am deeply worried about this industry, and I need you to know why.
On Unit Economics and Generative AI
Putting aside the hype and bluster, OpenAI — as with all generative AI model developers — loses money on every single prompt and output. Its products do not scale like traditional software, in that the more users it gets, the more expensive its services are to run because its models are so compute-intensive.
For example, ChatGPT having 400 million weekly active users is not the same thing as a traditional app like Instagram or Facebook having that many users. The cost of serving a regular user of an app like Instagram is significantly smaller, because these are, effectively, websites with connecting APIs, images, videos and user interactions. These platforms aren’t innately compute-heavy, at least to the same extent as generative AI, and so you don’t require the same level of infrastructure to support the same amount of people.
Conversely, generative AI requires expensive-to-buy and expensive-to-run GPUs, both for inference and for training the models themselves. These GPUs must be run at full tilt, which shortens their lifespan while consuming ungodly amounts of energy. And surrounding each GPU is the rest of the computer, which is usually highly-specced, and thus, expensive.
These models also require endless amounts of training data, supplies of which have been running out for a long time. While synthetic data might bridge some of the gap, at least in situations where there’s a definitive right and wrong answer (like a mathematical problem), there are likely diminishing returns due to the sheer amount of data necessary to make a large language model even larger — data amounting to more than four times the size of the internet.
These companies also must spend hundreds of millions of dollars on salaries to attract and retain AI talent — as much as $1.5 billion a year in OpenAI's case (before stock-based compensation). In 2016, Microsoft claimed that top AI talent could cost as much as an NFL quarterback to hire, and that sum has likely only increased since then, given the generative AI frenzy.
As an aside: One analyst told the Wall Street Journal that companies running generative AI models "could be utilizing half of [their] capital expenditure[s]...because all of these things could break down." As in it’s possible hyperscalers could spend 50% of their capital expenditures replacing broken stuff.
Though these hardware replacement costs are not a direct burden on OpenAI or Anthropic, they absolutely are on Microsoft, Google and Amazon, which own and operate the data centers.
As a result of the costs of running these services, a free user of ChatGPT is a cost burden on OpenAI, as is every free customer of Google's Gemini, Anthropic's Claude, Perplexity, or any other generative AI company.
Said costs are also so severe that even paying customers lose these companies money. Even the most successful company in the business appears to have no way to stop burning money — and as I'll explain, there's only one real company in this industry, OpenAI, and it is most decidedly not a real business.
OpenAI Spent $9 Billion To Make $4 Billion In 2024, and the Entirety of Its Revenue ($4 Billion) Is Consumed by Compute Alone ($2 Billion To Run Models, $3 Billion To Train Them)
As a note — I have repeatedly said OpenAI lost $5 billion after revenue in 2024. However, I can no longer in good conscience suggest that it burned “only” $5 billion. It’s time to be honest about these numbers. While it’s fair to say that OpenAI’s “net losses” are $5 billion, it’s time to be clear about what it costs to run this company.
- 2024 Revenue: According to reporting by The Information, OpenAI's revenue was likely somewhere in the region of $4 billion.
- Burn Rate: The Information also reports that OpenAI lost $5 billion after revenue in 2024, excluding stock-based compensation, which OpenAI, like other startups, uses as a means of compensation on top of cash. Nevertheless, the more equity it gives to employees, the less it has to offer in future capital raises. To put this in blunt terms, based on reporting by The Information, running OpenAI cost $9 billion in 2024. The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes the rest, and then some. It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue to run the software it sells, before any other costs.
- OpenAI also spends an alarming amount of money on salaries — over $700 million in 2024 before you consider stock-based compensation — a number that will only increase, because “growing” means “hiring as many people as possible,” and it’s paying through the nose.
- How Does It Make Money: The majority of its revenue (70+%) comes from subscriptions to premium versions of ChatGPT, with the rest coming from selling access to its models via its API.
- The Information also reported that OpenAI now has 15.5 million paying subscribers, though it's unclear what level of OpenAI's premium products they're paying for, or how “sticky” those customers are, or the cost of customer acquisition, or any other metric that would tell us how valuable those customers are to the bottom line. Nevertheless, OpenAI loses money on every single paying customer, just like with its free users. Increasing paid subscribers also, somehow, increases OpenAI's burn rate. This is not a real company.
The New York Times reports that OpenAI projects it'll make $11.6 billion in 2025, and assuming that OpenAI burns at the same rate it did in 2024 — spending $2.25 to make $1 — OpenAI is on course to burn over $26 billion in 2025 for a loss of around $14.5 billion. Who knows what its actual costs will be, and as a private company (or, more accurately, entity, as for the moment it remains a weird for-profit/nonprofit hybrid) it’s not obligated to disclose its financials. The only information we’ll get will come from leaked documents and dogged reporting, like the excellent work from The New York Times and The Information cited above.
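If you want to check my math, here's that projection as a trivial Python sketch, with the big assumption labeled plainly: that OpenAI's cost-to-revenue ratio in 2025 matches its 2024 one.

```python
# Back-of-the-envelope projection of OpenAI's 2025 burn, using the
# reported figures above. These are reported estimates, not disclosed
# financials, and the 2025 line assumes the 2024 burn ratio holds.

revenue_2024 = 4.0   # $bn, per The Information
costs_2024 = 9.0     # $bn total spend ($4bn revenue + $5bn net loss)

cost_per_dollar = costs_2024 / revenue_2024  # $2.25 spent per $1 made

revenue_2025 = 11.6  # $bn, OpenAI's projection per The New York Times
costs_2025 = revenue_2025 * cost_per_dollar
loss_2025 = costs_2025 - revenue_2025

print(f"Projected 2025 costs: ${costs_2025:.1f}bn")  # ~$26.1bn
print(f"Projected 2025 loss:  ${loss_2025:.1f}bn")   # ~$14.5bn
```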
It's also important to note that OpenAI's costs are partially subsidized by its relationship with Microsoft, which provides cloud compute credits for its Azure service, which is also offered to OpenAI at a discount. Or, put another way, it’s like OpenAI got paid with airmiles, but the airline lowered the redemption cost of booking a flight with those airmiles, allowing it to take more flights than another person with the equivalent amount of points. At this point, it isn’t clear if OpenAI is still drawing down the billions of credits it received from Microsoft in 2023 or whether it’s had to start spending cold, hard cash.
Until recently, OpenAI exclusively used Microsoft's Azure services to train, host, and run its models, but recent changes to the deal mean that OpenAI is now working with Oracle to build out further data centers to do so. The end of the exclusivity agreement is reportedly due to a deterioration of the chummy relationship between OpenAI and Redmond, according to The Wall Street Journal, with Microsoft allegedly growing tired of OpenAI’s constant demands for more compute, and OpenAI feeling as though Microsoft had failed to live up to its obligations to provide the resources needed to sustain its growth.
It is unclear whether this partnership with Oracle will work in the same way as the Microsoft deal. If not, OpenAI’s operating costs will only go up. Per reporting from The Information, OpenAI pays roughly a third of the standard cost of Azure’s GPU compute as part of its deal with Microsoft — around $1.30-per-GPU-per-hour versus the regular Azure cost of $3.40 to $4.
On User Numbers
OpenAI recently announced that it has 400 million weekly active users.
Weekly Active Users can refer to any seven-day period in a month, meaning that OpenAI can effectively use any spike in traffic to say that it’s “increased its weekly active users,” because it can choose the best seven-day period in a month. This isn’t to say they aren’t “big,” but these numbers are easy to game.
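To illustrate, here's a toy example with entirely invented daily numbers, using average daily actives as a rough proxy for a window's unique users. One spike sets the headline figure for the whole month:

```python
# Hypothetical illustration of why "weekly active users" is easy to
# game: if you can pick ANY seven-day window in a month, a single
# traffic spike becomes your headline number. All figures invented.

daily_actives = [80, 82, 81, 85, 84, 83, 82,   # an ordinary week (millions)
                 150, 95, 88, 84, 83, 82, 81,  # a launch-day spike
                 80, 81, 82, 80, 79, 81, 80,
                 79, 80, 81, 80, 82, 81, 80]

# Average daily actives per window, as a rough proxy for weekly uniques:
windows = [sum(daily_actives[i:i + 7]) / 7
           for i in range(len(daily_actives) - 6)]

print(f"An ordinary week:     ~{windows[0]:.0f}M")    # ~82M
print(f"The best-case window: ~{max(windows):.0f}M")  # ~95M, the press-release number
```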
When I asked OpenAI to define what a “weekly active user” was, it responded by pointing me to a tweet by Chief Operating Officer Brad Lightcap that said “ChatGPT recently crossed 400M WAU, we feel very fortunate to serve 5% of the world every week.” It is extremely questionable that it refuses to define this core metric, and without a definition, in my opinion, there is no way to assume anything other than the fact that OpenAI is actively gaming its numbers.
There are likely two reasons it focuses on weekly active users:
- As I described, these numbers are easy to game.
- The majority of OpenAI’s revenue comes from paid subscriptions to ChatGPT.
The latter point is crucial, because it suggests OpenAI is not doing anywhere near as well as it seems based on the very basic metrics used to measure the success of a software product.
The Information reported on January 31st that OpenAI had 15.5 million monthly paying subscribers, and immediately added that this was a “less than 5% conversion rate” of OpenAI’s weekly active users — a statement that is much like dividing the number 52 by the letter A. This is not an honest or reasonable way to evaluate the success of ChatGPT’s (still unprofitable) software business, because the correct calculation would divide paying subscribers by MONTHLY active users, a number that would be considerably higher than 400 million.
Based on data from market intelligence firm Sensor Tower, OpenAI’s ChatGPT app (on Android and iOS) is estimated to have had more than 339 million monthly active users, and based on traffic data from market intelligence company Similarweb, ChatGPT.com had 246 million unique monthly visitors. There’s likely some crossover, with people using both the mobile and web interfaces, though how big that group is remains uncertain.
Though not every person that visits ChatGPT.com becomes a user, it’s safe to assume that ChatGPT’s Monthly Active Users are somewhere in the region of 500-600 million.
That’s good, right? Its actual users are higher than officially claimed? Er, no. First, each user is a financial drain on the company, whether they’re a free or paid user.
It would also suggest a conversion rate of 2.583% from free to paid users on ChatGPT — an astonishingly bad number, one made worse by the fact that every single user of ChatGPT, regardless of whether they pay, loses the company money.
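Here's that arithmetic, for those following along (the 600 million denominator is the top end of my own estimate above):

```python
# Conversion math from the paragraphs above. The 600M denominator is
# the upper bound of the piece's own 500-600M monthly-active estimate.

paying_subscribers = 15.5e6  # per The Information
estimated_mau = 600e6        # estimated monthly actives, app + web

print(f"{paying_subscribers / estimated_mau:.3%}")   # 2.583%, against MONTHLY actives

# For contrast, the misleading version: paying subscribers divided by
# WEEKLY active users, the number OpenAI actually promotes.
weekly_actives = 400e6
print(f"{paying_subscribers / weekly_actives:.3%}")  # 3.875%, i.e. "less than 5%"
```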
It also feeds into a point I’ve repeatedly made in this newsletter, and in my podcast. Generative AI isn’t that useful. If Generative AI was genuinely this game-changing technology that makes it possible to simplify your life and your work, you’d surely fork over the $20 monthly fee for unlimited access to OpenAI’s more powerful models. I imagine many of those users are, at best, infrequent, opening up ChatGPT out of curiosity or to do basic things, and don’t have anywhere near the same levels of engagement as with any other SaaS app.
While it's quite common for Silicon Valley companies to play fast and loose with metrics, this particular one is deeply concerning, and I hypothesize that OpenAI choosing to go with Weekly versus Monthly Active Users is an intentional attempt to avoid people calculating the conversion rate of its subscription products. As I will continue to repeat, these subscription products lose the company money.
Mea Culpa: My previous piece focused entirely on web traffic to ChatGPT.com, and did not have the data I now have related to app downloads. Nevertheless, it isn't obvious whether OpenAI is being honest about its weekly active users, because it won't even define how it measures them.
On Product Strategy
- OpenAI makes most of its money from subscriptions (approximately $3 billion in 2024) and the rest on API access to its models (approximately $1 billion).
- As a result, OpenAI has chosen to monetize ChatGPT and its associated products in an all-you-can-eat software subscription model, or otherwise make money by other people productizing it. In both of these scenarios, OpenAI loses money.
- OpenAI's products are not fundamentally differentiated or interesting enough to be sold separately. It has failed — as with the rest of the generative AI industry — to meaningfully productize its models due to their massive training and operational costs and a lack of any meaningful "killer app" use cases.
- The only product that OpenAI has succeeded in scaling to the mass market is the free version of ChatGPT, which loses the company money with every prompt. This scale isn't a result of any kind of product-market fit. It's entirely media-driven, with reporters making "ChatGPT" synonymous with "artificial intelligence."
- As a result, I do not believe that generative AI is a "real" industry — which I define as one with multiple competitive companies with sustainable revenue streams and meaningful products with actual market penetration — because it is entirely subsidized by a combination of venture capital and hyperscaler cloud credits.
- ChatGPT is popular because it is the only well-known product, one that's mentioned in basically every article on artificial intelligence. If this were a "real" industry, other competitors would have similar scale — especially those run by hyperscalers — but as I'll get to later, data suggests that OpenAI is the only company with any significant user base in the entire generative AI industry, and it is still wildly unprofitable and unsustainable.
- OpenAI's models have been almost entirely commoditized. Even its reasoning model o1 has been commoditized by both DeepSeek's R1 model and Perplexity's R1 1776 model, both of which offer similar outcomes at a much-discounted price, though it's unclear (and in my opinion unlikely) that these models are profitable to run.
- OpenAI, as a company, is piss-poor at product. It's been two years, and ChatGPT mostly does the same things it did at launch, still costs more to run than it makes, and is ultimately indistinguishable from every other LLM chatbot from every other generative AI company.
- Moreover, OpenAI (like every other generative AI model developer) is incapable of solving the critical flaw with ChatGPT, namely its tendency to hallucinate — where it asserts something to be true, when it isn’t. This makes it a non-starter for most business customers, where (obviously) what you write has to be true.
- Case in point: A BBC investigation just found that half of all AI-generated news summaries have some kind of “significant” issue, whether that be hallucinated facts, editorialization, or references to outdated information.
- And the reason why OpenAI hasn’t fixed the hallucination problem isn’t that it doesn’t want to, but that it can’t. Hallucinations are an inevitable side-effect of LLMs as a whole.
- The fact that nobody has managed to make a mass market product by building on OpenAI's models also suggests that the use cases just aren't there for mass market products powered by generative AI.
- Furthermore, the fact that API access is such a small part of its revenue suggests that the market for actually implementing Large Language Models is relatively small. If the biggest player in the space only made a billion dollars in 2024 selling access to its models (unprofitably), and that amount is the minority of its revenue, there may not actually be a real industry here.
- These realities — the lack of utility and product differentiation — also mean that OpenAI can’t raise its prices above the breakeven point, which would also likely make its generative AI unaffordable and unattractive to both business and personal customers.
Counterpoint: OpenAI has a new series of products that could open up new revenue streams, such as Operator, its "agent" product, and Deep Research, its research product.
- On costs: Both of these products are very compute intensive.
- Operator uses OpenAI's "Computer-Using Agent (CUA)," which combines OpenAI's models with virtual machines that take distinct actions on web pages in an extremely unreliable way. Failures will either increase the number of attempts a user makes to use Operator or make users never use it again.
- Deep Research uses a version of OpenAI's "o3" reasoning model, a model so expensive (because it spends more time generating a response, reconsidering and evaluating steps as it goes) that OpenAI will no longer launch it as a standalone model.
- On Product-Market Fit:
- Using Operator or Deep Research currently requires ChatGPT Pro, OpenAI's $200-a-month subscription.
- Sam Altman has revealed that the $200-a-month subscription, much like the rest of OpenAI’s subscriptions, loses money because "people are using it more than expected."
- Furthermore, even on Pro, Deep Research is currently limited to 100 queries per month, with Altman adding that it is "very compute-intensive and slow."
- Though Altman has promised that ChatGPT Plus and free users will eventually get access to a few Deep Research queries a month, this will only increase its cash burn further!
- As a product, Operator barely works. As I covered a few weeks ago, this product — which claims to control your computer and does not appear to be able to do so consistently — is not even close to ready for prime time, nor do I think it has a market.
- Deep Research has already been commoditized, with Perplexity and xAI launching their own versions almost immediately.
- Deep Research is also not a good product. As I covered last week, the quality of writing that you receive from a Deep Research report is terrible, rivaled only by the appalling quality of its citations, which include forum posts and Search Engine Optimized content instead of actual news sources. These reports are neither "deep" nor well researched, and cost OpenAI a great deal of money to deliver.
- On Revenue
- Both Operator and Deep Research currently require you to pay for a $200-a-month subscription that loses the company money.
- Neither product is sold on its own, and while they may drive revenue to the ChatGPT Pro product, as noted above, that product loses OpenAI money.
- These products are compute-intensive and have questionable outputs, making each prompt from a user both expensive and likely to be followed up with further prompts to get the outputs the user desired. As generative models don't "know" anything and are probabilistically generating answers, they are poor arbiters of quality information.
In summary, both Operator and Deep Research are expensive products to maintain, are sold through an expensive $200-a-month subscription that (like every other service provided by OpenAI) loses the company money, and due to the low quality of their outputs and actions are likely to increase user engagement to try and get the desired output, incurring further costs for OpenAI.
On The Future Prospects for OpenAI
- A week or two ago, Sam Altman announced the updated roadmap for GPT-4.5 and GPT-5.
- GPT-4.5 will be OpenAI's "last non-chain-of-thought model," chain-of-thought being the core functionality of its reasoning models.
- GPT-5 will be, and I quote Altman, "a system that integrates a lot of our technology, including o3."
- Altman also vaguely suggests that paid subscribers will be able to run GPT-5 at "a higher level of intelligence," which likely refers to being able to ask the models to spend more time computing an answer. He also suggests that it will "incorporate voice, canvas, search, deep research, and more."
- Both of these statements vary from vague to meaningless, but I hypothesize the following:
- GPT-4.5 (the model codenamed Orion) will be an upgraded version of GPT-4o, OpenAI's foundation model.
- GPT-5 (which used to be called Orion) could be just about anything, but one thing that Altman mentioned in the tweet is that OpenAI's model offerings had gotten too complicated, and that it would be doing away with the ability to pick what model you used, gussying this up by claiming this was "unified intelligence."
- As a result of doing away with the model picker, I hypothesize that OpenAI will now attempt to moderate costs by picking which model will work best for a given prompt — a process it will automate to questionable results (see the sketch after this list).
- I believe that this announcement is a very bad omen for OpenAI. Orion has been in the works for more than 20 months and was meant to be released at the end of last year, but was delayed due to multiple training runs that resulted in, to quote the Wall Street Journal, "software [that] fell short of the results researchers were hoping for."
- As an aside, The Wall Street Journal refers to Orion as "GPT-5," but based on the copy and Altman's comments, I believe "Orion" refers to the foundation model. OpenAI appears to be calling a hodgepodge of different other models "GPT-5" now.
- The Journal further adds that as of December Orion "perform[ed] better than OpenAI’s current offerings, but [hadn't] advanced enough to justify the enormous cost of keeping the new model running," with each six-month-long training run — no matter its efficacy — costing around $500 million.
- OpenAI also, like every generative AI company, is running out of high-quality training data necessary to make the model "smarter" (based on benchmarks specifically made to make LLMs seem smart) — and note that "smarter" doesn't mean "new functionality."
- Sam Altman demoting Orion from GPT-5 to GPT-4.5 suggests that OpenAI has hit a wall with making its next model, requiring him to lower expectations for a model that OpenAI Japan president Tadao Nagasaki had suggested would "aim for 100 times more computational volume than GPT-4," which some took to mean "100 times more powerful" when it actually means "it will take way more computation to train or run inference on it."
- If Sam Altman, a man who loves to lie, is trying to reduce expectations for a product, you should be worried.
- Large Language Models — which are trained by feeding them massive amounts of training data and then reinforcing their understanding through further training runs — are hitting the point of diminishing returns. In simple terms, to quote Max Zeff of TechCrunch, "everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god."
- OpenAI's real advantage, other than the fact it’s captured the entire tech media, has been its relationship with Microsoft, because access to huge amounts of compute and capital allowed it to corner the market for making the Largest Language Model.
- Now that it's pretty obvious this isn't going to keep working, OpenAI is scrambling, especially now that DeepSeek has commoditized reasoning models and proved that you can build Large Language Models without the latest GPUs.
- It's unclear what the functionality of GPT-4.5 or GPT-5 will be. Does the market care about an even-more-powerful Large Language Model if said power doesn't lead to an actual product? Does the market care if "unified intelligence" just means stapling together various models to produce outputs?
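On the model-routing point from the list above: nobody outside OpenAI knows how such a router would work, but the cost incentive behind it is easy to sketch. What follows is purely illustrative; the model names, prices, and keyword heuristic are all invented, not anything OpenAI has disclosed.

```python
# A hypothetical sketch of a cost-moderating model router, of the kind
# I hypothesize "unified intelligence" amounts to. NOT OpenAI's actual
# system; names and prices are invented for illustration.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative dollars, not real pricing

CHEAP = Model("small-chat-model", 0.0005)
EXPENSIVE = Model("reasoning-model", 0.06)

def route(prompt: str) -> Model:
    """Send each prompt to the cheapest model that plausibly handles it.

    A real router would presumably use a trained classifier; this
    keyword check just illustrates the incentive: every prompt kept
    on the cheap model moderates a loss-making compute bill.
    """
    hard_markers = ("prove", "step by step", "plan out", "analyze")
    if any(marker in prompt.lower() for marker in hard_markers):
        return EXPENSIVE
    return CHEAP

print(route("What's the capital of France?").name)        # small-chat-model
print(route("Analyze this contract step by step.").name)  # reasoning-model
```

The catch, of course, is the "questionable results" part: a router that misjudges a prompt either burns expensive compute on a trivial question, or hands a hard question to a model that can't handle it.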
As it stands, OpenAI has effectively no moat beyond its industrial capacity to train Large Language Models and its presence in the media. It can have as many users as it wants, but it doesn't matter because it loses billions of dollars, and appears to be continuing to follow the money-losing Large Language Model paradigm, guaranteeing it’ll lose billions more.
Is Generative AI A Real Industry?
The Large Language Model paradigm is also yet to produce a successful, mass market product, and no, Large Language Models are not successful or mass market. I know, I know, you're going to say ChatGPT is huge, we've already been through that, but surely, if generative AI was a real industry, there'd be multiple other players with massive customer bases as a result of how revolutionary it was, right?
Right?
Wrong!
Let's look at some estimated numbers from data intelligence firm Sensor Tower (monthly active users on apps) and Similarweb (unique monthly active visitors) for the biggest players in AI in January 2025:
- OpenAI's ChatGPT: 339 million monthly active users on the ChatGPT app, 246 million unique monthly visitors to ChatGPT.com.
- Microsoft Copilot: 11 million monthly active users on the Copilot app, 15.6 million unique monthly visitors to copilot.microsoft.com.
- Google Gemini: 18 million monthly active users on the Gemini app, 47.3 million unique monthly visitors.
- Anthropic's Claude: Two million (!) monthly active users on the Claude app, 8.2 million unique monthly visitors to claude.ai.
- Perplexity: Eight million monthly active users on the Perplexity app, 10.6 million unique monthly visitors to Perplexity.ai.
- DeepSeek: 27 million monthly active users on the DeepSeek app, 79.9 million unique monthly visitors to DeepSeek.com.
- This figure doesn’t capture DeepSeek’s China-based users, who (at least, on mobile) access the app through a variety of marketplaces. From what I can tell, the DeepSeek app has nearly 10 million downloads on the Vivo store — just one of many Android app marketplaces serving Mainland China, and not even one of the biggest.
- This isn’t surprising. China is a huge market, and it’s also one that’s incredibly hard for non-Chinese companies to enter, especially when you’re potentially dealing in content that’s incredibly sensitive or prohibited in China. That’s why Western social media and search companies are nowhere to be found in China, and the same is true for AI.
- For the sake of simplicity, assume that all these numbers mentioned earlier refer to users outside of China, where most — if not all — of the Western-made chatbots are blocked by the Great Firewall.
To put this in perspective, the entire combined monthly active users of the Copilot, Claude, Gemini, DeepSeek, and Perplexity apps amount to 66 million, or 19.47% of the entire monthly active users of ChatGPT's mobile app. Web traffic slightly improves things (I say sarcastically), with the 161.6 million unique monthly visitors that visited the websites for Copilot, Claude, Gemini, DeepSeek and Perplexity making up 65.69% of all of the traffic that went to ChatGPT.com.
However, I'd argue that including DeepSeek vastly over-inflates these numbers. It’s an outlier, and it’s also a relatively new company that’s enjoying its moment in the sun, basking in the glow of a post-launch traffic spike, and a flood of favorable media coverage. I imagine that when the dust settles in a few months, we’ll get a more reliable idea of its market share and consistent user base.
Without DeepSeek, the remaining generative AI services made up a total of 39 million monthly active users across their apps, and a grand total of 81.7 million unique monthly web visitors.
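Here's the arithmetic behind those percentages, using the Sensor Tower (app) and Similarweb (web) estimates listed above:

```python
# Market-size sums from the figures above: app monthly active users
# (Sensor Tower) and unique monthly web visitors (Similarweb), in millions.

app_mau = {"Copilot": 11, "Gemini": 18, "Claude": 2,
           "Perplexity": 8, "DeepSeek": 27}
web_visitors = {"Copilot": 15.6, "Gemini": 47.3, "Claude": 8.2,
                "Perplexity": 10.6, "DeepSeek": 79.9}

chatgpt_app, chatgpt_web = 339, 246

print(sum(app_mau.values()))                              # 66M combined app MAU
print(f"{sum(app_mau.values()) / chatgpt_app:.2%}")       # 19.47% of ChatGPT's app MAU
print(f"{sum(web_visitors.values()) / chatgpt_web:.2%}")  # 65.69% of ChatGPT.com's traffic

# And with the DeepSeek launch spike excluded:
print(sum(v for k, v in app_mau.items() if k != "DeepSeek"))                # 39M
print(f"{sum(v for k, v in web_visitors.items() if k != 'DeepSeek'):.1f}")  # 81.7M
```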
Without ChatGPT, it appears that the entire generative AI app market is a little more than half the size of Pokémon Go at its peak, when it had 147 million monthly active users. While one can say I missed a few apps — xAI's Grok, Amazon's Rufus, or Character.ai — there isn't a chance in hell they cover the shortfall.
These numbers aren't simply piss poor, they're a sign that the market for generative AI is incredibly small, and, given that every single one of these apps only ever loses money, actively harmful to their respective investors or owners.
I do not think this is a real industry, and I believe that if we pulled the plug on the venture capital aspect tomorrow it would evaporate.
On API Calls
Another counter to my argument is that API calls are a kind of “hidden adoption” — that there is this massive swell of engaged, happy customers using generative AI that aren’t using any of the major apps, and that the connection to these models is the real secret success story.
This isn’t the case.
OpenAI, as I’ve established, is the largest player in generative AI, making more revenue (roughly $4 billion in 2024, though it lost $5 billion after revenue — and, again, running it cost $9 billion) than any other private AI company. The closest I can get to an estimate of how many developers actually integrate its models into their applications is a statement from its October 2024 DevDay, where OpenAI said over three million developers are building apps using its models.
Again, that’s a very fuzzy — and unreliable — metric. I imagine a significant chunk of those developers are hobbyists working on personal projects, or simply playing around with the service out of sheer curiosity, spending a few bucks to write the generative AI equivalent of “Hello World,” and then moving on with their lives. Those developers actually using OpenAI’s APIs in actual commercial projects likely represent a vanishingly small percentage of that three million.
As I’ve discussed in the past, OpenAI’s revenue is heavily weighted toward its subscription business, with licensing access to models like GPT-4o making up less than 30% (around $1 billion) of its total, and subscriptions to its premium products (ChatGPT Plus, Teams, Business, Pro, the newly-released Government plan, etc.) making up the majority — around $3 billion in 2024.
My argument is fairly simple. OpenAI is the most well-known player in generative AI, and thus we can extrapolate from it to draw conclusions about the wider industry. In the event that there was a huge, meaningful industry integrating generative AI into distinct products with mass-market consumer adoption, OpenAI’s API business would be doing far, far more revenue.
Let me be a little more specific about why API calls matter.
When a business plugs OpenAI’s models into its apps and a customer triggers a feature that uses it — such as asking the app to summarize an email — OpenAI charges the business both for the prompt (the input) and the result (the output). As a result, where “weekly active users” might be indicative of attention to OpenAI’s products, API calls are far more indicative of consumer and enterprise adoption.
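As a rough sketch of that billing model (the per-token prices below are illustrative placeholders, not OpenAI's actual rates), a single "summarize this email" click looks something like this:

```python
# Rough sketch of per-call API billing: the business pays for tokens in
# (the prompt) and tokens out (the completion). Prices are placeholders.

PRICE_PER_1M_INPUT_TOKENS = 2.50    # illustrative dollars
PRICE_PER_1M_OUTPUT_TOKENS = 10.00  # output tokens typically cost more

def api_call_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollars a business pays the model vendor for one feature use."""
    return (input_tokens / 1e6 * PRICE_PER_1M_INPUT_TOKENS
            + output_tokens / 1e6 * PRICE_PER_1M_OUTPUT_TOKENS)

# One "summarize this email" click: a long email in, a short summary out.
print(f"${api_call_cost(input_tokens=1200, output_tokens=150):.5f}")  # $0.00450
```

Fractions of a cent per click, which is exactly why API revenue is a decent proxy for how often these features actually get used: if millions of people were genuinely hammering these integrations, those fractions would add up to far more than a billion dollars.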
To be clear, I acknowledge that there are a lot — a non-specific amount, but a fair amount — of app developers and companies adopting generative AI. However, judging by the revenue of OpenAI’s developer-focused business and the lack of any real revenue at any business integrating generative AI, I hypothesize that customers — which include developers integrating OpenAI’s models into both consumer-facing and enterprise-focused apps — are not actually using these features that much.
I should also add that OpenAI makes about $200 million a year selling its models through Microsoft, meaning that its API business may be as small as $800 million. Again, this is not profit, it is revenue.
Sidebar: There is, of course, an alternative: that OpenAI is charging way, way less for its models than it should — an argument I made in The Subprime AI Crisis last year — but accepting this argument means that at some point OpenAI will either have to become profitable (it has shown no signs of doing so) or charge the actual cost of operating its unprofitable models.
How Bad Is This?
For Anthropic, It's Pretty Disastrous
The Information reported last week that Anthropic has projected (made up) that it will make at least $12 billion in revenue in 2027, despite making $918 million in 2024 and losing $5.6 billion somehow.
Anthropic is currently raising $2 billion at a $60 billion valuation for a business that loses billions of dollars a year with an app install base of 2 million people and a web presence smaller than some niche hobbyist news outlets.
Based on reporting from The Information from two weeks ago, Anthropic made approximately $918 million in 2024 (and lost $5.6 billion), with CNBC reporting that 60-75% of that revenue came from API calls (though that number was from September 2024). In that respect, it’s the reverse of OpenAI — which, itself, points to the relative obscurity of Anthropic and the fact that OpenAI has become accepted as the default consumer entrypoint to generative AI.
This company is not worth $60 billion.
Anthropic has raised $14.7 billion to create an also-ran Large Language Model company that some developers like more than OpenAI, with a competing consumer-facing chatbot (Claude) whose install base is maybe 2% of the combined install base of the five free-to-play games made by Clash of Clans developer Supercell.
Anthropic, much like OpenAI, has categorically failed to productize its Large Language Model, with the only product it appears to have pushed being Computer Use, a similarly-useless AI model that can sometimes successfully do in minutes what you could do yourself in seconds using a web browser.
Anthropic, like OpenAI, has no moat. While it provides chain-of-thought reasoning in its models, that too has been commoditized by DeepSeek. Its models, again like OpenAI, are unprofitable, unsustainable and heavily-dependent on training data that's either running out or has already run out.
Its CEO is also a sleazy conman who, like Sam Altman, continually promises that his company's AI systems will become powerful and autonomous in a way that they have never shown any possibility of becoming.
Any investor in Anthropic needs to seriously consider what it is they're investing in. Anthropic has, other than iterating on its Large Language Model Claude, shown little fundamental differentiation from the rest of the industry.
Anthropic's business, again like OpenAI, is entirely propped up by venture capital and hyperscaler (Google, Amazon) money, and without it would die almost immediately, because it has only ever lost money.
Its products are both unpopular and commoditized, and it lost $5.6 billion last year! Stop dancing around this fact! Stop it!
For Perplexity, Who Cares?
Perplexity, a company valued at $9 billion toward the end of 2024, has eight million people a month using its app, with the Financial Times reporting it has a grand total of 15 million monthly active users for its unprofitable search engine. Perplexity, like every generative AI company, only ever loses money, and its product — generative AI-powered search — is so commoditized that it's actually remarkable the company still exists.
Other than a slick design, there is little to be excited about here — and 8 million monthly active users is a pathetic, embarrassing number for a company with the majority of its users on mobile.
Aravind Srinivas is a desperate man with questionable intentions, who made a half-hearted offer to merge with TikTok in January, and whose product rips off journalists to spit out mediocre content.
Any investor in Perplexity needs to ask themselves — what is it I'm investing in? An unprofitable search engine? An unprofitable Large Language Model company? A company that has such poor adoption of its product that it was prepared to become the shell corporation for TikTok?
Personally, I'd be concerned about the bullshit numbers it keeps making up. The Information reported that Perplexity said it would make $127 million in 2025, and $656 million in 2026.
How much money did it make in 2024? Just over $56 million! Is it profitable? Hell no!
Its product is commoditized, and it made less than a quarter of the revenue the Oakland Athletics made in 2024, though its app is marginally more popular.
It's time to stop humoring these companies!
For The Hyperscalers, Apocalyptic
The Wall Street Journal reports that Microsoft intends to spend $93.7 billion on capital expenditures in 2025 — or roughly $8,518 per monthly active user on the Copilot app in January 2025. Those figures, however, may already be out of date with Bloomberg reporting the company is cancelling some leases for AI data centers. If true, it would suggest the company is pulling back from its drunken AI spending binge — although it’s not clear to what extent.
Sidenote: For what it’s worth, Microsoft responded by saying it stands by its original capex plans, although it “may strategically pace or adjust [its] infrastructure in some areas.” Take from that what you will, while also noting that a plan isn’t the same as a definitive commitment, and that the company paused construction on a data center in January that was reportedly intended to support OpenAI. It’s also worth noting that as part of these cuts, Microsoft has pulled back from so-called statements of qualifications (the financial rundowns that say how it intends to pay for a lease, potentially including financing terms), a document that's a precursor to future data center agreements. In short, it may have pulled out of further data centers it hadn't fully committed to.
Google is currently planning to spend $75 billion on capital expenditures, or roughly $4,167 per monthly active user of the Gemini app in January 2025. Sundar Pichai wants Gemini to be "used by 500 million people before the end of 2025," a number so unrealistic that someone at Google should have been fired, and that someone is Sundar Pichai.
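For transparency, the per-user arithmetic behind both of those figures:

```python
# Capital expenditure per monthly active app user, per the figures above.

microsoft_capex = 93.7e9  # 2025 plan, per The Wall Street Journal
copilot_app_mau = 11e6    # January 2025, per Sensor Tower
print(f"${microsoft_capex / copilot_app_mau:,.0f}")  # ~$8,518 per Copilot app user

google_capex = 75e9       # 2025 plan
gemini_app_mau = 18e6     # January 2025, per Sensor Tower
print(f"${google_capex / gemini_app_mau:,.0f}")      # ~$4,167 per Gemini app user
```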
The fact of the matter is that if Google and Microsoft can't make generative AI apps with meaningful consumer penetration, this entire industry is screwed. There really are no optimistic ways to look at these numbers (and yes, I'm repeating from the above):
- Microsoft Copilot: 11 million monthly active users on the Copilot app, 15.6 million unique monthly visitors to copilot.microsoft.com.
- Google Gemini: 18 million monthly active users on the Gemini app, 47.3 million unique monthly visitors.
These numbers are utterly pathetic considering Microsoft's and Google's scale, especially given the latter's complete dominance over web search and its ability to funnel customers to Gemini. For millions — perhaps billions — of people, Google is the first page they see when they open a web browser. It should be owning this by now.
47.3 million unique monthly visitors is a lot of people, but considering that Google spent $52.54 billion in capital expenditures in 2024, it's hard to see where the return is, or even see where a return could possibly be.
Google, like most companies, does not break out revenue from AI, though it loves to say stuff like "a strong quarter was driven by our leadership in AI and momentum across the business." As a result of its unwillingness to share hard numbers, all we have to look at are numbers like those I've received from Similarweb and Sensor Tower, and it's fair to suggest that Gemini and its associated products have been a complete flop.
Worse still, it spent $127.54 billion in capital expenditures in 2023 and 2024 combined, with an estimated $75 billion forecast for 2025. What the fuck is going on?
Yes, it is likely making revenue from people running generative AI models on Google Cloud, and yes, it is likely making revenue from forcing AI upon Google Workspace customers. But Google, like every single other generative AI player, is losing money on every single generative AI prompt, and based on these monthly active user numbers, nobody really cares about Gemini.
Actually, I take that back. Some people care about Gemini — not that many, but some! — and it's far more fair to say that nobody cares about Microsoft Copilot, despite Microsoft shoving it into every corner of our lives. 11 million monthly active users for its unprofitable, heavily-commoditized Large Language Model app is a joke — as are the 15.6 million unique monthly visitors to its web presence — probably because it does exactly the same shit that every other LLM does.
Microsoft's Copilot app isn't just unpopular, it's irrelevant. For comparison, Microsoft Teams has, according to a post from the end of 2023, over 320 million monthly active users. That’s more than ten times the number of monthly active users of the Copilot app and the Copilot website combined, and unlike Copilot, Teams actually makes Microsoft money.
Now, I obviously don't have the numbers on the people that accidentally click the Copilot button in Microsoft Office or on Bing.com, but I do know that Microsoft isn't making much money on AI at all. Microsoft reported in its last earnings that it was making "$13 billion of annual revenue" — a projected number based on current contracts — on its "artificial intelligence products."
Now, I've made this point again and again, but revenue is not the same thing as profit, and Microsoft does not have an "artificial intelligence" segment in its earnings. These numbers are cherry-picked from across the entire suite of Microsoft products — such as selling Copilot add-ons to its Microsoft 365 enterprise suite (The Information reported in September 2024 that Microsoft had only sold Copilot to around 1% of its 365 customers), selling access to OpenAI's models on Azure (roughly a billion dollars in revenue), and people running their own models on Microsoft's Azure cloud.
For context, Microsoft made $69.63 billion in revenue in its last quarter. $13 billion of annual revenue (NOT profit) is about $3.25 billion in quarterly revenue off of upwards of $200 billion of capital expenditures since 2023.
The fact that neither Gemini nor Copilot has any meaningful consumer penetration isn't just a joke. It should be sending alarm bells throughout Wall Street. While Microsoft and Google may make money outside of consumer software, both companies have desperately tried to cram Copilot and Gemini down consumers' throats, and they have categorically, unquestionably failed, all while burning billions of dollars to do so.
"BUT ED, WHAT ABOUT GITHUB COPILOT."
According to a report from the Wall Street Journal from October 2023, Microsoft was losing on average more than $20 a month per user on the paid version of GitHub Copilot, with some users costing it more than $80 a month. Microsoft said a year later that GitHub Copilot had 1.8 million paying customers, which is pretty good, except, like all generative AI products, it loses money.
I must repeat that Microsoft will have spent over $200 billion in capital expenditures by the end of 2025. In return, it got 1.8 million paying customers for a product that — like everything else I'm talking about — is heavily-commoditized (basically every LLM can generate code, though some are better than others, by which I mean they all introduce security issues into your code, but some produce stuff that’ll actually compile) and loses Microsoft money even when the user pays.
Am I getting through to you yet? Is it working?
On The Prevalence of “AI”
One of the arguments people make is that “AI is everywhere,” but it’s important to remember that the prevalence of AI is proof not of its adoption, but of the intent of the companies shoving it into everything, and the same goes for “businesses integrating AI” that are really just mandating that people dick around with Copilot or ChatGPT.
No, really, KPMG bought 47,000 Microsoft Copilot subscriptions last year (at a significant discount) “to be familiar with any AI-related questions [its] customers may have.” Management consultancy PwC bought 100,000 enterprise subscriptions — becoming OpenAI’s largest customer in the process, as well as its first reseller — and has created its own internal generative AI tool, ChatPwC, which PwC staff absolutely hate.
While you may “see AI everywhere,” integrations of generative AI are indicative of the decision making of the management behind the platforms and the demands of “the market” more than any consumer demand. Enterprise software is more often than not sold in bulk to managers or C-suite executives tasked less with company operations and more with seeming “on the forefront of technology.”
In practical terms, this means there’s a lot of demand to put AI in stuff and some demand to buy stuff with AI on it by enterprises buying software, but little evidence to suggest significant user adoption or usage, I’d argue because Large Language Models do not lend themselves to features that provide meaningful business returns.
Where Large Language Models Work
To be clear, and to deal with the “erm, actually” responses, I am not saying Large Language Models have no use cases or no customers.
People really do use them for coding, for searching defined libraries of documents, for generating draft materials, for brainstorming, and for summarizing. These are useful, but they are not magical.
These are also — and I do not believe there are any use cases that justify this — not a counterbalance for the ruinous financial and environmental costs of generative AI. It is the leaded gasoline of tech, where the boost to engine performance didn’t outweigh the horrific health impacts it inflicted.
On “Agents”
When a company uses the term “agent,” they are intentionally trying to be deceitful, because the term “agent” means “autonomous AI that does stuff without you touching it.” The problem with this definition is that everybody has used it to refer to “a chatbot that can do some things while connected to a database,” which is otherwise known as a chatbot.
In OpenAI and Anthropic’s case, “agents” refer to a model that controls your computer and performs tasks based on a prompt. This is closer to “the truth,” other than the fact it’s so unreliable as to be disqualifying, and the tasks it succeeds at (like searching on Tripadvisor) are remarkably simple.
Next time you hear the term “agent,” actually look at what the product does.
On Artificial General Intelligence
Generative AI is probabilistic, and Large Language Models do not “know” anything, because they are guessing what the next part of a particular output would be based on an input. They are not “making decisions.” They are probability machines, which in turn makes them only as reliable as probability can be, and as conscious — no matter how intricate a system may be or how much infrastructure is built — as a pair of dice.
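If that sounds reductive, here's a toy version of the mechanism: invented scores, four candidate tokens instead of a vocabulary of roughly 100,000, but the same basic loop of scores, probabilities, and weighted dice.

```python
# A toy version of what an LLM does at every step: score the candidate
# next tokens, turn the scores into probabilities, and sample one.
# There is no "knowing" anywhere in this loop, just weighted dice.

import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for continuations of "The capital of France is":
candidates = ["Paris", "Lyon", "a", "the"]
scores = [9.1, 4.2, 1.3, 0.8]  # a real model scores its whole vocabulary

probabilities = softmax(scores)
next_token = random.choices(candidates, weights=probabilities, k=1)[0]

print([f"{c}: {p:.4f}" for c, p in zip(candidates, probabilities)])
print(next_token)  # almost always "Paris", because it's probable, not because it's known to be true
```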
We do not understand how human intelligence works, and as a result it’s laughable to imagine we’d be able to simulate it. Large Language Models do not create “artificial intelligence” — they are the most powerful parrots in the world, trained to respond to stimulus with what they guess is the correct answer.
In simpler terms, imagine if you made a machine arm that threw a bouncy ball down a hallway, and got really, really good at dialing it in so that the ball followed a fairly exact trajectory. Would you consider the arm intelligent? How about the ball?
The point I am making is that Large Language Models — a cool concept with some interesting things they can do — have been used as a cynical marketing vehicle to raise money for OpenAI by lying about what they’re capable of doing, starting with calling them “artificial intelligence.”
No, Really, Where's The Money?
Revenue is not the same as profit.
I'll say it again — revenue is not the same as profit.
And even then, Google, Amazon and (to an extent) Microsoft, the companies making the biggest investments in AI, do not want to state what that revenue is. I hypothesize the reason they do not want to disclose it is that it’s pretty god damn small.
It is extremely worrying that so few companies are willing to directly disclose their revenue from selling services that are allegedly revolutionary. Why? Salesforce says it closed “200 AI related deals” in its last earnings. How much money did it make? Why does Google get away with saying it has “growing demand for AI” without clarifying what that means? Is it because nobody is making that much money?
Sidebar: I can find — and I’ve really looked! — one company that appears to be making a profit from generative AI: Turing, a consultancy that helps generative AI companies find people to train their models, which made $300 million in revenue in 2024 and reached an indeterminate level of profitability.
While Microsoft may “disclose” it “made $13 billion in AI revenue,” that’s annualized — so projected based on current contracts rather than booked revenue — and does not speak to the specific line items like one would if said line items were not going to make the markets say “hey, what the fuck?”
Put aside whatever fantastical beliefs you may have about the future and tell me, right now, what business use case exists that justifies burning hundreds of billions of dollars, damaging our power grid, hurting our planet, and stealing from millions of people?
Even if you can put troublesome things like “morals” or “the basic principles of finance” aside, can AI evangelists not see that their dream is failing? Can they not see that nothing is really happening? That generative AI, at best, can be kind of cool yet mostly sucks and comes at an unbearable moral, financial and environmental cost? Is any of this really worth it?
And where exactly does this end? Do you truly, gun to your head, your life contingent on the truth leaving your lips, believe that this goes much further than you see today?
Do you not see that this kind of sucks? Do you not see that generative AI runs contrary to the basic tenets of what makes science fiction cool? It doesn’t make humans better, it reduces their work to a stagnant, unremarkable slop in every way it can, and reduces the cognition of those who come to rely on it, and it costs hundreds of billions of dollars and a return to fossil fuels for some reason.
It isn’t working. The users aren’t there. The revenue isn’t there. The best time to stop this was two years ago, and the next best time is as soon as humanly possible.
I have said in the past that generative AI is a group delusion, and I repeat that claim today. What you are seeing in the news is not the “success” of the artificial intelligence industry, but a runaway narrative created and sustained by Sam Altman and OpenAI.
What you are watching is not a revolution, but a repetitious public relations campaign for one company that accidentally timed the launch of ChatGPT with a period of deep desperation in big tech, one so profound that it will likely drag half a trillion dollars’ worth of capital expenditures along with it.
This bubble will only burst when either the markets or the hyperscalers accept that they have chased their own tails toward oblivion. There is no justification for any of the capital expenditures related to generative AI — we are approaching the limit of what the transformer-based architecture can do, if we haven’t already reached it. No amount of beating off about test-time compute and connecting Large Language Models to other Large Language Models is going to create a new use case for this technology, and even if it did, it’s unlikely that it ever makes enough money to make it profitable.
I will keep writing this stuff until I’m proven wrong. I do not know why more people aren’t worried about this. The financials are truly damning, the user numbers so small as to be insignificant, and the costs so ruinous that they will likely claim tens of thousands of people's jobs and at least one hyperscaler CEO's (although, admittedly, I’m less upset about that), and inflict damage on tech valuations that may rival the dot-com bust.
And if the last point feels distant to you, ask yourself: What’s in your retirement savings? That’s right. Google and Microsoft, and hundreds of other companies that will be hurt by the contagion of an AI bubble imploding, just as they were in the 2008 financial crash, when the failure of the banking system trickled down into the wider economy.
I should also not be the person saying this, or at least I should not be the first. These numbers are horrifying, and I have no idea why nobody else is worried. There is no industry here. There is no money. There is no proof that this will ever turn into a real industry, and far more proof that it will cost more money than it will ever make in perpetuity.
OpenAI and Anthropic are not real companies — they are free-riders, living on venture-backed welfare for an indeterminate amount of time because the entire tech industry has agreed to rally around the world’s most unprofitable software. And like any free rider that doesn’t actually produce anything, when the money goes away, they’re fucked.
Seriously, why are investors funding OpenAI? Do they seriously believe it’s necessary to let Sam Altman and OpenAI continue to burn 5 or more billion dollars a year on the off chance he’s able to create something that’s…alive? Profitable? What’s the endpoint here? How many more billions? Where is the fucking money, Sam Altman? Where is the god damn money?
Because generative AI is OpenAI. The consumer adoption of this software has completely failed, and appears to be going nowhere fast. ChatGPT is sustained entirely on deranged, specious hype drummed up by a media industry that thinks it’s more remarkable to write down the last lie that Sam Altman told than to say that running OpenAI cost $9 billion last year, a number it intends to more than double in 2025 for absolutely no reason.
It is time to stop humoring OpenAI, and time to start directly stating that it is a bad business without a meaningful product. The generative AI industry does not exist without OpenAI, and thus this company must justify its existence.
And let’s be abundantly clear: OpenAI cannot exist any further without further venture capital investment. This company has absolutely no path to sustain itself, no moat, and loses so much money that it will need more than $50 billion to continue in its current form.
I don’t know how I’m wrong, and I have sat and thought a great deal about how I might be. I can find no compelling arguments. I don’t know what to do but tell you what I think, and why I think that way, and hope that you, the reader, understand a little bit more about what I think is going on.
I’ll leave you with one thought — and one particular thing that bothers me about generative AI.
Regular people, for the most part, do not seem to want this. While there are occasional people I’ll meet who use ChatGPT to rewrite part of an email, most of the people I meet feel like AI was forced into their lives.
With that in mind, I believe that Apple is radicalizing millions of people against generative AI by forcing them to reckon with the terrible summaries, awful suggested texts and horribly-designed user interface elements of Apple Intelligence.
Something about generative AI has caused the hyperscalers to truly lose it, and the intrusion of generative AI into both Microsoft Office and Google Docs has turned just about everybody I know in the business world against it.
The resentment boiling against this software is profound, because the tech industry has become desperate and violative, showing such contempt for its customers that even Apple will force an inferior experience upon them to please the will of the Rot Economy and the growth-at-all-costs mindset of the markets.
Let’s be frank: nobody really needs anything generative AI does. Large Language Models hallucinate too much to be truly reliable, a problem that will require entire new branches of mathematics to solve, and their most common consumer-facing functions like summarizing an article, “practicing for a job interview,” or “write me a business plan” are not really things people need or massively benefit from, even if these things weren’t ruinously expensive or damaging to the environment.
I believe regular people are turning on the tech industry thanks to its frenzied attempts to make us all buy into its latest bad idea.
Yet it isn’t working. Consumers don’t want this shit. They’re intrigued by the idea, then mostly immediately bouncing off of it once they see what it can (or can’t) do. This software is being forced on people at scale by corporations desperate to seem futuristic without any real understanding as to why they need it, and whatever use cases may exist for Large Language Models are dwarfed by how utterly unprofitable this whole fiasco is.
I want you to remember the names Satya Nadella, Tim Cook, Mark Zuckerberg, Sam Altman, Dario Amodei and Sundar Pichai, because they are the reason that this farce began and they must be the ones who are blamed for how it ends.