Soundtrack — Soundgarden — Blow Up The Outside World
A lot of people try to rationalize the AI bubble by digging up the past.
Billions of dollars of waste are justified by saying “OpenAI is just like Uber” (it isn’t) and “the data center buildout is just like Amazon Web Services” (it isn’t; Amazon Web Services was profitable within a decade and cost about $52 billion between 2003 and 2017, normalized for inflation) and, most egregiously, that AI is “too big to fail.”
I think that these statements are acts of cowardice if they are not backed up by direct and obvious comparisons based on historical data and actual research. They are lazy intellectual tropes borne of, at best, ignorance or, at worst, an intellectual weakness that makes somebody willing to take flimsy information and repeat it as if it were gospel. Nobody has any proof that AI is profitable on inference, nor is there any explanation of how it will become profitable at some point, just a cult-like drone of “they’ll work it out” and “look at the growth!”
And the last argument, that AI is “too big to fail,” is the most cowardly of them all, given that said statement is seldom followed by the word “because,” and then an explanation of why generative AI is so economically important, and why any market correction would be so catastrophic, that the bubble must continue to inflate.
Over the last few months I have worked diligently to unwind these myths. I discussed earlier in the year how the AI Bubble is much worse than the dot com bubble, and ended last year with a mythbusters (AI edition) that paired well with my free opus, How To Argue With An AI Booster.
I don’t see my detractors putting in anything approaching a comparable effort. Or any effort, really.
This isn’t a game I’m playing or some sort of competitive situation, nor do I feel compelled to “prove my detractors wrong” with any specificity. I believe time will do that for me.
My work is about actually finding out what’s going on, and I believe that explaining it is key to helping people understand the world. None of the people who supposedly believe that AI is the biggest, most hugest and most special boy of all time have done anything to counter my core points around AI economics other than glance-grade misreads of years-old pieces and repeating things like “they’re profitable on inference!”
Failing to do thorough analysis deprives the general public of the truth, and misleads investors into making bad decisions. Cynicism and skepticism are often framed as some sort of negative process — “hating” on something for the sake of being negative, or to gain some sort of cultural prestige, or as a way of performatively exhibiting one’s personal morality — when both require the courage (when done properly) to actually understand things in-depth.
I also realize many major media outlets are outright against skepticism. While they frame their coverage as “taking on big tech,” their questions are safe, their pieces are safer, their criticisms rarely attack the actual soft parts of the industries (the funding of the companies or infrastructure developments, or the functionality of the technology itself), and almost never seek to directly interrogate the actual statements made by AI leaders and investors, or the various hangers-on and boosters.
This is why I’ve been so laser-focused on the mythologies that have emerged over the past couple of years, such as when people say “it’s just like the dot com bubble” — it’s not, it’s much worse! — because if these mythologies actually withstood scrutiny, my work wouldn’t have much weight.
The Dot Com Bubble in particular grinds my gears because it’s a lazy trope used to rationalize rotten economics, all while disregarding the actual harms that took place. Unemployment spiked to 6%, venture capital funds lost 90% of their value, and hundreds of thousands of people in the tech industry lost their jobs, some of them for good.
It is utterly grotesque how many people minimize and rationalize the dot com bubble, reframing it as a positive, by saying that “things worked out afterwards,” all so that they can use that as proof that we need to keep giving startups as much money as they ask for forever and that AI is the biggest thing in the world.
Yet AI is, in reality, much smaller than people think. As I wrote last week (and as Bloomberg was clearly inspired by!), only 5GW of AI data centers are actually under construction worldwide, out of the 12GW that are supposedly meant to be delivered this year, with many of them slowed by the necessity of importing electrical equipment from abroad and, you know, the fact that construction is hard, and the power isn’t available.
Meanwhile, back in October 2025, The Wall Street Journal claimed that a “giant new AI data center is coming to the epicenter of America’s fracking boom” in a deal between Poolside AI (a company that does not appear to have released a product) and CoreWeave (an unprofitable AI data center company that I’ve written about a great deal). This was an “exclusive” report that included the following quote:
“It is not about your headline numbers of gigawatts. It’s about your ability to deliver data centers,” Eiso Kant, a co-founder of Poolside, said in an interview. The ability to build data centers quickly is “the real physical bottleneck in our industry,” he said.
Turns out Mr. Kant was correct, as it was just reported that CoreWeave and Poolside’s deal fell apart, along with Poolside’s $2 billion funding round, as Poolside was “unable to stand up the first cluster of chips to CoreWeave’s timeline,” probably because it couldn’t afford them and wasn’t building anything. The FT added that “...Poolside was unable to convince investors that it could train AI models to the same level of established competitors.” It was also unable to get Google to take over the site.
Elsewhere, troubling signs are coming from the secondary markets — the place where people sell stock in private companies like OpenAI. Those signs being that, well, nobody’s buying.
Per Bloomberg, over $600 million of OpenAI shares are sitting for sale with no interest from buyers at its current $850 billion post-money valuation, though apparently $2 billion is “ready to deploy” for private Anthropic shares at a $380 billion valuation, according to Ken Smythe of Next Round Capital (a secondary share sale site).
Though people will try to frame this as a case of OpenAI’s shares “being too close to what they might go public at,” one has to wonder why shares of what is supposed to be the literal most valuable company of all time aren’t being sold at what, theoretically, is a massive discount.
One might argue that it’s because people think that the stock might drop on IPO and then grow, but…that doesn’t show a great degree of faith in the company. Investors likely think that Anthropic would go public at a higher price than $380 billion, though I do need to note that the full quote was that “buyers have indicated that they have $2 billion of cash ready to deploy into Anthropic,” which is not the same thing as “will actually buy it.”
In any case, the market is no longer treating OpenAI like it’s the golden child. Poolside’s CoreWeave deal is dead. Data centers aren’t getting built. Oracle is laying off tens of thousands of people to fund AI data centers for OpenAI, a company that cannot afford to pay for them. AI demand, despite how fucking annoying everybody is being about it, does not seem to exist at the scale that makes any part of this industry make sense.
Yet people still squeal that “The Trump Administration Will Bail Out The AI Industry,” and that OpenAI is “too big to fail,” two statements that are not founded in history or analysis, but are the kinds of things that you say only when you’re either so beaten down by bad news that you’ve effectively given up or are so willfully ignorant that you’ll say stuff without knowing what it means because it makes you feel better.
AI Is Not Too Big To Fail, And If You Say It Is You Do Not Know What That Means
As I discussed in this week’s free newsletter, there is a subprime AI crisis going on.
When the subprime mortgage crisis happened towards the end of the 2000s, millions of people built their lives around the idea that easy money would always be available, and that housing would only ever increase in value. These assumptions led to the creation of inherently dangerous mortgage products that never should have existed, and that inevitably screwed the buyers.
I talked about these in my last free newsletter. Negative amortization mortgages, for example, were a thing in the US. These were where the mortgage payments didn’t actually cover the cost of the interest, let alone the principal.
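The arithmetic of negative amortization is worth seeing on paper: when the payment doesn’t cover the interest, the shortfall is added to the balance, and the borrower owes more every month. A minimal sketch, using hypothetical loan numbers rather than any real product:

```python
# Toy negative amortization loan: the payment is less than the monthly
# interest, so the shortfall is capitalized and the balance grows.
# All figures are hypothetical, for illustration only.

def negative_amortization(balance, annual_rate, payment, months):
    for _ in range(months):
        interest = balance * annual_rate / 12
        balance += interest - payment  # unpaid interest piles onto the balance
    return balance

# A $500,000 loan at 6% accrues $2,500 of interest a month; a teaser
# payment of $2,000 leaves the borrower deeper in debt after a year.
print(round(negative_amortization(500_000, 0.06, 2_000, 12)))
```

After twelve months the borrower owes more than they started with, before house prices have moved an inch.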
Similarly, in the UK, my country of birth, many homebuyers used endowment mortgages — an interest-only mortgage where, instead of paying the principal, buyers made monthly payments into an investment savings account that (theoretically) would cover the cost of the property (and perhaps provide some extra cash) at the end of the term. If the investments did extremely well, the buyer could potentially pay off the mortgage early.
Far too often, those investments underperformed, meaning buyers were left staring at a shortfall at the end of their term.
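The endowment shortfall is just compound-interest arithmetic. A minimal sketch with hypothetical numbers (a £100,000 principal and £110 monthly contributions over 25 years), showing how the same plan succeeds or fails depending entirely on the realized return:

```python
# Toy endowment mortgage: the buyer pays interest only, plus a monthly
# contribution into an investment pot meant to repay the principal at term.
# All figures are hypothetical, for illustration only.

def endowment_pot(monthly_contribution, annual_return, years):
    pot = 0.0
    for _ in range(years * 12):
        pot = pot * (1 + annual_return / 12) + monthly_contribution
    return pot

PRINCIPAL = 100_000  # what the pot must cover at the end of the term

# Sold on an assumed 8% annual return, the plan clears the principal...
print(round(endowment_pot(110, 0.08, 25)))  # comfortably above 100,000

# ...but at a realized 4% return, the buyer is left well short.
print(round(endowment_pot(110, 0.04, 25)))  # well short of the principal
```

Same contributions, same term; the only thing that changed was an assumption the buyer had no control over.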
Across the globe, the value of housing was massively overinflated by the lax standards of a mortgage industry incentivized to sign as many people as possible thanks to a lack of regulation and easily-available funding.
The value of housing — and indeed the larger housing and construction boom — was a mirage. In reality, housing wasn’t worth anywhere near what it was being sold for, and the massive demand for housing was only possible with unlimited resources, and under ideal conditions (namely, normal levels of inflation and relatively low interest rates).
Those buying houses they couldn’t afford with adjustable-rate mortgages either didn’t understand the terms, or believed members of the media and government officials who suggested housing prices would never decrease and that one could easily refinance the mortgage in question.
Similarly, AI startups’ products are all subsidized by venture capital, and must, in literally every case, allow users to burn tokens at a cost far in excess of their subscription fees, a business that only “works” — and I put that in quotation marks — as long as venture capital continues to fund it. While from the outside these may seem like functional businesses with paying users, without the hype cycle justifying endless capital they wouldn’t be possible, let alone viable, in any way, shape or form.
For example, Harvey is an AI tool for lawyers that just raised $200 million at an $11 billion valuation, all while having an astonishingly small $190 million in ARR, or $15.8 million a month. It raised another $160 million in December 2025, after raising $300 million in June 2025, after raising $300 million in February 2025.
Remove even one of those venture capital rounds and Harvey dies. Much like subprime loans allowed borrowers to get mortgages they had no hope of paying, hype cycles create the illusion of viable businesses that cannot and will never survive without the subsidies.
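The back-of-envelope arithmetic on those reported figures makes the point starkly (the figures are the ones above; the multiple is my calculation):

```python
# Harvey's reported numbers, per the figures above.
valuation = 11_000_000_000  # $11 billion post-money valuation
arr = 190_000_000           # $190 million in annual recurring revenue

monthly_revenue = arr / 12
revenue_multiple = valuation / arr

print(f"${monthly_revenue / 1e6:.1f}M a month")  # $15.8M a month
print(f"{revenue_multiple:.0f}x ARR")            # 58x revenue, not profit
```

A valuation of roughly 58 times revenue (not profit) only survives if somebody keeps writing nine-figure checks.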
The same goes for companies like OpenAI and Anthropic, both of which created priority processing tiers for their enterprise customers last year, and the latter of which just added peak rate limits from 5am to 11am Pacific Time. Their customers are the subprime borrowers too — they built workflows around these products that may or may not be possible with new rate limits, and in the case of enterprise customers using priority processing, their costs massively spiked, which is why Cursor and Replit suddenly made their products worse in the middle of 2025.
The reason that the Subprime Mortgage Crisis led to the Great Financial Crisis was that trillions of dollars were used to speculate upon its outcome, across $1.1 trillion of mortgage-backed securities. In mid-2008, per the IMF, more than 60% of all US mortgages had been securitized (as in, turned into something you could trade, speculate on the outcome of, and thus buy credit default swaps against). Collateralized debt obligations — big packages of different mortgages and other kinds of debt that masked the true quality of the underlying assets — expanded to over $2 trillion by 2006, though the final writedowns were around $218 billion of losses.
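How tranching masked quality is easy to see in miniature. In the toy waterfall below (hypothetical numbers, grossly simplified versus a real CDO), pooled payments go to the senior tranche first, so the senior slice looks pristine even as the underlying loans rot, right up until defaults climb high enough:

```python
# Toy CDO payment waterfall: collections from the loan pool pay the
# senior tranche first, then mezzanine, with equity taking what's left.
# Hypothetical numbers; a real CDO is vastly more complicated.

def waterfall(collected, senior_due, mezz_due):
    senior = min(collected, senior_due)
    mezz = min(collected - senior, mezz_due)
    equity = collected - senior - mezz
    return senior, mezz, equity

# A pool of 100 loans, each owing $1,000 this period; the senior tranche
# is due $70,000 and the mezzanine $20,000.
for default_rate in (0.05, 0.20, 0.40):
    collected = 100 * 1_000 * (1 - default_rate)
    print(default_rate, waterfall(collected, 70_000, 20_000))
```

The senior tranche is paid in full even with 20% of the pool defaulting, which is how packages of bad loans could carry high credit ratings while the equity and mezzanine slices quietly absorbed the losses.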
By comparison, AI is pathetically small. While there were $178.5 billion in data center credit deals done in America last year, speculation and securitization remain low, and in many cases the amount of actual cash available is in tranches based on construction milestones, with most data center projects (like Aligned’s recent $2.58 billion raise) funded by “facilities” specifically to minimize risk.
As I’ve written about previously, building a data center is hard — especially when you’re building at scale. Finding land, obtaining permits (something which can be frustrated by opposition from neighbors or local governments), obtaining electricity, and then obtaining the labor, machinery, and raw materials all take time. Some components — like electrical transformers — have lead times in excess of a year.
And so, you can understand why there’s such a disparity between the dollar amount in data center credit deals, and the actual capital deployed to build said data centers.
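That disparity is easy to sketch. Assume, hypothetically (the milestone structure of any given deal isn’t public), a facility paid out in equal tranches as construction milestones are hit:

```python
# Toy milestone-based credit facility: the headline commitment is not
# the cash actually deployed. The tranche structure here is a
# hypothetical assumption, not the terms of any real deal.

committed = 2_580_000_000  # headline facility size (e.g. Aligned's raise)
num_milestones = 6         # assumed equal construction milestones
milestones_hit = 2         # say, permits obtained and site work done

deployed = committed / num_milestones * milestones_hit
print(f"headline ${committed / 1e9:.2f}B, deployed ${deployed / 1e9:.2f}B")
```

The headline number goes in the press release; the deployed number is what has actually left anybody’s bank account.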
There also isn’t quite as much willful ignorance on the part of ratings agencies, though that isn’t to say they’re actually doing their jobs. CoreWeave is one of many data center companies that’s been able to raise billions of dollars using its counterparties’ credit ratings, with Moody’s giving an “A3 investment grade rating” to the debt of an unprofitable data center company (one that would die without endless borrowing, and that is insufficiently capitalized to pay it off) because it was able to use Meta’s credit rating and the GPUs in question as collateral.
Nevertheless, none of this comes close to the apocalypse that the global economy faced as a result of the catastrophically dangerous bets made by the entire finance industry during the late 2000s, because those bets weren’t made on housing so much as they were made on financial instruments that were given power because of housing.
Juiced by a mortgage industry that allowed basically anybody to buy a house regardless of whether they could pay for it, by the middle of 2008 nearly $9 trillion of mortgages were outstanding in America (with around $1.1 trillion of home equity loans on top). Trillions more (it’s hard to estimate due to the amount of off-balance-sheet trades that happened) were gambled on top of them as they were packaged into CDOs and synthetic CDOs, where somebody would buy a credit default swap (a CDS, a bet that the underlying assets would default) against them, assuming (incorrectly) that the company issuing the CDS would have the funds to pay out.
As I’ll get into later in the piece, no such comparison exists for AI, and the asset-backed securitization of data centers and GPUs remains very small. Despite many deceptive studies that attempt to claim otherwise, the economy is relatively unaffected by AI, and while software companies might have debt, AI companies, for the most part, do not appear to, and those that do (OpenAI and Anthropic) have credit facilities rather than lump-sum loans.
In totality, the AI industry seems to have made about $65 billion in revenue (not profit!) in 2025, with, I estimate, about a third of that being the result of OpenAI or Anthropic feeding money to hyperscalers or neoclouds like CoreWeave, and billions more being AI startups (funded entirely by VC) feeding money to Anthropic and OpenAI to rent their models.
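A minimal netting-out of those figures, treating the round-tripped money as non-organic (the one-third share is my estimate above; the startup-spend figure is a hypothetical placeholder for the “billions more,” not a reported number):

```python
# Rough netting-out of circular AI revenue, using the estimates above.
# Ballpark article figures plus one hypothetical placeholder; nothing
# here is an audited number.

total_revenue = 65e9                     # ~$65B industry-wide revenue, 2025
circular_to_compute = total_revenue / 3  # est. OpenAI/Anthropic paying clouds
startup_model_spend = 5e9                # hypothetical: VC-funded startups paying model providers

organic = total_revenue - circular_to_compute - startup_model_spend
print(f"${organic / 1e9:.1f}B")  # revenue not funded by other AI money
```

Whatever placeholder you pick, the organic number shrinks fast once AI money paying other AI companies is stripped out.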
Even the venture capital scale of AI startups is drastically overestimated. While (as reported by The New York Times) “AI startups” raised $297 billion in the first quarter of 2026, $188 billion of that was taken by OpenAI (which has yet to fully receive the funds!), Anthropic, xAI, and Waymo. In 2025, $425 billion was invested in startups globally, with half of that (about $212.5 billion) going to AI startups, but about half of that ($102 billion) going to Anthropic, OpenAI, xAI, Scale AI’s not-quite-acquisition by Meta, and Bezos’ Project Prometheus.
The great financial crisis was, as I’ll get into, a literal collapse of how banks, financial institutions, and property businesses operated, with their reckless speculation on a housing market that was only made possible by a craven mortgage industry incentivized to get people to sign at any cost. When people speculated that there was a bubble, articles ran saying that housing was actually cheap, that subprime lending had actually “made the mortgage market more perfect,” that the sky was not falling in the credit markets because unemployment wasn’t going to rise, that subprime mortgages wouldn’t hurt the economy, and that there was no recession coming.
Sidenote: This isn’t to say the media didn’t report on the bubble. In fact, outlets like CNBC that have been staunch supporters of the AI bubble directly reported on Buffett’s concerns about the housing bubble, with even Jim Cramer worrying that the bubble might burst as early as 2005, though he did go on to tell people not to worry about Bear Stearns just before it collapsed.
More specifically, he told people not to pull their money from Bear Stearns, saying that its low price (at the time, it was trading at $65-a-share, almost a third of its one-year high) meant it was more likely to be acquired by a competitor, and at a higher price than its market value.
In the end, it was sold to JP Morgan Chase for $10-a-share.
In any case, OpenAI, Anthropic and AI startups in general are far from “systemic risks.” They are not load-bearing. TARP and associated bailouts did not bail out the markets themselves — the S&P 500 lost around half of its value during the bear market that followed, and home prices only returned to growth in 2012.
I imagine the “systemic risk” argument is that NVIDIA makes up 7% to 8% of the value of the S&P 500, and that makes sense as long as you ignore that Exxon Mobil was around 5% of the value of the S&P 500 in 2008 and saw its value tank for years following the crisis without any bailout to stop it. Microsoft, Meta, Amazon, Google, NVIDIA, Tesla, and Apple are not going bankrupt if AI dies, and anybody suggesting they will is wrong.
NVIDIA’s revenue collapsing by 50% or 80% or more would not cause a “financial crisis,” nor would said collapse be considered a “systemic risk” to the stability of the broader economy, though I admit, it would be very bad for the markets writ large.
Conversely, a similar blow to TSMC — the company that owns the literal foundries that make many of the leading-edge semiconductors used today, including those used for data center GPUs — would be, because TSMC’s chips underpin far more than AI, and its foundries require billions of dollars of upfront investment that a demand collapse of that scale would leave stranded.
GPUs are not critical to the global economy, nor are Large Language Models, nor is OpenAI, nor is Anthropic. Their collapse would end a hype cycle, which would make the markets drop much like they did in the dot com bust, but that is not the same as too big to fail.
Today’s premium is one of the most comprehensive analyses I’ve ever written — a rundown of what makes something “Too Big To Fail,” an explanation of the actual fundamentals of the Great Financial Crisis, and a true systemic analysis of the AI bubble writ large.
None of this is too big to fail, and in many ways its failure is necessary for us to move forward as a society.