Soundtrack: Masters of Reality - High Noon Amsterdam
I have said almost everything in this piece, in one form or another, in these articles for months. I am not upset, just stating an obvious truth: the current state of affairs pushes against the boundaries of good sense, logic and reason, a grotesque, wasteful wound in the side of the tech industry.
August 2, 2024 was Black Friday for the artificial intelligence boom, as a week of rough earnings from Big Tech led what felt like the entire media industry to ask: is the AI bubble popping?
The Guardian sought to answer why the big seven tech companies have been hit with AI boom doubts. CNN asked “has the AI bubble burst?” and The Atlantic suggested (several months too late) that the “Generative-AI Revolution May Be A Bubble.” The Financial Times reported that hedge fund Elliott Management had told investors that Nvidia was “a bubble,” and Bloomberg reported that Big Tech had failed to convince Wall Street that AI is paying off.
While these articles — which were about the recent declines in the share prices of Amazon, Microsoft, and Google — didn’t say anything particularly new, they hinted at a broader awareness on Wall Street that the AI ambitions of these companies (particularly generative AI) will require massive upfront investments, and that the payoff may not actually be there. At least, not for a while. And once a narrative gets settled, it’s very, very hard to move it.
Sidebar: Why Is Wall Street So Upset?
In short, it’s because these companies — while still making over $10 billion in profit in the last quarter alone — have also spent an absolute shit ton of money on infrastructure to capture the “demand” for cloud services from generative AI. However, none of them seem to actually be making that much money from the thing they’re investing in.
In the last fiscal year, Microsoft’s capex (capital expenditure) was $55.7 billion, up 75% year-over-year, with more than one third ($19 billion) spent in the quarter ending June 30, 2024. This is reportedly split 50-50 between infrastructure and tech, which suggests an aggressive data center build-out, with Chief Financial Officer Amy Hood saying that Microsoft “expect[s] capital expenditures to increase on a sequential basis” given cloud and “AI demand” that, as I’ve just said, isn’t really there.
Worse still, Hood added — and I quote Microsoft’s earnings call — that AI-related spend represented “nearly all of our total capital expenditures,” with “roughly half” for infrastructure needs that will “support monetization over the next 15 years and beyond.”
In essence, Microsoft spent $19 billion in the last quarter on cloud and AI expenses, and has made it clear that it’s not done spending more money than it’s ever spent before on a technology that makes neither Microsoft nor the people using it that much money. For context, Microsoft made $22.04 billion in profit ($64.73 billion in revenue) in Q2 2024. Is this really worth sinking nearly an entire quarter’s worth of profits into? Another way to view this: its net profit margin has dropped from 39.44% in Q3 2023 to 34.04% in Q2 2024, meaning it’s taking home less of every dollar it brings in.
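For anyone who wants to sanity-check that margin figure, here’s the back-of-the-envelope version, using the rounded, publicly reported numbers above:

```python
# Net profit margin = net income / revenue (rounded figures, $ billions, calendar Q2 2024)
revenue = 64.73
profit = 22.04

margin = profit / revenue
print(f"Net margin: {margin:.1%}")  # roughly 34%, down from roughly 39% in calendar Q3 2023
```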
The other cloud providers aren’t much better. Google’s capex is forecast to be $50 billion in 2024, and it spent $11 billion in Q4 2023, driven “mostly by technical infrastructure,” meaning servers and data centers, and $12 billion in Q1 2024. The reason I’m not breaking things out much is because Google has been extremely guarded about its AI expenses. Probably because they’re really high and not making the company any money. Amazon is similarly guarded, with its capex last year hanging somewhere around $48.4 billion. It has spent $30.5 billion so far in 2024 — an eye-watering amount considering its profit for Q2 2024 was $13.48 billion.
And there really is no sign that anything changes here. Every hyper-scaler has said that they intend to keep spending lots of money on AI, and I haven’t even mentioned companies like Oracle, which expects to spend $10bn on cloud infrastructure this year, with much of that new capacity going to support Microsoft and Google Cloud.
Yet the real chaos of Black Friday came in the form of one of my pale horses. In Pop Culture, I suggested that the first signs of the AI bubble’s collapse would be the failure of a major AI company — though not one operating at the same scale as OpenAI. One of the most likely failures, I posited, would be Character.ai, which raised $150m in funding and which, The Information hinted, might sell itself to one of the big tech companies.
Well, on Friday — most of this stuff happened on Friday — Google said it would license Character.ai’s technology and hired the company’s leadership, Noam Shazeer and Daniel De Freitas, along with their research team of 30 people, to work at its DeepMind AI division. The fates of the other 140 employees remain uncertain.
Shazeer and De Freitas are both former Google employees, having left the company in 2021 to create Character.ai. It’s unclear whether, given their previous employment, they’ll be required to wear the “Noogler” propeller hat upon their return to Mountain View.
While this is being framed as a typical licensing-and-employee-poaching deal, it’s actually an acquisition by stealth, with Google paying investors $2.5bn. Investors will still own their equity, but without exclusive access to its models and without an engineering team, the company is effectively dead.
Why does this matter? On one level, it’s an indicator of the unsustainability of many generative AI applications. Even with $150m in funding — which is a decent amount of capital — Character.ai likely couldn’t keep the lights on for very long, thus necessitating its absorption by a larger company.
We’re starting to see a shift — or, more accurately, a centralization — in the point of failure for AI. Whereas at one point the burden was shouldered by a large and disparate group of startups and investors, it’s now moving to a few shoulders — Microsoft, Google, Amazon, Facebook, and, to a much lesser extent, Apple.
The companies behind the two most prevalent Large Language Models outside Meta’s Llama — OpenAI’s GPT and Anthropic’s Claude — are effectively big tech’s welfare recipients, receiving billions in cloud credits to run their extremely expensive models without having to build out their own infrastructure. Microsoft invested billions in OpenAI, and both Google and Amazon are propping up Anthropic.
The stars were already aligning in the worst possible way. Then something else happened: The Information reported (again, on Friday) that Nvidia has told Microsoft (and another unnamed cloud customer) that its next-generation Blackwell chips — which are designed primarily for accelerating AI compute tasks — will be delayed by three months due to an unspecified design flaw identified by its contract foundry, TSMC.
It's entirely possible that this flaw takes more than three months to resolve. Semiconductor manufacturing is really, really hard, and issues that seem minor — especially to an outside layperson — can take eons to fix. With Blackwell representing a major leap forward in capabilities (both in terms of power efficiency and sheer compute power), this is undoubtedly a massive blow to OpenAI, as well as its competitors. On a basic level, this might delay some of OpenAI’s grander ambitions for an indeterminate time.
One particularly-worrying quote from The Information’s article was that “Microsoft managers had planned to make Blackwell-powered servers available to OpenAI by January but may need to plan for March or early spring, said a person with knowledge of the situation.”
This is bad for a few reasons:
- OpenAI desperately needs something new. It needs something that will show both investors and the media that it is building something meaningful, and while it’s possible it will be able to deliver GPT-5 (its next model) in the near-term future, even potential customers don’t believe the jump from the current model, GPT-4, will be significant.
- For reasons I’ve alluded to at the start of this article, investors are getting impatient with the AI hype boom — and, more specifically, with the giant tech companies that are bankrolling it.
- Nvidia’s Blackwell chips would help OpenAI’s models run faster and ingest training data faster, as well as — even if a real leap in capability isn’t possible — fool investors into believing that OpenAI “had the latest tech,” letting Sam Altman keep the con going.
- Regardless, this puts it very plainly: OpenAI only gets access to the latest technology as fast as Microsoft permits.
This is a problem that extends to Google and Microsoft, too. Nvidia’s Blackwell chips were supposed to be a major technological leap for generative AI companies, designed with their needs in mind, and thus providing vastly more compute power and energy efficiency. But now they’ve been delayed, with availability not expected until the first quarter of 2025 at the earliest — and perhaps later, with Nvidia forced to do new test runs with its foundry partner before scaling up to mass production.
On its own, any one of these events would be a worrisome sign of the bubble popping; together, they threaten to begin the collapse I’ve been predicting since March — when I gave AI three quarters to prove itself before the bubble burst, “savaging the revenues of the biggest companies in tech.”
I’ve realized now that it isn’t especially useful to attach things to a timeline (though I stand by my prediction), and thus I think it’s more useful to spell out what the bubble popping would actually look like.
I am defining the bubble popping as the major cloud companies dropping capex on generative AI in a public and significant manner, or Anthropic or OpenAI collapsing. To be clear, that collapse doesn’t have to be an Enron-style implosion: it could be absorption into another company on unfavorable terms, being forced to radically curtail its offering to reflect its massively diminished means, or being forced to pivot to a less capital-intensive model like IP licensing, becoming a warmed-up version of the Santa Cruz Operation, one of the most despicable companies in tech history, albeit for different reasons.
However, I believe that the collapse (or absorption) of OpenAI is the one critical sign to look for. OpenAI began this hype cycle, Sam Altman is the P.T. Barnum of the Large Language Model circus, and it has absorbed more money and attention than any other startup of the last few years. It is symbolic of the excess and waste of the generative AI boom, and its death (or, as mentioned, some other kind of collapse, such as acquisition) is the sign that we’re done here, in the same way that FTX signaled the end of the cryptocurrency boom.
How will we know it’s happening? Well, the pale horses to watch out for are:
- Any price increases by OpenAI or Anthropic: If they start needing to make more money, they could get desperate. This would be a sign that their unit economics are no longer working out, and that their investors know it.
- Stories about general discord in AI investment: Any stories about investors fleeing AI — as they did the metaverse — are a sign that things are falling apart.
- Stories about OpenAI and Anthropic having trouble raising money: Up until now, there haven’t been any rumors of OpenAI or Anthropic — the biggest independent Large Language Model companies — having any trouble raising capital, despite the fact that Anthropic is expected to burn $2.7 billion in 2024, just over half of the $5 billion OpenAI is expected to lose. Both of them need to raise billions of dollars, and will have to do so soon. And if they can’t raise, nobody can.
- Any suggestion that Google or Microsoft is reducing its capex: Venture capital isn’t really what’s propping up generative AI — it’s Google and Microsoft. If either of them decides it’s time to slow down investment, the boom is done, as referenced above.
- Discord within or the collapse of other major players: Scale AI — a training data company that has raised over $1.5 billion — and Cohere (another Large Language Model company that recently raised $450 million from Nvidia and Salesforce) are the other players to watch. These are stable, revenue-generating (yet unprofitable) ancillary players that likely will need to raise in the next 12 months. If we hear about “problems,” this means that people internally are worried, and they know more about their companies than we do.
- Discord within OpenAI or Anthropic: This one’s fairly obvious. If we hear there are problems, they are worried, and if they’re worried, they’re likely worried for the reasons I’ve written above. And these problems would likely find their way into the tech press, much like how discord within the White House, or the two major political parties, invariably finds its way into the byline of Maggie Haberman. If Axios or TechCrunch or 404 Media start reporting on dust-ups within the San Francisco offices of OpenAI with increasing regularity, you know bigger things are afoot.
- Any discussions of layoffs: Based on people I’ve talked to, AI companies are paying incredibly large salaries in stock and cash, and are not run particularly efficiently even on an operational level. If AI jobs are no longer cushy hidey-holes for Silicon Valley’s most expensive engineers, things will sour fast.
- A big, stupid magic trick: As these companies get desperate, expect someone — especially OpenAI — to try and show something “new and crazy” as a means of trying to turn the narrative. When or if this happens, look very carefully at what they say about the product’s availability, or what it can do, or who they show it to.
- Alternative scenario — Sora launches: If OpenAI gets desperate, it may move up the public launch of Sora, its generative video product. Doing so will only cause more problems — there isn’t a chance in hell that Sora is profitable, and I’m fairly sure it’s even more expensive to run than ChatGPT, and I imagine its visual inconsistencies and hallucinations would make for some entertaining content for YouTubers and tech reporters.
The common thread between all of these events is that they’re all expressions of desperation, fear, and a total lack of confidence in the underlying profitability of generative AI. So far — and this is napkin math — I’d estimate that a total of $200 billion has been spent to get generative AI to this point, in infrastructure, in funding, in energy, in so many different meaningless ways, all to get us to the point that we have a tool that’s really good at generating things that aren’t as good as what a human could make.
To be clear, we’re not quite at the bubble popping yet. The reason I choose the collapse of OpenAI as “the event” is that it will mean Microsoft decided to cut it off, and there wasn’t enough banker or venture capital interest to prop it up. OpenAI’s collapse would be both financial and symbolic — the sign that the valley would let a company die, and that this idea wasn’t good enough for everybody to keep staking their futures on.
In any case, things could change. The bubble could stay inflated.
As I wrote last week, OpenAI needs to raise more money than anybody has ever raised in the next 12-24 months, or launch a product so significant that it blows ChatGPT out of the water — things that are unlikely, but possible. Microsoft, Google and Amazon could somehow change the narrative, but doing so would also require some sort of technical breakthrough with an obvious way of making money, something they have yet to deliver.
The markets are also capricious. As mentioned above, on Friday, Elliott Management — a hedge fund with over $70bn in assets under management (AUM) — said, in a letter to clients, that AI is “overhyped” and Nvidia is a “bubble,” adding that AI had not delivered “value commensurate with the hype.”
While it’s entirely possible that some will dissent to prop things up (if you’re deep in the hole with Microsoft or Nvidia, you’ll likely try to justify your decision in the face of growing skepticism), it’s worth putting Elliott Management’s note in a broader context. It’s only the latest voice in a chorus of critics, whose members include esteemed analyst houses and some of the largest investment banks in the world.
Goldman Sachs put out a report in July that said generative AI was too expensive and didn’t solve the complex problems it would need to solve to justify said expenses. Just over a week ago, Gartner put out a report predicting that 30% of generative AI projects would be abandoned after proof of concept by the end of 2025, and the Washington Post reported Barclays’ view that “Wall Street analysts are expecting Big Tech companies to spend around $60 billion a year on developing AI models by 2026, but reap only around $20 billion a year in revenue from AI by that point.”
That projected $20 billion in annual AI revenue is only a billion dollars more than Microsoft’s capital expenditures in the last quarter alone.
Microsoft also has a storied history of pumping and dumping ideas. Augmented reality was an “absolute breakthrough” for the company in 2019, before it quietly shoved the technology in a corner with mass layoffs and business line closures a few years later. In 2021, Satya Nadella “couldn’t overstate” how much of a breakthrough the metaverse was, yet two years later — and big props to Preston Gralla of Computerworld for calling this out in February 2023 — Microsoft laid off most of the people involved and moved on to “betting everything” on artificial intelligence.
Why, at this point, would Microsoft give OpenAI any more money, other than to save face? Though it’s a very real possibility that Satya Nadella is surrounded by yes men, Microsoft already has the right to sell OpenAI’s “pre-AGI products” and full access to its research, as well as “certain rights to OpenAI’s intellectual property.” Unless OpenAI is capable of delivering something meaningfully different — something Nadella can wave in front of analysts' faces to show them that they can automate a million jobs immediately — its existence is another cost center, and one that could be eliminated during a tough quarter.
The narrative has shifted. For this to continue, Sundar Pichai of Google and Satya Nadella of Microsoft — the loudest of the publicly-traded generative AI hype men — will have to show everybody something remarkable, and even then, they will have to show how it makes real money.
In the best-case scenario, these companies would either have to dramatically reduce costs — something they have yet to do, and have in fact predicted the opposite of, with 2025 expected to bring higher capital expenditures than even the last year’s unprecedented spending spree — or find ways to make generative AI so much more efficient that it somehow balances things out.
What we’re really waiting for now is somebody to blink. If none of the major cloud companies — who are usually in lock-step with each other to keep their cartel-esque empires alive — change course over the next three months, we could potentially see this boom continue for another quarter.
For it to continue, they will have to commit to billions of dollars in — at the very least — cloud credits for OpenAI and Anthropic, as neither of those companies can survive without further cash infusions. They will also have to find some way to make any of this useful and profitable, two challenges that generative AI has never really been able to meet.
And they will have to figure out a way to support these companies in the face of rising disquiet among shareholders, who, in the most recent rout of AI-ensnared stocks, lost $2.6tn in value. As I pointed out in The Shareholder Supremacy, investors don’t care about the future (which generative AI isn’t) or innovation (which generative AI isn’t, unless you consider the mass-industrialization of larceny to be innovative), but rather what sends their portfolios cruising ever higher.
If generative AI becomes an albatross around the necks of Google, or Microsoft, or Amazon — which I believe it will — these companies will face serious pressure to curtail their investments and their capex. Investors will ask why these companies are spending a collective (estimated) additional $100bn in capex in 2025 on a technology that likely won’t drive any revenue growth, when that money could be used for things like dividends and stock buybacks instead.
In any case, I believe the next month will be critical to the future of the artificial intelligence boom.
And I believe this is a time for industry-wide introspection — and to consider why this bubble existed in the first place.
None of this ever made sense, and it is time for this industry to atone, to take this group psychosis and learn from it. Generative AI was always unsustainable, always dependent on reams of training data that necessitated stealing from millions of people, its utility vague and its ubiquity overstated. The media and the markets have tolerated a technology that, while not inherently bad, was implemented so nefariously and wastefully that it necessitated theft, billions of dollars in cash, and double-digit percentage increases in hyperscalers’ emissions.
The tech industry’s desperation to “have something new” has led to such ruinous excess, and if this bubble collapses, it will be the result of a shared myopia among both big tech dimwits like Satya Nadella and Sundar Pichai, and Silicon Valley power players like Reid Hoffman, Sam Altman, Brian Chesky, and Marc Andreessen. The people propping this bubble up no longer experience human problems, and thus can no longer be trusted to solve them.
This is a story of waste, ignorance and greed. Of being so desperate to own the future yet so disconnected from actually building anything. This arms race is a monument to the lack of curiosity rife in the highest ranks of the tech industry. They refuse to do the hard work — to create, to be curious, to be excited about the things they build and the people those things serve — and so they spent billions to eliminate the risk that they might ever have to do any of those things.
Had Sundar Pichai looked at Microsoft’s investment in OpenAI and said “no thanks” — as he did with the metaverse — it’s likely that none of this would’ve happened. But a combined hunger for growth and a lack of any natural predators mean that big tech no longer knows how to make competitive, useful products, and thus can only look at what its competitors are doing and say “uhhh, yeah! That’s what the big thing is!”
Mark Zuckerberg was once so disconnected from Meta’s work on AI that he literally had no idea about the AI breakthrough Sundar Pichai complimented him on in a meeting mere months before Meta’s own obsession with AI truly began. None of these guys have any idea what’s going on! And why are they having these chummy meetings? These aren’t competitors! They’re co-conspirators!
These companies are too large, too unwieldy, too disconnected, and do too much. They lack the focus that makes a truly competitive business, and lack a cohesive culture built on solving real human or business problems. These are not companies built for anything other than growth — and none of them, not even Apple, have built something truly innovative and life-changing in the best part of a decade, with the exception, perhaps, of contactless payments. These companies are run by rot economists and have disconnected, chaotic cultures full of petty fiefdoms where established technologists are ratfucked by management goons when they refuse to make their products worse for a profit.
There is a world where these companies just make a billion dollars a quarter without having to fire people every few months, one where they actually solve real problems and make incredibly large amounts of money for doing so. The problem is that they’re greedy, and addicted to growth, and incapable of doing anything other than following the last guy who had anything approaching a monetizable idea, the stench of Jack Welch wafting through every boardroom.
So, where are we, then? What happens next?
If this is the end — and there is every chance it keeps going for a few more months, but the next week is crucial — it will be sudden and violent.
There is enough money sloshing around right now that OpenAI can be bankrolled further — as discussed, it needs another $5 billion, if not $10 billion, to make it to the end of 2025 — but that will have to come through the Kingdom of Saudi Arabia, SoftBank, and private debt. And those suitors won’t provide terms nearly as generous as those from Microsoft, which, as I previously argued, actually gave OpenAI a really bad deal.
While the venture capital system theoretically could afford to do so, its limited partners — who were already a little sore from the excess of 2021 and 2022’s fads, not to mention the sudden and rapid rise in interest rates — might balk at the amounts required. We’re talking $100 million checks, and lots of them, from everybody.
And it will only take one person blinking to create a panic in both public and private markets. OpenAI may be able to raise money, but investors have no real path to liquidity, as OpenAI most assuredly won’t be able to go public, and anyone doing due diligence on an investment will find out exactly how bad things are there. While there is a chance we get one more round — a last gasp — it’s honestly a little tough to imagine how it keeps going even if it raises another $10 billion.
So how does a collapse happen?
There’s the obvious one — OpenAI or Anthropic shutting down — but I think that’s very unlikely given their corporate structures. OpenAI or Anthropic “collapsing” will look more like the Character.ai deal: an attempt to hide the ugly mess, with a big tech firm absorbing them. If I had to guess, Anthropic’s most likely home would be with Amazon, which completed its $4bn investment in the company earlier this year.
In any case where this happens — even if the entity “keeps going” without its key people (say, Anthropic CEO Dario Amodei going in-house at Amazon or Google, again) — it doesn’t really matter. These companies will eventually putter out.
Raising a round for a generative AI company in this market is likely going to be difficult, if not impossible. The narrative has shifted, and the people with the money to invest don’t really know what they’re talking about, technologically speaking, so they invest based on how everybody else feels about doing so.
Satya Nadella doesn’t care about AI, and neither does Sundar Pichai or Mark Zuckerberg. They care about accumulating power and creating the next growth vehicle for a “tech industry” that resembles the monstrous form of General Electric in Jack Welch’s era — a directionless rat king of 25 different companies all fighting to absorb monopolies. Sam Altman doesn’t care about AI either. He cares about accumulating power, has been doing so for years, and will be back with another con within three years.
And, fundamentally, this technology really isn’t that exciting.
I’m sorry, it isn’t. The idea of a “super smart friend that knows everything about me” is exciting, as is the idea of automating away drudgery — spreadsheets and documents, for example. Yet generative AI doesn’t actually do these things. Putting aside any feelings I may have, it has not changed my life, and I’m a guy who loves using technology and is willing to put hours into finding ways to automate or mitigate problems in his life.
I don’t know a single person — I have literally never met or spoken to one — who has meaningfully changed their workflow or life as a result of generative AI, beyond integrating it to speed up some processes in already-existing systems.
Nothing about this was real. It’s a farce. The promises of artificial intelligence will not be kept through the pursuit of generative AI, as generative AI is a dead-end technology that has peaked, one that costs too much — both financially and socially — to keep it going.
There is no heroic story here. Few people are sincerely cheering for these companies to win — at least, few who don't have a tangible financial return riding on the value of OpenAI and its related tendrils — and let's be honest, nobody really LOVES this, do they? Nobody really uses ChatGPT and has that childlike "oh shit" moment we all had with the iPhone, or sending a file using Dropbox, the times (large and small) that actually made us love technology.
There is no magic, no whimsy, no joy in any of these Large Language Models and their associated outgrowths. They're not helping anybody, yet their peddlers demand our fealty and our applause.
The general public has known the problems and limitations of generative AI for months. I constantly get emails from artists, engineers, teachers, machinists, journalists, consultants, cleaners and copy editors all saying the same thing — “this all seems like a crock of shit” and “the outputs are terrible.”
Generative AI was not peddled as a solution to any problem other than tech’s lack of new hypergrowth markets, and it’s the kind of bet that only a tech industry devoid of natural selection would make. These companies don't take risks, because they don't want to end up looking stupid. And regular people know — they know Google is worse, they know Facebook is bordering on unusable, they know that generative AI doesn’t do anything obviously useful, and that these companies have never made more money than they’re making today.
And I believe we are in the beginnings of a true revolt against big tech and their rotten promises. Their luck has run out, and these formless private equity vehicles dressed up as software companies are no longer bred to create new things.
We’re all sitting and staring at innovation’s corpse, stabbed full of holes by McKinsey veterans and slimy con artists — and it’s time for men like Sundar Pichai and Satya Nadella to atone, because their big bet looks like a gigantic waste, one so obviously rotten from the beginning that they could never admit they had no plan beyond "throw money at it, and eventually it will do something," or "we will do this as long as the other guy does it."
This is a big, ugly mess, a monument to excess that burned more than $200 billion and allowed equally lazy and uncreative people to replace real people with shitty facsimiles that are "just good enough."
This is a time for the press to reevaluate how they treat big tech, and to question the promises of those who sought to peddle this nonsense.
It is no longer time to ask Mark Zuckerberg about his chain — it's time to ask him how long he intends to burn tens of billions of dollars on nothing, and why the website that made him so many billions is falling apart at the seams, pumped full of content created by the big stupid bullshit machine he's helped popularize. It is no longer time to gingerly coo about Sundar Pichai holding your smartphone — it's time to ask him how Gemini is going to make him a profit, and what it is that it will do, and press him again and again and again about how awful Google search has got. It is no longer time to let Satya Nadella oscillate from "referendums on capitalism" to laying off tens of thousands of people — it's time to ask him if he is prepared to bankroll OpenAI to the tune of $30 billion, and refuse to leave the conversation without committing to a number.
These men have burned hundreds of billions of dollars on a machine that boils lakes to make the most mediocre version of the past. These men are not supporting innovation — they're regressive forces too scared of pissing off the markets to actually change the world, domesticated animals that fear no natural predators and thus have no hunger for something new. They propped up generative AI because they’re followers rather than leaders, paid hundreds of millions of dollars to empower a nihilistic kind of growth-capitalism that dresses up monopolies as “disruption.”
They have propped up generative AI because building the future is hard, and expensive, and involves taking risks, and making sure you have a culture that doesn’t expect the answer to every question in the shortest time possible. Innovation isn’t efficient. It’s lossy, has indeterminate time scales, and requires both capital investment and the space for great people to think of actual solutions to problems, which involves them talking to and caring about real people.
Fire Satya Nadella, fire Sundar Pichai, and work out a way to fire Mark Zuckerberg. These men are unworthy. They don't truly find any of this exciting or meaningful. Big tech has been overcome with nihilism, and it must change its ways to not eventually face perdition.
I’m serious.
Do you think they’ve got a backup plan? Do you think they burned all of this money on such a mediocre and wasteful solution because they’ve got better ideas?
They're all out. Big tech doesn’t have another thing. Google doesn't have any ideas. Meta is so out of ideas that it changed its name to another, worse idea, one associated with Mark Zuckerberg burning $45 billion for no reason. Amazon has never been in the ideas business — it’s the founder of the cloud storage cartel with Microsoft, Oracle and Google. Big Tech is lazy because they've all agreed to compete only a little, never coloring too far outside the lines, because doing so might expose them to an actual risk, like regulation, or having to hire people who build new things rather than exploit current customers.
I am sure that there is a societal shift against big tech coming — the natural result of years of excess and bullshit, of the loudest people in tech trying to convince people that a Large Language Model that can generate text and images that kind of suck is tantamount to the launch of the iPhone.
I have no idea how quickly the shift comes, but it is inevitable that this lack of big, profitable ideas leads to some form of collapse in big tech. Healthy, well-run companies don't do things like this — let alone several different trillion-dollar companies all bumbling into each other like the Three Stooges, billions of dollars in cash falling out of their pockets. It's a disgrace.
This is also — if the collapse happens — a time to point and laugh. Regular people could see this was bullshit from the beginning. Most people find generative AI either briefly interesting or outright disgusting, and they're capable of seeing that stealing the entire internet so you could spend $100bn making an inferior version of something is a stupid idea.
Sidenote: I know this piece throws a lot of numbers around: from the amount received by companies like Anthropic and OpenAI, to the CapEx spends of Google and Microsoft, to the amount of market value lost during the latest generative AI sell-off. And one thing I wanted to make clear is that — at least, with respect to the publicly-traded companies — we have no idea how much they’ve actually spent. The figure could be way, way higher than anything cited here.
Companies have to report their capital spending, but there’s a bunch of stuff that gets absorbed into other line items. We don’t have, for example, a breakdown of how many well-paid Microsoft or Google employees work on generative AI. We don’t know what proportion of their R&D spend goes to generative AI — which, no doubt, is significant, especially considering these companies design their own chips.
It’s hard to figure out the cost of their investments, especially when they’re structured in such weird ways — like Microsoft’s kinda-acquisition of Inflection, or Google’s kinda-acquisition of Character.ai, where they got the talent and the tech but left the equity behind. Or Microsoft’s deal with OpenAI, where OpenAI, instead of receiving cash, got Azure air miles.
And a dollar of credit isn’t necessarily a dollar. What’s the actual cost basis of a $1 Azure credit to Microsoft? Does OpenAI get favorable rates? We don’t know.
And as we work our way down the chain, you come across other expenses worth factoring in. If you’ve watched the Paris Olympics at all, you’ve probably seen a few ads for Microsoft Copilot. That’s some prime time real estate, right there. Cumulatively, we’re looking at a few hundred million.
The point I’m making is that whatever estimates we have about what these companies have spent on generative AI are probably wrong by tens of billions of dollars, and the only really reliable(ish) indicators are capex spending, and what’s publicly reported whenever these companies make a new investment.
Economists, analysts, and journalists have been saying this isn't the future — Shira Ovide, for example, writing in March 2023, mere months after ChatGPT's launch, that it wasn't magic. Alex Kantrowitz and Douglas Gorman reported on August 15, 2023 that “ChatGPT isn't good enough to take jobs and is unlikely to cause mass layoffs: ‘The hot takes have run into reality.’”
Paris Marx called the ChatGPT revolution “another tech fantasy” in July 2023, and Gary Marcus worried about generative AI being a dud in August 2023. While this is by no means an exhaustive list, it’s important to note that people have been saying this, people with big platforms. The markets chose to ignore them.
As an aside: A year later, when many outlets were still tripping over themselves to say that generative AI was the future, Shira asked “if these chatbots are supposed to be magical, why are so many of them dumb as rocks?” It’s important to call out when a national newspaper actually does the work that needs to be done to critique the powerful.
At this point, it seems inevitable that things collapse — that the acceleration has stopped, that the funding will not be there, and that the market’s taste for the billions of dollars it’ll take to make generative AI an indeterminate level of better has disappeared.
By all means, tell me I'm wrong. I would be genuinely intrigued to see how OpenAI gets out of the jam of somehow making a meaningful product that makes money when it doesn't have the chips to train GPT-5, a model that likely wouldn't change the game enough to matter, let alone a new architecture to build future (and more capable) models on.
In the event that Anthropic or OpenAI is able to raise, we’ll see the cycle continue another quarter, at which point something crazy needs to happen to make any of this worth it. If these companies can raise, they’ll keep the bubble inflated, though not forever. And there’s likely still enough froth to find a couple billion in cash, if they’re lucky.
But it's really worth considering that the narrative is BAD, and there is very little they can do to change it. They can play for time, hoping that Nadella and Pichai stand pat and say they have "great faith in the promise of AI," but they don't get another earnings season that looks like this, let alone a worse one.
In many ways, this is Sam Altman's greatest gambit, and so much lies on OpenAI's back.
This is the moment we find out if Sam Altman has really got it — whether it's worth wringing our hands about how scary he is, or whether he's just able to hoodwink enough credulous billionaires that have their hands welded to eternal money printers.
Impress me, Samuel! Show me what you've got! Show me what great works you have buried in that non-profit for-profit monstrosity you've used to trick other visionless power brokers into betting their futures on a dead end, led by a man best known "for an absenteeism that rankled his peers and some of the startups he was supposed to nurture."
Show me what you've got, Sam. Show us all. What's the big play? You've got maybe six months, and that's in the most generous estimates.
What's that? SearchGPT? Sam, you're crazy! There's no way you can build a Google Search competitor! Trust me, I know.
On Friday, I spent a few hours looking up Google’s financial reports over the past decade or so, trying to figure out how much the company spends on operating costs and capital expenditures just related to its search product. Due to the way Google structures its financials, I had to make some guesses (some more informed than others). I reckon it costs Google $30 billion or more a year to run its flagship search product, and it actually works, sometimes!
Also, to build a Google Search competitor, you'd have to build an ad tech company like DoubleClick (which Google acquired in 2007 for $3.1 billion) and build something like AdWords (which Google got when it acquired Applied Semantics in 2003), and that's gonna cost you another $3 billion, or you’re gonna have to outsource it, which will make it worse and less profitable, because those outsourced companies are going to take a cut of your revenue, thus reducing your margins even further.
Then you've got to hire experienced ad personnel and use that ad tech infrastructure you built — you built that, right? You need it to make money — to enter a market Google will use its cartel power to kick you out of.
Sorry, what was that? Make money? Oh right, yeah. Like I said, Google Search costs a ton to run — $30 billion, easily — and that's with decades of optimization, a coding language that Google invented, thousands of miles of underground cables, and decades of experience in doing a thing you're just learning to do, with specialized PhDs who have also been there for decades.
You did all that? Great! You’re on a roll. Now, just checking — how many ad sales people do you have? None? You don’t have any ad sales people for your search engine? Sam, you’re gonna need them! Google had over 30,000 ad sales people before its last round of layoffs. You’re not gonna make this thing profitable without ads.
In fact, I don’t think you’re gonna make it profitable at all!
So let's have a think (there's a rough sketch of this math in code after the list)...
- Okay, so Google had around $300bn in revenues in FY 2023.
- If we divide that by the 5.2bn global internet users, we’re assuming each person contributes roughly $58 to Google’s revenues.
- Obviously, it doesn’t work like that. Someone in the West will make more money for Google than someone living in, say, Uganda and using a quasi-smartphone with limited Internet abilities.
- Also, there are markets where Google just isn’t a thing. China, for example, where the company has been banned for decades, or Russia, where Yandex has two-thirds of the market.
- For the sake of argument, let’s just assume that each user in the West contributes around $100 to Google’s revenues.
- And let’s also assume that search accounts for half of that (which is probably low). So, call it $50, with the rest coming from cloud services, devices, subscriptions, and so on.
- Given a 35% margin on the search product (which, again, I came to from some inelegant back-of-napkin math after hours looking at Google’s financials), Google is spending $32.50 per user on servicing queries, and making $17.50 in profit. Again, this is all an estimation.
- Now, if OpenAI’s search product cost the same to operate as Google, it would have to figure out a way to make $50 from each user to end up at the same place.
- Now, that’s not how this is going to work. First, OpenAI’s going to be using GPT. So, let’s multiply expenses by six times — which is probably generous, with estimates suggesting it might be more like 10 times, and that’s not including the cost of actually building the technical and human infrastructure needed to support a search product.
- That $32.5 is now $195. If it wanted to have the same margins, it would need to make $105 in profit from each user. So, $300. $300 a user for a product it hasn’t even built yet.
- Let alone has created a stable monetization strategy for. I can’t imagine anyone spending $300 a year on a search product, and if OpenAI does charge, I can’t imagine it aggressively serving ads to make up the difference.
- And that assumes it has the infrastructure to actually serve, track, and sell those ads — which it doesn’t.
- Note: It isn’t clear how many users Google has: somewhere between 1 and 3 billion, I’d wager. But Google has over 90% of the search market, so it’s safe to assume it effectively owns the majority of the internet’s userbase.
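To make that napkin math easier to follow, here’s a minimal sketch of the same calculation in code. Every input is one of the assumptions from the list above (per-user revenue, the share from search, the margin, the 6x cost multiplier), not a reported figure:

```python
# Napkin math: what would a GPT-powered search product need to earn per user?
# All inputs are the assumptions from the list above, not reported figures.

revenue_per_user = 100.0   # assumed annual Google revenue per Western user ($)
search_share = 0.5         # assume half of that comes from search
search_margin = 0.35       # assumed margin on the search product
cost_multiplier = 6        # assume GPT-powered search costs ~6x as much to serve

search_revenue = revenue_per_user * search_share            # $50
search_profit = search_revenue * search_margin              # $17.50
search_cost = search_revenue - search_profit                # $32.50

gpt_search_cost = search_cost * cost_multiplier              # $195
# To keep the same 35% margin, revenue must satisfy: revenue * (1 - margin) = cost
gpt_revenue_needed = gpt_search_cost / (1 - search_margin)   # $300
gpt_profit = gpt_revenue_needed - gpt_search_cost            # $105

print(f"Google (assumed): ${search_cost:.2f} cost, ${search_profit:.2f} profit per user")
print(f"GPT-powered search (assumed): ${gpt_search_cost:.2f} cost per user, "
      f"needs ${gpt_revenue_needed:.2f} in revenue to make ${gpt_profit:.2f} at the same margin")
```

However you shuffle those assumptions, the conclusion holds: at Google-like margins and GPT-like costs, a search product needs hundreds of dollars per user, per year.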
OpenAI will have to raise capital on the idea that SearchGPT is able to become a meaningful competitor to the most profitable software business ever made, one that Google will also likely protect with various monopolistic weapons like its vast treasure trove of patents.
On top of that, Google spends — as I’ve said above — billions of dollars on hardware and property, and OpenAI would have to build infrastructure that dwarfs even its current outlays for generative AI. Or it would have to get Microsoft to build it, or lease it from the likes of Oracle, and probably pay for it too. And I note that Microsoft recently described OpenAI as a competitor in the search business, and it’s unlikely to kill Bing, which has been a profitable business since 2016.
It’s time to stop.
Why even pretend anymore? Anthropic is propped up by Google and Amazon, which have invested billions, much of it in cloud credits. Microsoft props up OpenAI to the point that Nvidia’s chip delay affects it because daddy Microsoft was buying those chips for OpenAI to use. And Google, Amazon and Microsoft are all making their own models, to the point that Microsoft has already reported that it sees OpenAI as a competitor in not just search, but also news advertising.
And it’s all so craven.
Big tech funded a cloud-dependent product that costs so much more than anything else on the market, based on specious hype about what “artificial intelligence” could mean. They did so as a means of selling a new product — one that, while not particularly useful, also means they can charge a lot for the cloud compute it runs on, creating demand for a service that pays them money just for existing, allowing them to vastly expand their physical data center footprints and further enforce their monopolies over the cloud storage industry.
Microsoft, Google, Meta and Amazon created a new way to turn money into more money — investing in a technology that requires you to pay them all a lot of money for the cloud compute services needed to run it. The problem, it seems, was that none of them were willing to consider a world where this stuff never turned into anything remarkable beyond its waste. And a lack of creativity in the tallest towers of Silicon Valley and Redmond has allowed the richest men in the industry to be conned by a snake oil machine that loses billions of dollars.
To be clear, there is nothing wrong, on its face, with generative AI. It does interesting things with documents, and I fully believe there will be some low-fi version of generative AI that plugs along silently once the hyper scalers flee. My problem is with the wasteful large language models — the ones that steal everything, that require so much energy, and create so little in return.
The reason I came up with the calculation that OpenAI needs $5 billion to $10 billion to survive is that I wanted to make it clear how hard things will be without a new technology that blows people's minds — one that I am not confident it has, and one I expect it to start making vague noises about to get ahead of investors who might consider fleeing.
And I wanted everybody to begin thinking about how ridiculous all of this is.
We live in truly incredible times — a boiling point in the tech industry's history where the sins of excess and a lack of vision have crippled big tech's ability to make new things. Things have been too easy, too comfortable, and too frictionless for too long for people like Sundar Pichai, and even for people like Larry Page and Sergey Brin, the only two people who can fire him.
Tech needs to go back to creating interesting and cool things, with companies run by people who actually care about doing so. Technology is capable of solving real problems — or at least it has been in the past — and has proven time and time again how profitable it can be to do so, and I believe it can do just fine without four or five companies doing every single thing that technology can do, all at the same time, in the hope that they can monopolize it.
I don’t think the most powerful people in tech realize how frustrated people are — how deeply affected by technology everybody is, even those who don’t identify as “nerds” or “enthusiasts.” And I think men like Sundar Pichai and Satya Nadella are unaware of how ready people are for a change — how disgusted they are by the cratering quality of the average tech product at a time when tech executives have never been richer.
They know that Google Search sucks, they know that Facebook sucks, they know that websites are worse, and they know, I hope, that there are people responsible.
Sundar Pichai and Satya Nadella must go. They are no longer trustworthy stewards of the future, and have inspired hundreds of billions of dollars of waste in the hearts of their equally-myopic competitors. These men do not care about their customers, they do not face true competition, and the government should break every single hyperscaler up — Microsoft, Google, Meta, Amazon, Apple, and Oracle — and find ways to stop them ever growing so large in the future.
We all deserve better. And the way to get there is to say the names of those responsible, to document their disgraceful acts and call for accountability again and again until something actually changes.
And it’s changing. The narrative is changing. Soon it will shift more aggressively and violently against big tech, and that will be a great thing for the world, and ultimately for the tech industry at large.
None of this is a victory lap for me. Had this farcical bubble burst earlier, it would have caused less harm — to the environment, to the markets (in how brutal the correction against tech will be), and to the freelancers who lost their jobs. It would have wasted less money, taken less attention away from the rest of the industry, and allowed the gruesome Sam Altman to accumulate less money and power. Instead, this bubble has been allowed to swell so large that it will only emphasize how bereft of ideas big tech has become, and the consequence is a growing dissent against tech from the general public.
And this era was ultimately the result of a lack of fear in the hearts of big tech, and a deeply-held belief that their positions were permanent, and their customers trapped.
In documenting this — in what feels at times like narrating the end of the world — I am not looking to be a pundit, or a talking head, or a person who wants to be “given credit for calling the bubble,” as none of these serve you, the customer, or history itself.
What I write may be imperfect, but it exists to try and give you a clear understanding of what’s happening, and yes, it’s emotionally-charged and clear in its biases. I believe that the tech industry has lost its love and curiosity for technology, and I find it disgusting to watch, and the best way to fight back is to studiously explain things as they happen, from my perspective, at length but with — I hope — a clarity that doesn’t require much technical knowledge.
As things have accelerated, I have become more introspective about the role of this newsletter (which is rapidly approaching 40,000 subscribers) and my podcast. I don’t have a particularly profound conclusion, other than that I believe I have a brevity and clarity about the tech industry that many do not, that I am an extremely fast writer, and that I find this all so very interesting. I do not write 8,500 or more words because I “want to do a newsletter.” I write them because I need to. This is important to me.
What you read is me processing watching an industry I deeply care about get ransacked again and again by people who don’t seem to care about technology. The internet made me who I am, connecting me (and in many cases introducing me) to the people I hold dearest to my heart. It let me run a successful PR business despite having a learning disability — dyspraxia, or as it’s called in America, developmental coordination disorder — that makes it difficult for me to write words with a pen, and thrive despite being regularly told in secondary school that I wouldn’t amount to much.
I believe that there are many, many people who have been allowed to live better, fuller, more meaningful lives as a result of technology, and I believe that the people who built the tools that made those lives possible no longer control the future. And that makes me furious.
I refuse to watch this bullshit happen in silence, and I encourage you, as I have before, to fill social media and conversations with the names of those responsible — Satya Nadella, Sundar Pichai, Mark Zuckerberg, Andy Jassy, Sam Altman, Mira Murati, Reid Hoffman, all responsible for the echoing nihilism that created the generative AI boom.
Though there is little you can do to them, I encourage you to let anybody and everybody know who’s responsible. Narratives shift. And they shift because people like you and I don’t shut up.