How Does OpenAI Survive?

Edward Zitron

Throughout the last year I’ve written in detail about the rot in tech — the spuriousness of charlatans looking to accumulate money and power, the desperation of the most powerful executives to maintain control and rapacious growth, and the speciousness of the latest hype cycle — but at the end of the day, these are just companies, which leads to a very simple question: can the largest, most prominent company in tech’s latest hype cycle actually survive? 

I am, of course, talking about OpenAI. Regulars to this newsletter will know that I’m highly skeptical of OpenAI’s product, its business model, and its sustainability. While I don’t want to rehash the arguments made in previous newsletters and podcasts, here’s the crux of the matter: generative AI is a product with no mass-market utility - at least on the scale of truly revolutionary movements like the original cloud computing and smartphone booms - and it’s one that costs an eye-watering amount to build and run. 

Those two factors raise genuine questions about OpenAI’s ability to exist over the medium-to-long term, especially if — or, if I may be so bold to say, when — the sluice of investment money and cloud computing credits dries up.

I don't have all the answers. I don't know every part of every deal that informs every part of every aspect of generative AI. I am neither an engineer nor an economist, nor do I have privileged information. However, I do have the ability to read publicly-available data, as well as evaluate the independent reporting of respected journalists and the opinions of well-informed experts and academics, and come to conclusions as a result.

I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):

  • Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
  • Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
  • Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
  • Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
  • Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.

I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it. Training these models is equally untenable, both because of ongoing legal issues (a consequence of theft) and the sheer amount of training data necessary to develop them.

And, quite simply, any technology requiring hundreds of billions of dollars to prove itself is built upon bad architecture. There is no historical precedent for anything that OpenAI needs to happen. Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.

To be clear, this piece is focused on OpenAI rather than generative AI as a technology — though I believe OpenAI's continued existence is necessary to keep companies interested/invested in the industry at all. OpenAI has raised the most money of any generative AI company ($11.3 billion), has arguably the most attention in the press, and both popularized and created the Large Language Model (LLM) business model that allowed the current generation of "AI-powered" startups to exist.

What I am not saying is that OpenAI will for sure collapse, or that generative AI will definitively fail. I have exhaustively discussed the problems with this industry in the past, and I won't reiterate those points other than to illustrate what I believe is a deep instability in the tech ecosystem. My point here is to coldly explain why OpenAI, in its current form, cannot survive longer than a few more years without a stunning confluence of technological breakthroughs and financial wizardry, some of which is possible, much of which has no historical precedent.

Let's have a look, shall we?

The Microsoft (and Valuation) Problem

Before we continue, I should note that OpenAI and Microsoft have not publicly disclosed the terms of their deals, and there may be things I do not know that change this picture.

Regardless, OpenAI's relationship with Microsoft is deeply strange, starting with a $1 billion cash infusion in 2019, framed as a "multiyear exclusive computing partnership" that "port[ed OpenAI's] existing services to work on Azure" (Microsoft's cloud computing product) and made Microsoft "OpenAI's preferred partner." OpenAI CTO Greg Brockman added at the time that this was a "cash investment," but noted that OpenAI would "plan to be a big Azure customer." 

One particular thing to note is that Brockman stated that Microsoft would get access to sell OpenAI's pre-AGI products based off of [OpenAI's research] to Microsoft's customers, and in the accompanying blog post added that Microsoft and OpenAI were "jointly developing new Azure AI supercomputing technologies." 

Pre-AGI in this case refers to anything OpenAI has ever developed, as it has yet to develop AGI and has yet to get past the initial "chatbot" stage of its own 5-level system of evaluating artificial intelligence.

In essence, the terms of this funding round involved OpenAI handing over the research it had made to Microsoft, along with the ability for Redmond to sell OpenAI's technology as its own under the Azure banner. Furthermore, Microsoft has access to OpenAI's "pre-AGI product research," meaning that it is able to see exactly how it works, which would allow it to both sell the technology and directly compete with it. This is something that Microsoft is already working on, with The Information reporting in May that Microsoft was readying its own "MAI-1" generative model, run by Mustafa Suleyman, the co-founder of DeepMind (acquired by Google in 2014), who left Google in 2022 to become a VC. Suleyman later went on to found Inflection, a transformer-based chatbot company that was sort-of acquired by Microsoft in March, and which Microsoft backed prior to its acquisition.

The Information also noted that the OpenAI partnership "helped Microsoft get ahead of their rivals." I am, again, hypothesizing, but the terms of this deal alone are extremely concerning for OpenAI in general.

Things now get a little confusing.

Apparently, in 2021 Microsoft made some sort of investment in OpenAI (cited here in a 2023 blog post about another funding round I'll get to in a minute). Confusingly, I cannot find many details on this round. Crunchbase references a 2021 secondary market offering of an undisclosed amount (meaning that then-current stockholders at OpenAI were able to liquidate their stock), but cites that the money came from Tiger Global Management, Sequoia Capital, Bedrock and Andreessen Horowitz, valuing the company at $14 billion. 

It's unclear whether this was the same round, or if Microsoft otherwise infused capital. The Information mentions a Microsoft investment in an article from early 2023, but somehow nobody covered it at the time. If I'm wrong, I'd love to see the coverage.

Regardless, in early 2023 Microsoft invested $10 billion in OpenAI — but the most important parts of the deal are both its terms and its distribution. Though the terms of the deal aren't public, reports state that Microsoft may receive 75% of OpenAI's profits until it secures "its investment return" (a clunky way of saying "makes back the $10 billion it invested"), along with a 49% stake in the company, though OpenAI's convoluted non-profit-for-profit structure is strange in and of itself.

Semafor reports that, at least by November 2023, OpenAI had received "a fraction" of the $10 billion investment, which was (is?) delivered in tranches (stages), and that a "significant portion" of that money was in cloud compute credits, meaning that Microsoft's investment was predominantly in the supposed value of a currency that can only be used on its own services. For those who don’t fully understand how weird this is, it’s like an airline investing in a company but, instead of providing cash, it hands over air miles. You can still travel, but you are locked into A) one airline and B) their interpretation of what one “mile” is actually worth.

Furthermore, it’s extremely bizarre that said “investment” also may have - again, we don’t know the terms of the deal - allowed OpenAI to raise its valuation in the process. 

Semafor also adds (vaguely) that Microsoft has "certain rights to OpenAI's intellectual property," and that it "would still be able to run OpenAI's current models on [its] servers" even if the relationship were to break down.

What's confusing is that multiple reporters have said that Microsoft had invested "$13 billion" in OpenAI, yet I can't find the two billion dollars anywhere. Was it in 2021? Was the amount $2 billion? Was it cash, or credits? This number has been reported so regularly for so long, and it's extremely strange that so many have just assumed this happened.

The reason I think this is concerning is that two billion dollars is a great deal of money by any startup's standards.

For example, Snowflake, a wildly-successful enterprise computing company, raised a total of $2 billion, mostly before going public (though it sold $621.5 million in stock post-IPO in 2022). Also, Snowflake lost $316 million last quarter.

On top of that, this deal was totally unannounced and unreported, happening in the same year (2021) that Microsoft and OpenAI announced their Azure OpenAI services (which took until 2023 to launch publicly). While I am guessing here — if I'm wrong, please email me — it seems that Microsoft gave another $2 billion to OpenAI in 2021 at the same time that OpenAI raised an undisclosed amount from other sources. I hypothesize this deal is likely to have been a mixture of cash and cloud credits, though ChatGPT didn't reach the public until November 2022. 

It could also be that those reporting that Microsoft had put "$13 billion" into OpenAI are simply wrong, but that would be a remarkably common error, and one that Microsoft or OpenAI would likely have quickly refuted.

The reason that I'm raising these issues is that Microsoft, at this point, effectively owns OpenAI. Microsoft CEO Satya Nadella was instrumental in Altman's return after he was fired in November 2023, and had already planned to poach him if he hadn't returned. In many ways, this didn't really matter. Due to the nature of Microsoft's deal with OpenAI, it effectively owns — or at least has access to — all of the intellectual property behind OpenAI's products, along with all of the research, as well as the ability to license it at will.

OpenAI is inextricably tied to Microsoft. It is bound to use Azure, Microsoft's cloud compute platform, both by its agreements and the fact that the majority of its funding is in credits that can only be used on Microsoft Azure. Microsoft sells access to GPT through Azure, while also directly competing with it with its own upcoming model, and takes three-quarters of any of the (theoretical) profit that would come out of OpenAI's services.

Microsoft has also not had to really sacrifice anything to do these deals. Even if we assume that the entirety of the previous rounds of funding — a theoretical $3 billion — was in cash, and that, say, 25% of the 2023 deal was in cash versus credits, that's only $5.5 billion, with the latter half delivered in tranches on an indeterminate timeline. Assuming (again, I do not have the exact terms) that this was really $5.5 billion in cash, that's nothing for Microsoft, a company that had over $21 billion in profits in its most recent financial quarter. 
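To make that arithmetic explicit, here's a minimal sketch of the estimate. Every input is an assumption drawn from the reporting above (the cash/credit split and the earlier round totals are guesses, not disclosed terms):

```python
# Back-of-the-envelope estimate of Microsoft's actual cash outlay to OpenAI.
# All figures are assumptions from public reporting, not disclosed deal terms.
earlier_rounds = 3.0      # $bn: 2019 round plus the hypothesized 2021 round, assumed all cash
deal_2023 = 10.0          # $bn: the reported 2023 investment
cash_share_2023 = 0.25    # assumed fraction of the 2023 deal paid in cash rather than credits

estimated_cash = earlier_rounds + deal_2023 * cash_share_2023
print(f"Estimated cash outlay: ${estimated_cash:.1f}bn")  # Estimated cash outlay: $5.5bn
```

Even this generous reading puts Microsoft's cash exposure at a quarter of one quarter's profit.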

As part of this deal, Microsoft has effectively purchased the rights to OpenAI's "pre-AGI" technology, and licensed all of its technology in a way that extends past any partnership or, I imagine, future deals. Microsoft also "invested" in cloud credits at an indeterminate valuation, indeterminate both in how OpenAI was valued and in what the credits themselves are worth. 

Ask yourself: what is a dollar of "cloud compute credits," and what does it gain you access to? Microsoft's Azure cloud has many, many products, and it's unclear if OpenAI would receive preferential pricing on them, what products they'd be using, and the terms under which OpenAI receives them.

Microsoft effectively created its own currency to invest in OpenAI, which OpenAI would then pay Microsoft in, which Microsoft would, in turn, receive as revenue.

In many ways, OpenAI's continual existence is as an R&D facility for Microsoft's generative AI business unit, one with the dice rigged in Microsoft's favor. In the event of OpenAI's collapse, OpenAI's technology would still run on Microsoft's servers, and Microsoft would still have access to both OpenAI's intellectual property and products, and in turn be able to sell them. In the event that OpenAI thrives and future generations of GPT become remarkably profitable and successful, Microsoft harvests billions of dollars of profits while still retaining access and license to any research or products used to get there. Even Microsoft's $100 billion supercomputer project is reportedly tied to Altman and OpenAI "meaningfully improving" the capabilities of its AI, according to sources talking to The Information.

I am obviously not certain, and have no way of confirming this, but do you not think “pre-AGI” technology and research includes SearchGPT, OpenAI’s recently-announced competitor to Google Search? Is it worth considering, for a second, that Microsoft could benefit whether OpenAI lives or dies? 

It's a devil's deal, one that you would only make if you were burning so much cash that it was necessary to find a benefactor with deep pockets, one that could bail you out repeatedly as you chewed through billions of dollars every year.

Sadly, that may be the truth for OpenAI.

The Funding Problem

Last week, The Information reported that OpenAI could burn as much as $5 billion in 2024, based on "previously undisclosed internal financial data and people involved in the business."

The piece makes several informed estimates (and I encourage you to pay for The Information for this article alone) that I am going to draw upon. While it’s possible that these estimates may be wrong, or that the data they were based on was misleading or incorrect, I trust the Information’s analysis and the rigor of its reporting:

  • "OpenAI as of March [2024] was on track to spend nearly $4 billion this year on renting Microsoft's servers to power ChatGPT and its underlying LLMs," sourced to a "person with direct knowledge of the spending."
  • "OpenAI's training costs — including paying for the data — could balloon to as much as $3 billion this year."
    • As a note, training costs are not simply getting the data, but cleaning and preparing it — a laborious task — and then using massive amounts of cloud compute to train the model using it.
  • The Information "guesstimates" that OpenAI's 1,500-person (and growing) workforce could cost around $1.5 billion a year. While that sounds a little high — especially considering that figure works out to $1m per person — it’s actually quite plausible. Top AI talent is extremely, extremely expensive, and seven-figure salaries are far from unusual. You then have to factor in things like office space, payroll taxes, equipment, and other operational costs.
  • The Information also estimates that OpenAI has somewhere between $3.5 billion and $4.5 billion in revenue, combining both ChatGPT and charging developers to access OpenAI's APIs to integrate generative functions.

The Information surmises that OpenAI thus has an operating loss of $5 billion a year. That also assumes that OpenAI's revenue is on the higher-end, and could balloon to $6 billion or more.
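Stacking The Information's estimates up gives a rough sense of how that loss is arrived at. Every input below is their reported figure, not a confirmed number:

```python
# The Information's 2024 cost and revenue estimates for OpenAI, in $bn.
# All inputs are their reported estimates, none of them confirmed by OpenAI.
compute_serving = 4.0    # renting Microsoft's servers to run ChatGPT and its underlying LLMs
training = 3.0           # training costs, including paying for data
payroll = 1.5            # ~1,500 staff at roughly $1m per head, fully loaded

revenue_low, revenue_high = 3.5, 4.5  # combined ChatGPT and API revenue estimate

total_costs = compute_serving + training + payroll
loss_range = (total_costs - revenue_high, total_costs - revenue_low)
print(f"Total estimated costs: ${total_costs:.1f}bn")                  # $8.5bn
print(f"Implied operating loss: ${loss_range[0]:.1f}-{loss_range[1]:.1f}bn")  # $4.0-5.0bn
```

In other words, even if every estimate lands favorably for OpenAI, the implied loss is still billions of dollars a year.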

Though we don't have direct knowledge, OpenAI's operating costs have continued to rapidly increase over time. An estimate from early 2023 suggested that it cost $700,000 a day to run ChatGPT at a time when it was popular but not as popular as it is today, which would put ChatGPT's costs alone at around $255.5 million a year. I would also hypothesize that its costs were much larger, based on OpenAI having raised over $13 billion in the last five years, with the majority of the capital (and credit) raises happening between 2021 and 2023. 

OpenAI has, based on reporting, historically failed to make its models more efficient, failing to deliver its more efficient "Arrakis" model to Microsoft in late 2023. While the recent launch of its GPT-4o mini model has been hailed as an "efficiency" play, it appears to only be more efficient and cost-effective for developers building on OpenAI's tools. One could posit that this suggests OpenAI found a more efficient, cost-effective model and priced it accordingly, but the company has yet to confirm as much.

Assuming everything exists in a vacuum, OpenAI needs at least $5 billion in new capital a year to survive. This would require it to raise more money than has ever been raised by any startup in history, possibly in perpetuity, which would in turn require it to access capital at a scale that I can find no comparable company to in business history.

WeWork — the decrepit failson of Silicon Valley — raised a total of $22.2 billion, with nearly half of it (over $10 billion) raised in debt financing (loans with varying terms) from Goldman Sachs and SoftBank's Vision Fund, some of which it had to restructure as the company collapsed. Much of WeWork's capital was raised during a time with lower interest rates and thus much more available money, and heavily relied on SoftBank's continual willingness to dump cash into a fire. WeWork also had a far more reasonable — though extremely stupid — business model, one that a lot of financiers could get their head around, and thus was able to appeal to a much wider investment market.

A more reasonable comparable would be CoreWeave, a company that gives other companies access to (and build-outs of) the massive graphics processing unit (GPU) clusters that power AI applications. CoreWeave has raised a total of $12.1 billion, with — you guessed it — the majority of it raised in a "debt financing facility" offered by asset management firms Blackstone and Magnetar that allows it to draw upon cash at an undisclosed (but likely lower) interest rate. CoreWeave has raised at a $19 billion valuation, and unlike OpenAI, its services are relatively straightforward: if you need a bunch of compute, CoreWeave will either build it for you, or lease it to you.

Historically, according to data provided by Crunchbase, the largest funding round of the last decade was $14 billion, raised by Ant Group in 2018, followed by Juul, also in 2018, when it raised $12.8 billion. Otherwise, OpenAI dominates.

As another aside, Uber, a company famed for burning $25 billion to achieve profitability, raised a total of... well, $25 billion, which included four different funding rounds in 2018 alone.

And even then it was more profitable than OpenAI, other than in 2020, when it lost $6.7 billion — around $1.7 billion more than OpenAI might lose this year — because people weren’t going anywhere. 

On top of that — and yes, I am directly responding to The Information's "is OpenAI a good business?" piece, the central argument of which is that OpenAI also needs to burn a bunch of money — Uber immediately had a use case people understood, and immediately generated revenue by devouring the taxi monopolies, subsidized heavily by venture capital.

Conversely, OpenAI has devoured no monopoly, and the product category it created — the only one it is really part of — is one entirely subsidized by venture capital and Microsoft. What OpenAI is offering is entirely hype-driven, hard to explain to the layman, and part of a movement driven by a lack of other hypergrowth markets. 

Uber was priced to replace a monopoly, and one that most people hated. Taxis were expensive, inconvenient, artificially scarce (especially in cities like New York, which limited the supply of taxis through its medallion system, one that relied on exploitative loan schemes), and seldom accepted credit cards. Worse, discrimination by drivers against riders from minority backgrounds was rife and unchallenged. While we can dislike Uber as a company and criticize its business practices, you can’t deny it had an objective appeal from the outset. 

By contrast, OpenAI created an industry-wide FOMO psychosis, and has profited heavily from it, but explaining what ChatGPT is to a layman is possible yet convoluted in a way that explaining Uber never was.

I should also add that Uber used the media as a means of laundering its reputation, specifically (and I quote Edward Ongweso Jr.) to convince people to “view its growth as progressive, not parasitic.” It’s important to bring attention to OpenAI and Sam Altman’s attempts to create a narrative promising things the company has no way of delivering, and even more important not to find ways to explain away how unsustainable OpenAI is. It is fundamentally not the media’s job to convince the world that OpenAI is a stable company with great things ahead — that’s OpenAI’s job.

While I am not sure it’s appropriate to say that I’m a “member of the media,” of the two of us, I think that the PR guy with a part-time newsletter and a podcast should never be the one who’s more willing to be critical.

As of its last round of funding — a secondary market offering (meaning insiders can sell their stock to VCs) from February of this year — OpenAI is "valued" at $80 billion. I say "valued" because Microsoft's investment (which likely increased the valuation of the company, though I can't confirm) was predominantly in funny money (cloud credits) rather than any actual "investment" that would in turn justify a "valuation."

Furthermore, debt financing is usually a little harder to get, with onerous cash-heavy terms that can eat a company alive in a bad month.

Comparables at this scale are few and far between. Based on data from CBInsights, the only private companies that compete are TikTok developer ByteDance ($225 billion, raised $9.5bn) and SpaceX ($150 billion, raised $9.8bn), with Stripe ($70 billion (though I've seen $65 billion), raised $9.4bn), Shein ($66 billion, raised $4.1bn) and Databricks ($43 billion, raised $4bn) just behind.

In all of these cases, the companies in question make real money and have real business models. ByteDance (which owns TikTok, as well as several other companies in China) made $120 billion in revenue in 2023, and its services are used by hundreds of millions of people. SpaceX, while unprofitable, is (for better or worse) effectively tied to the US government as a necessary contractor, and has succeeded in making billions of dollars while reducing its operating costs. Stripe is one of the most well-respected payments companies in the world, makes billions of dollars, has extremely useful services, and is "robustly cash flow positive." Shein, while a horrible company built on exploitation, makes over $30 billion a year and has $2 billion in profit selling stuff. Databricks, a boring-yet-useful data intelligence company, reported in April that it had reached $1.6 billion in revenue in a quarter, and has yet to achieve profitability — making it the odd man out, and possibly the closest comparable to OpenAI.

However, one consistent difference with these companies is that they've proven market viability. While Databricks may be an indeterminate level of unprofitable, it has been raising for longer (since its Series A in 2013), and while it’s raised a lot, OpenAI has had to raise more, in a shorter period of time, and will have to raise again soon. 

SpaceX, which makes rockets that sometimes explode, still makes rockets and satellites that provide people with internet access (and Starlink makes billions in revenue) — which isn't an endorsement of Musk so much as it is a differentiator. Stripe has raised in large chunks, but over a longer timeline and also provides a very obvious and useful product. And I don't have to explain why TikTok or Douyin are important considering they're both some of the largest social networks in the world.

And in all cases, they've raised less money than OpenAI has to date, though with the caveat that OpenAI's $10 billion round was mostly in cloud credits.

Yet that actually raises a much thornier question: is OpenAI capable of raising this much money? When Stripe raised $6.5 billion in 2023, it dropped its valuation to $50 billion, which it got back to $65 billion in 2024 via a $694 million tender offer. Again, Stripe is bordering on an essential service, used across wide swaths of the internet to help people buy stuff. OpenAI competitor Anthropic has also had to raise over $7 billion since 2021 (including billions from Amazon and Google that could also be in cloud credits) — and The Information reports it could burn $2.7 billion in 2024 on $800 million in revenue that it has to share with Amazon.

What I'm trying to establish is that OpenAI would have to, at its current pace:

  • Raise more money than anyone ever has before — likely at least $3 billion, but more like $10 billion, and do so soon, likely within the next six months.
  • Raise either multiple rounds, or the largest funding round ever raised by any company, and then have to keep doing so in perpetuity.
  • Raise at either a massive down-round — by taking on more money at a reduced valuation — or raise at a valuation higher than any privately-held company ever has.

In all of these cases, OpenAI would have to show investors how it intends to grow revenue and reduce costs, and do so in a way that reassures them OpenAI will not simply return asking for more capital in a few months. It would also likely have to amend its corporate structure, as Sam Altman has suggested it might.

This isn't impossible, but it feels extremely unlikely that OpenAI would be able to do this for more than a few years. Reports suggest that OpenAI is nowhere near Artificial General Intelligence, and while Altman could potentially raise another round from venture capitalists desperate to get aboard the company, doing so would risk exposing how bad the burn rate truly is.

If we assume that OpenAI's secondary market round from February 2024 was the best-case scenario — $5 billion in cash, and I'd guess it was less, we truly have no idea — the company still needs another lifeline within the next 12 months.
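The runway math here is crude but illustrative. Both inputs are guesses, as noted: neither the cash actually raised nor the real burn rate is public:

```python
# Crude runway estimate: months until the money runs out.
# Both inputs are guesses, since neither figure has been disclosed.
cash_on_hand = 5.0    # $bn: best-case read of the February 2024 secondary round
annual_burn = 5.0     # $bn/year: The Information's estimated operating loss

runway_months = cash_on_hand / annual_burn * 12
print(f"Runway: ~{runway_months:.0f} months")  # Runway: ~12 months
```

If the cash raised was lower, or the burn higher, that runway shrinks proportionally.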

In a more realistic case, I believe OpenAI has to raise within the next three to six months, meaning it will be raising funds after at least one more quarter of reported earnings from Microsoft, the next of which arrives on July 30.

Assuming a burn rate of even $3 billion a year — which would require a remarkable reduction in costs — OpenAI would still have to raise more capital than anyone has ever raised for as long as it takes to either increase revenues or reduce costs. It's extremely concerning, and equally unsustainable.

The Revenue, Cost and Market-Fit Problem

As I've written repeatedly, generative AI is deeply unprofitable, and based on the Information's estimates, the cost of goods sold is unsustainable.

OpenAI's costs have only increased over time, the cost of making these models "better" is only increasing, and the models have yet to, to paraphrase Goldman Sachs' Jim Covello, solve the kind of complex problems that would justify their cost. "Better" is also somewhat of a misnomer — a "better" version of ChatGPT may be faster or more accurate, but it is not able to do significantly more. Since November 2022, ChatGPT has grown more sophisticated, faster at generating output, and capable of ingesting more data, but it has yet to produce a true "killer app," an iPhone-esque moment.

Furthermore, transformer-based models have become heavily commoditized, with competition from independent(ish) companies like Anthropic (Claude) and Meta (Llama), all trained on the same massive datasets, to the point that ChatGPT's biggest advantage is its brand. As a result, we're already seeing a race to the bottom, with GPT-4o mini (OpenAI's "cheaper" model) already beaten on price by Anthropic's Claude Haiku model, and I am confident somebody is already working on a similarly powerful model that they'll sell for even cheaper.

As a result, OpenAI's revenue might climb, but it's likely to climb by reducing the price of its services rather than its own operating costs. OpenAI appears to be operating on the standard Silicon Valley monopoly model — get as many customers as possible and then work out how to become profitable — but is doing so using a technology that is uniquely expensive to both operate and iterate upon.

As discussed previously, OpenAI — like every single transformer-based model developer — requires masses of training data to make its models "better," and the next generation of GPT will require four to five times the amount of training data GPT-4 needed, at a time when publishers and the wider internet have found numerous ways to block companies from taking it.

Doing so is also likely going to lead to perpetual legal action, especially as 404 Media reports that Runway, a generative video company, likely trained its model on thousands of hours of videos taken from YouTube and pirated sources. OpenAI has been incredibly evasive when asked if it trained its "Sora" model on YouTube videos, and if I had to guess, it absolutely has. If it hasn't, it will likely be required to buy tens of thousands — if not millions — of hours of footage, which will be multitudes more expensive than the $250 million it paid to News Corp to train on its articles.

And, to be abundantly clear, I am not sure there is enough training data in existence to get these models past the next generation. Even if generative AI companies were able to legally and freely download every single piece of text and visual media from the internet, it doesn't appear to be enough to train these models, with some model developers potentially turning to model-generated "synthetic" data — a process that could introduce "model collapse," a form of inbreeding that Jathan Sadowski called "Habsburg AI" that destroys the models over time.

Even if they were successful in somehow acquiring this much training data, and doing so in a way that was legally sound (which, to be clear, I do not think is possible), they would then face the increasing costs of training these models. Anthropic CEO Dario Amodei recently said on a podcast that "AI models that cost $1 billion to train are underway," and that there are ones in the future that will cost $100 billion. As models become more complex and require more training data, so too grows the cost of ingesting that larger (and likely more complex) training data.

And then there's the very big, annoying problem — that generative AI doesn't have a product-market fit at the scale necessary to support its existence.

To be clear, I am not saying generative AI is completely useless, or that it hasn't got any product-market fit. It's useful for digging through massive datasets, quick summaries of articles (for better or worse), and generating images. These models have utility, despite their propensity to "hallucinate" (authoritatively state something that isn't true, or generate hands with too many fingers), and people are finding useful things for them to do, particularly in finance.

But what they are not, at this time, is essential. 

Generative AI has yet to provide a reason that you absolutely must integrate it, other than the sense that your company is "behind" if you don't use AI. This wouldn't be a problem if generative AI's operating costs were a minuscule fraction of what they are today, but as things stand, OpenAI is effectively subsidizing the generative AI movement, all while dealing with the problem that, while cool and useful, GPT is only changing the world as much as the markets allow it to.

While complex, generative AI is a technology that probabilistically generates answers, and has no "intelligence." It is inherently limited by its architecture, and in turn can only get "better" in a linear fashion. I see no signs that the transformer-based architecture can do significantly more than it currently does.

For OpenAI to continue growing, it will either have to significantly increase functionality — something it has yet to do, but is theoretically possible — or vastly reduce pricing, which will only increase its operating costs.

And OpenAI must grow, because $3.5 billion to $4.5 billion a year in revenue is simply not enough to keep this company going. This isn't about any personal beliefs I have about generative AI. It's about the fact that this company costs more money to run than any other privately-held startup, and its technology does not — based on OpenAI’s sales — do enough to make up for the fact that it costs so much money.

Outside of reducing prices, which would increase revenues and operating costs, OpenAI could, theoretically, find new functionality in GPT — though I'm not sure how — or create something entirely different, something it has yet to show any sign of doing. While you could point to tools like Sora (which doesn't seem particularly useful, and is still far from commercialization), or SearchGPT (which would have the same hallucinatory issues that dogged Google Search's own pivot to AI, while also competing against the GPT-enabled Bing), it's tough to make the case that these products will fill the burning shortfall in OpenAI's balance sheet, and they would likely only add to its operational costs.

In July, Reuters reported that OpenAI was "working on a new technology called Strawberry" that would be "capable of delivering advanced reasoning capabilities." However, on a deeper read of the article, OpenAI is still only contemplating and attempting this, and has yet to actually do it.

If it succeeds — which, I would add, would potentially require entirely new branches of psychology and mathematics, as we humans barely understand our own brains — that would be a huge technological achievement that still wouldn't turn things around on its own. OpenAI would own a completely new kind of technology, which would be immensely valuable, and it could potentially raise money in perpetuity on the strength of it, but its value would be heavily dependent on the level of reasoning and the accompanying tasks it could achieve.

And, to be clear, it is very, very unlikely this happens. It could — and there is always stuff I might not know about OpenAI's research and development — but we have seen little sign of OpenAI innovating, and far more signs that it’s only capable at this time of iterating on GPT.

OpenAI also has a problem with its marketing. Sam Altman has repeatedly misled the media about what "AI might do," conflating generative AI — which does not "know" things and is not "intelligence" — with the purely-theoretical concept of an autonomous, sentient artificial intelligence. As a result, expectations are higher of what future generations of GPT might do, making it inevitable that the company will disappoint investors and customers.

While there may be ways to reduce the costs of transformer-based models, the level of cost-reduction would be unprecedented, and likely require entirely new chips, cooling solutions and physical server architecture, none of which OpenAI develops. 

While theoretically Nvidia could produce a much-more-efficient chip, doing so would likely take longer than OpenAI has left. Though there are companies like Etched that claim they are working on specialized chips, they are years from delivering any working silicon at the scale that OpenAI would need, and said chips are focused on singular models, making them iterative rather than innovative concepts.

One thing I am not discussing at length is the fact that there doesn't seem to be general-purpose adoption of generative AI. These numbers are hard to establish, but what I have previously established — in particular based on Goldman Sachs' reporting — is that actual meaningful revenue has yet to materialize. This is a bigger existential threat than a lack of adoption: it means people are using it and not getting enough out of it, which could lead to a significant loss of revenue as the hype cycle dies down.

To summarize:

  • OpenAI's only real options are to reduce costs or the price of its offerings. It has not succeeded in reducing costs so far, and reducing prices would only increase costs.
  • To progress to the next models of GPT, OpenAI's core product, the company would have to find new functionality.
  • OpenAI is inherently limited by GPT's transformer-based architecture, which does not actually automate things, and as a result may only be able to do "more" and "faster," which does not significantly change the product, at least not in such a way that would make it as valuable as it needs to be.
  • OpenAI's only other option is to invent an entirely new kind of technology, and be able to productize and monetize said technology, something that the company has not yet been able to do.

A Note On Energy: Most of the problems I've listed are existential threats to the future of OpenAI, ones that I can see no quick or easy way out of, but another stands in the way — energy. 

For OpenAI to scale, it would require a massive capital expenditure on multiple levels, chief among them the American power grid (see page 15 of this Goldman Sachs report for a conversation with Microsoft's former VP of energy), which will likely require expansion the likes of which hasn't happened in decades, at a time when America is far less adept at infrastructure development.

While the US steadily added new electricity generation capacity in the second half of the 1900s, things started to plateau in the 2010s. This plateau has several causes. Electricity consumption has remained flat or decreased slightly across both households and businesses. And while the US has added capacity, particularly renewables and natural gas, that new capacity isn't increasing the amount of electricity generation available so much as offsetting the decommissioning of coal-fired power plants.

Scaling AI would require an investment in power generation that would be equivalent in ambition to the New Deal, or Eisenhower's Interstate Highway System, and it would need to happen quickly. That's something that doesn't happen in the power-generation world. For context, in 2021 it took an average of 2.8 years for a new solar farm to be connected to the electrical grid. Two years later, that time rose to four years. Small modular reactors — a promising approach designed to reduce the cost and build times of nuclear power generation — are still far from mass-commercialization, and even if they weren't, they'd still have to contend with the bureaucracy of the sector.

Even if changing this were possible — and it'd be good for society if it was — artificial intelligence (driven by generative AI) is already massively increasing global emissions, particularly from companies like Google, which saw its emissions increase by 48% in the last five years thanks to AI.

For OpenAI to continue scaling, it is reliant on a dramatic expansion of the power grid, at a time when (according to Brian Janous of Cloverleaf Infrastructure) the wait times are ranging from 40-70 months to spin up new power projects. And OpenAI isn't the company doing the scaling — much like it’s dependent on Nvidia to continue to produce GPUs for generative AI's cloud compute, so too is it dependent on companies like Microsoft, Google, and Oracle working with power companies to expand the grid.

A Grim Situation

Absolutely nothing about what I've written here is based on personal grievance, or a dislike of generative AI, or really anything other than a frank evaluation of a company that I believe may be teetering on the brink of collapse.

For OpenAI to continue operating, things have to change dramatically.

  • In the event that OpenAI is making the highest end of its reported revenue range — $4.5 billion — and it has that much in the bank as we speak, it will have to raise at least $2 billion, or as much as $10 billion, within the next 12 months.
  • In the event that it has less than a billion in the bank, OpenAI will likely have to raise $5 billion, and do so in the next three to six months.
  • Otherwise, OpenAI will either have to at least halve its operating costs while maintaining the current pace of revenue, or find a way to literally double its revenue while keeping costs at the same level. Even then, these numbers are extremely concerning — though I'll add there are always things I don't know, as OpenAI is a private company and thus isn't subject to the same disclosure rules as publicly-traded companies.
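To make the arithmetic behind these scenarios concrete, here's a back-of-envelope runway calculation. The revenue figure is the high end of the range quoted above; the annual cost figure is a purely hypothetical assumption for illustration — swap in your own estimates.

```python
def runway_months(cash_billions, annual_revenue_billions, annual_costs_billions):
    """Months of runway given cash on hand and annual net burn."""
    burn = annual_costs_billions - annual_revenue_billions
    if burn <= 0:
        return float("inf")  # break-even or profitable: no runway limit
    return cash_billions / burn * 12

# High-end revenue ($4.5B) and an assumed, hypothetical $8.5B in annual costs:
# net burn of $4B/year. With $4.5B in the bank, that's roughly 13.5 months —
# consistent with needing to raise billions within 12 months.
print(round(runway_months(4.5, 4.5, 8.5), 1))  # 13.5

# With under $1B in the bank, runway shrinks to roughly a quarter.
print(round(runway_months(1.0, 4.5, 8.5), 1))  # 3.0
```

The point of the sketch is that the raise-size scenarios above follow directly from whatever cost assumption you plug in: the bigger the gap between costs and revenue, the sooner and larger the raise.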

Survival would require OpenAI to raise more money than anybody ever has. This is technically possible, but would require venture capitalists and investment banks to effectively provide a lifeline to the company in perpetuity, unless the company is capable of either heavily reducing costs or finding billions of dollars more in revenue. Even if it succeeds, if the cost of revenue increases along with sales, all growth would be for nought, and create further problems and dependencies on venture capital — or on companies like Microsoft.

It would require a way of expanding what OpenAI can sell to address the entirety of corporate America, which would require use cases that I do not believe generative AI is capable of meeting, like automating chunks of the economy rather than bankrupting freelance designers and copy-editors.

To be clear, I am not advocating for workers being replaced by AI. I am simply saying that for OpenAI to grow to the $10+ billion revenue a year it needs to survive, it would need to replace entire chunks of the labor force. And as a reminder, generative AI is not automation.

Even if these problems are surmountable, there is simply not enough training data, and even if there were, the cost of processing it will likely vastly outweigh whatever revenue OpenAI makes. Even if this were resolved — which would likely require $30 billion in, at best, cloud credits — doing so would not necessarily make transformer-based models capable of doing what it takes to sell $10 billion of software a year.

If I had to just choose a number, I hypothesize that OpenAI needs to raise $20 billion in the next two years to even stay in the game, and to get any further — something that isn't guaranteed — will cost it another $20 billion. For context, according to Crunchbase, the aggregate of all startup funding in 2023 was $299.2 billion, with $147 billion raised so far in 2024.

OpenAI would have to regularly make up 5-10% of all startup funding, forever, or at least until it works out how to lose less or make more money. 
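As a sanity check on that share, here's the arithmetic using the Crunchbase totals quoted above. The even-spread cadence and the full-year extrapolation for 2024 are my assumptions, not figures from the reporting.

```python
# Global startup funding totals cited above (Crunchbase, in $B).
total_2023 = 299.2
h1_2024 = 147.0
annualized_2024 = h1_2024 * 2  # naive full-year extrapolation: $294B

# Hypothesized OpenAI need: $20B over two years, then $20B more.
openai_per_year = 20 / 2  # $10B/year if spread evenly

print(f"{openai_per_year / total_2023:.1%}")       # 3.3% of 2023 funding
print(f"{openai_per_year / annualized_2024:.1%}")  # 3.4% of annualized 2024

# A single $20B raise landing in one ~$300B year would be closer to 6-7%.
print(f"{20 / total_2023:.1%}")  # 6.7%
```

Whether you treat the need as $10B every year or $20B in a lump, OpenAI alone would absorb a mid-single-digit share of all startup funding worldwide, year after year.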

The only other company that has done so is Uber — and as I've discussed above, its situation is very, very different. Comparing the two is ahistorical on the funding climate alone, with Uber existing at a time of lower interest rates. Along with $3.5 billion from Saudi Arabia's Public Investment Fund and over $8 billion from SoftBank, the latter a secondary market sale, it also raised equity and debt financing from Goldman Sachs and Morgan Stanley respectively — two parties I do not believe are going to be willing to subsidize generative AI.

Even if I'm wrong — which I could be; stranger things have happened — the willingness and ease of getting people to hand over hundreds of millions or billions of dollars changed markedly between Uber's last funding round (September 14, 2020) and 2024. The majority of its funding was raised between 2016 and 2019, too, when interest rates were low and, thus, VC coffers were overflowing.

And, crucially, investors had an incredibly clear path to liquidity — an IPO. Uber was always a dog of a company, but investors always knew it'd chug along as a growth-at-all-costs monster on the market. How would OpenAI IPO? Its most recent round was a secondary market raise, selling insider stock to new investors.

If we humor this idea as the ultimate goal of pumping this company full of money, what are its plans to go public? Altman said as recently as June 2023 that the company's structure would prohibit an IPO, but do you think OpenAI wants to subject itself to the scrutiny of the public markets, especially given its approach to copyrighted material?

Microsoft has a built-in profit share, but what of Sequoia? How does it get paid, other than finding another investor who wants the stock?

Perhaps there's something I'm missing, but if investors have no path to an IPO, this feels like a game of hot potato, except the loser is left with a big, useless stock. Unless, of course, that investor is Microsoft, which will end up being able to use any leftover tech, and will benefit if OpenAI becomes profitable.

Furthermore, I hypothesize a race to the bottom in generative AI will significantly hamper OpenAI's ability to expand revenue, compounded by the fact that we're approaching the limits of transformer-based architecture.

And because OpenAI (and the competition) are so deep in the hole with transformer-based models, I believe they will continue to drive billions into them, burning money on training them using data that may or may not have been legally acquired. Any lawsuit that goes in favor of the plaintiff would have potentially apocalyptic consequences for these models, requiring them to be retrained from scratch on an entirely new dataset, costing further hundreds of millions or billions of dollars.

And, quite frankly, I am just not sure how OpenAI will make this all work.

While OpenAI could — and I believe will — raise another huge, industry-defining round, doing so will require pulling in Canadian pension funds, Saudi sovereign wealth funds, or massive investment funds that would require swaths of equity. Doing so multiple times is possible, but very, very unlikely, in the same way that continuing to increase revenue at hundreds of percent each year is possible, but very, very unlikely. It could come up with something new, or find a way to make generative AI much cheaper, but again, that is so very, very unlikely.

OpenAI could be the most important tech company of all time, in that it will have to devour the very Gods of Silicon Valley to continue its rapacious growth. In doing so, it will break records in funding and software revenue, and do so while warding off competition from startups — as well as its own investor Microsoft.

I don't know how OpenAI does it. Writing this piece took me hours, and in doing so, I genuinely tried to work out how OpenAI survives, and every single corner I turned ended with "it can do it if it does something that has never happened."

I have written this with a dispassionate tone because I need people to take me seriously when I say that, to survive, generative AI must do so much more than it currently does, and do so far more cheaply. To survive, OpenAI must break every startup record known to man, and to thrive, it must both reinvent the transformer-based architecture to reduce its compute requirements and invent an entirely new kind of artificial intelligence to do the things that people want AI to do.

Perhaps I'm wrong. Perhaps there are things that I don't know — about OpenAI, about the things it's working on in secret, about some sort of energy or chip breakthrough that will arrive so suddenly that I will eat crow. Perhaps it has more money in the bank than I thought, or perhaps its costs are currently inflated in a way that I — and the entirety of the tech media — am unaware of.

But if there were, I believe we would know, or at least have some sort of sign.

I don't know what happens next, but I do know things have to change. I fear OpenAI will compete on price, sending its costs upward, or charge what it needs to in order to approach break-even, which I don't think it's willing to do. I fear for Anthropic, which has less money, less revenue, and an equally gruesome burn rate. I fear for the founders relying on current pricing for GPT and other models.

Without OpenAI, the bottom drops out of the entire generative AI market, and its collapse will more than likely brutalize any public stock associated with the generative AI boom.

I recognize that, reading this, you might dismiss me as a cynic, or a pessimist, or as someone rooting for the end, but I have taken great pains to explain my hypotheses here in detail without much opinion or editorializing. If you disagree with me, tell me how I'm wrong — explain what I've missed; show me the holes in my logic or my math.

I caution you not to dismiss me, even if what I’ve written deeply upsets you. What I am describing here is a deeply unsustainable company that many have pinned their hopes on, and even if I’m wrong, this kind of analysis is necessary when there is a company burning billions of dollars that will likely attempt to absorb billions more dollars of capital to survive. 

This discussion is equal parts necessary and troubling, uncomfortable in its implications and possibilities, and unsettling in its potential outcomes. 

I hope I'm wrong. I really do. 
