Premium: The Hater's Guide to Anthropic

Edward Zitron

In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single-most-annoying Large Language Model company of all time. 

Pardon me, sorry, I mean safest, because that’s the reason Amodei and his crew claimed they left OpenAI:

Dario Amodei: Yeah. So there was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things. I think even more so than most people there. One was the idea that if you pour more compute into these models, they'll get better and better and that there's almost no end to this. I think this is much more widely accepted now. But, you know, I think we were among the first believers in it. And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety. You don't tell the models what their values are just by pouring more compute into them. And so there were a set of people who believed in those two ideas. We really trusted each other and wanted to work together. And so we went off and started our own company with that idea in mind.

I’m also being a little sarcastic. Anthropic, a “public benefit corporation” (a company that is quasi-legally required to sometimes sort of focus on goals that aren’t profit driven, and in this case, one that chose to incorporate in Delaware as opposed to California, where it would have actual obligations), is the only meaningful competitor to OpenAI. It went from (allegedly) making about $116 million in March 2025 to making $1.16 billion in February 2026, the very same month it raised $30 billion from thirty-seven different investors, including a “partial” investment from NVIDIA and Microsoft announced in November 2025 that was meant to be “up to” $15 billion. 

Anthropic’s models regularly dominate the various LLM model leaderboards, and its Claude Code command-line interface tool (IE: a terminal you type stuff into) has become quite popular with developers who either claim it writes every single line of their code, or that it’s vaguely useful in some situations. 

CEO Dario Amodei predicted last March that in six months AI would be writing 90% of code, and when that didn’t happen, he simply made the same prediction again in January, because, and I do not say this lightly, Dario Amodei is full of shit.

You see, Anthropic has, for the best part of five years, been framing itself as the trustworthy, safe alternative to OpenAI, focusing more on its paid offerings and selling to businesses (realizing that the software sales cycle usually focuses on dimwitted c-suite executives rather than those who actually use the products), as opposed to building a giant, expensive free product that lots of people use but almost nobody pays for. 

Anthropic, separately, has avoided following OpenAI in making gimmicky (and horrendously expensive) image and video generation tools, which I assume is partly due to the cost, but also because neither of those things are likely something that an enterprise actually cares about. 

Anthropic also caught on early to the idea that coding was the one use case that Large Language Models fit naturally:

  • Thanks to sites like Stack Overflow and Github, as well as the trillions of lines of open source code in circulation, there’s an absolute fuckton of material to train the model on.
  • Software engineers are data perverts (I mean this affectionately), and will try basically anything to speed up, automate or “add efficiency” to their work.
  • Software engineering is a job that most members of the media don’t understand.
  • Software engineers never shut the fuck up when they’ve found something new that feels good.
  • Software engineers will spend hours defending the honour of any corporation that courts them.
  • Software engineers will at times overestimate their capabilities, as demonstrated by the METR study that found that developers predicted they’d be 24% faster when using LLMs, when in fact coding models made them 19% slower.
    • This, naturally, makes them quite defensive about the products they use, and about whether or not they’re actually seeing improvements.

Anthropic has held the lead in coding LLMs since the launch of June 2024’s Claude 3.5 Sonnet, and as a story from The Information from December 2024 explained, this terrified OpenAI:

Earlier this fall, OpenAI leaders got a shock when they saw the performance of Anthropic’s artificial intelligence model for automating computer programming tasks, which had gained an edge on OpenAI’s models, according to its own internal benchmarks. AI for coding is one of OpenAI’s strong suits and one of the main reasons why millions of people subscribe to its chatbot, ChatGPT.

OpenAI leaders were already on edge after Cursor, a startup OpenAI funded last year, in July made Anthropic’s Claude model the default for Cursor’s AI coding assistant instead of OpenAI’s models, as it had previously done, according to an OpenAI employee. In a podcast in October, Cursor co-founder Aman Sanger called the latest version of Anthropic’s model, Claude 3.5 Sonnet, the “net best” for coding in part because of its superior understanding of what customers ask it to do.

Cursor would, of course, eventually go on to become its own business, raising $3.2 billion in 2025 to compete with Claude Code, a product made by Anthropic, the very company Cursor pays so it can offer Anthropic’s models through its own AI coding product. Cursor is Anthropic’s largest customer, with the second-largest being Microsoft’s Github Copilot. I have heard from multiple sources that Cursor is spending more than 100% of its revenue on API calls, with the majority going to Anthropic and OpenAI, both of whom now compete with Cursor.

Dario Amodei Is An Even Bigger Liar Than Sam Altman, He’s Just Better With The Media

Anthropic sold itself as the stable, thoughtful, safety-oriented AI lab, with Amodei himself saying in an August 2023 interview that he purposefully avoided the limelight:

Dwarkesh Patel (01:56:14 - 01:56:26):

You've been less public than the CEOs of other AI companies. You're not posting on Twitter, you're not doing a lot of podcasts except for this one. What gives? Why are you off the radar?

Dario Amodei (01:56:26 - 01:58:03):

I aspire to this and I'm proud of this. If people think of me as boring and low profile, this is actually kind of what I want. I've just seen cases with a number of people I've worked with, where attaching your incentives very strongly to the approval or cheering of a crowd can destroy your mind, and in some cases, it can destroy your soul.

I've deliberately tried to be a little bit low profile because I want to defend my ability to think about things intellectually in a way that's different from other people and isn't tinged by the approval of other people. I've seen cases of folks who are deep learning skeptics, and they become known as deep learning skeptics on Twitter. And then even as it starts to become clear to me, they've sort of changed their mind. This is their thing on Twitter, and they can't change their Twitter persona and so forth and so on.

I don't really like the trend of personalizing companies. The whole cage match between CEOs approach. I think it distracts people from the actual merits and concerns of the company in question. I want people to think in terms of the nameless, bureaucratic institution and its incentives more than they think in terms of me. Everyone wants a friendly face, but actually, friendly faces can be misleading.

A couple of months later in October 2023, Amodei joined The Logan Bartlett Show, saying that he “didn’t like the term AGI,” and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026,” AI would “really invent new science.”

This was all part of Anthropic’s use of well-meaning language to tell a story that said “you should be scared” and “only Anthropic will save you.” In July 2023, Amodei spoke before a Senate subcommittee about AI oversight and regulation, starting sensibly (IE: if AI does become powerful, we should have regulations to mitigate those problems) and eventually veering aggressively into marketing slop:

The medium-term risks are where I would most like to draw the subcommittee’s attention. Simply put, a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology. 

Dario Amodei Manipulates The Media With Baseless Proclamations Built To Scare Readers, Trick Businesses, and Raise Funding

This is Amodei’s favourite marketing trick — using a vague timeline (2-3 years) to suggest that something vaguely bad (but good for Anthropic) is just around the corner, and that, managed correctly, it could also be good for society (a revolution in technology and science! But also, havoc!). Only Dario has the answers (regulations that start with “securing the AI supply chain,” meaning “please stop China from competing”). 

In retrospect, this was the most honest that he’d ever be. In 2024, Amodei would quickly learn that he loved personalizing companies, and that destroying his soul fucking rocked. 

In October 2024, Amodei put out a 15,000-word-long blog — ugh, AI is coming for my job! — where he’d say that Anthropic needed to “avoid the perception of propaganda” while also saying that “as early as 2026 (but there are also ways it could take much longer),” AI would be smarter than a Nobel Prize winner, autonomously able to complete weeks-long tasks, and be the equivalent of a “country of geniuses in a datacenter.” 

This piece, like all of his proclamations, had two goals: generating media coverage and investment. Amodei is a deeply dishonest man, couching “predictions” based on nothing in terms like “maybe,” “possibly,” or “as early as,” knowing that the media will simply ignore those words and report what he says as a wise, evidence-based fact. 

Amodei (and by extension Anthropic) nakedly manipulates the media by having them repeat these things without analysis or counterpoints — such as that “AI could surpass almost all humans at almost everything shortly after 2027” (which I’ll get back to in a bit). He knows that these things aren’t true. He knows he doesn’t have any proof. And he knows that nobody will ask, and that his bullshit will make for a sexy traffic-grabbing headline.

To be clear, that statement was made three months after Amodei’s essay said that AI labs needed to avoid “the perception of propaganda.” Amodei is a con artist who knows he can’t sell Anthropic’s products by explaining what they actually do, and everybody is falling for it.

And, almost always, these predictions match up with Anthropic’s endless fundraising. On September 23, 2024, The Information reported that Anthropic was raising a round at a $30-$40 billion valuation, and on October 12, 2024, Amodei pooped out Machines of Loving Grace with the express position that he and Anthropic “had not talked that much about powerful AI’s upsides.” 

A month later, on November 22, 2024, Anthropic would raise another $4 billion from Amazon, a couple of weeks after Amodei did a five-hour-long interview with Lex Fridman in which he said that “someday AI would be better at everything.” 

On November 27, 2024, Amodei would do a fireside chat at Eric Newcomer’s Cerebral Valley AI Summit where he’d say that in 2025, 2026, or 2027 (yes, he was that vague), AI could be “as good as a Nobel Prize winner, polymathic across many fields,” and have “agency [to] act on its own for hours or days,” the latter of which deliberately laid the foundation for one of Anthropic’s greatest lies: that AI can “work uninterrupted” for periods of time, leaving the reader or listener to fill in the (unsaid) gap of “...and actually create useful stuff.”

Amodei crested 2024 with an interview with the Financial Times, and let slip what I believe will eventually become Anthropic’s version of WeWork’s Community-Adjusted EBITDA, by which I mean “a way to lie and suggest profitability when a company isn’t profitable”:

Let’s just take a hypothetical company. Let’s say you train a model in 2023. The model costs $100mn dollars. And, then, in 2024, that model generates, say, $300mn of revenue. Then, in 2024, you train the next model, which costs $1bn. And that model isn’t done yet, or it gets released near the end of 2024. Then, of course, it doesn’t generate revenue until 2025. 

So, if you ask “is the company profitable in 2024”, well, you made $300mn and you spent $1bn, so it doesn’t look profitable. If you ask, was each model profitable? Well, the 2023 model cost $100mn and generated several hundred million in revenue. So, the 2023 model is a profitable proposition.

These numbers are not Anthropic numbers. But what I’m saying here is: the cost of the models is going up, but the revenue of each model is going up and there’s a mismatch in time because models are deployed substantially later than they’re trained.

Yeah man, if a company made $300 million in revenue and spent $1 billion, it lost $700 million. No amount of DarioMath about how a model “costs this much and makes this much revenue” changes the fact that profitability is when a company makes more money than it spends. 
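For the avoidance of doubt, here’s what that accounting sleight of hand looks like as a quick sketch in Python, using only the hypothetical figures from Dario’s own quote above (his illustration, not Anthropic’s actual numbers). The per-model framing makes the 2023 model look like a winner; the company-level view of the very same year is a $700 million hole.

```python
# A minimal sketch of the two framings, using Dario's own hypothetical figures
# from the quote above (his illustration, not Anthropic's actual numbers).

# Per-model framing ("DarioMath"): match each model's training cost against
# the revenue that specific model later generates.
model_2023 = {"training_cost": 100_000_000, "revenue_in_2024": 300_000_000}
model_2024 = {"training_cost": 1_000_000_000, "revenue_in_2024": 0}  # ships late, earns nothing yet

per_model_result = model_2023["revenue_in_2024"] - model_2023["training_cost"]
print(f"2023 model, per-model framing: ${per_model_result:,}")  # a $200,000,000 "profit"

# Company framing: match everything spent in 2024 against everything earned in 2024.
company_result_2024 = model_2023["revenue_in_2024"] - model_2024["training_cost"]
print(f"Company, calendar-year 2024: ${company_result_2024:,}")  # a $700,000,000 hole
```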

On January 5, 2025, Forbes would report that Anthropic was working on a $60 billion round that would make Amodei, his sister Daniela, and five other cofounders billionaires.

Anyway, at Davos on January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all tasks,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google.

On January 27, 2025, he’d tell Economist editor-in-chief Zanny Minton Beddoes that AI would get “as good and eventually better” at thinking as human beings, and that the ceiling of what models could do was “well above humans.” 

On February 18, 2025, he’d tell Beddoes that we’d get a model “...that can do everything a human can do at the level of a Nobel laureate across many fields” by 2026 or 2027, and that we’re “on the eve of something that has great challenges” that would “upend the balance of power” because we’d have “10 million people smarter than any human alive…” oh god, I’m not fucking writing it out. I’m sorry. It’s always the same shit. The models are people, we’re so scared. 

On February 28, 2025, Amodei would join the New York Times’ Hard Fork, saying that he wanted to “slow down authoritarians,” and that “public officials and leaders at companies” would “look back at this period [where humanity would become a ‘post-powerful AI society that co-exists with powerful intelligences’]” and “feel like a fool,” and that that was the number one goal of these people. Amodei would also add that he had been in the field for 10 years — something he loves to say! — and that there was a 70-80% chance that we will “get a very large number of AI systems that are much smarter than humans at almost everything” before the end of the decade.

Three days later, Anthropic would raise $3.5 billion at a $61.5 billion valuation.

Beneath the hype, Anthropic is, like OpenAI, a company making LLMs that can generate code and text, and that can interpret data from images and videos, all while burning billions of dollars and having no path to profitability. Per The Information, Anthropic made $4.5 billion in revenue and lost $5.2 billion generating it, and based on my own reporting from last year, costs appear to scale linearly above revenue.

Dario Amodei Admits Training Costs Are Never Going Away — And Need To Be Considered Part Of Every AI Lab’s Gross Margins

Some will argue that the majority of Anthropic’s losses ($4.1 billion) were from training, and I think it’s time we had a chat about what “training” means, especially as Anthropic plans to spend $100 billion on it in the next four years. Per my piece from last week:

While most people know about pretraining — the shoving of large amounts of data into a model (this is a simplification I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts.

To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — something that’s an essential and unavoidable cost of doing business. 

In an interview on the Dwarkesh Podcast, Amodei even admitted that if you “never train another model” you “don’t have any demand because you’ll fall behind.” Training is opex, and should be part of gross margins.
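To put rough numbers on that argument, here’s a bit of illustrative arithmetic in Python — a sketch assuming only the figures reported earlier in this piece ($4.5 billion in revenue, roughly $2.79 billion on inference, roughly $4.1 billion on training), not how Anthropic actually books its costs. Exclude training from the cost of revenue and you can gesture at a healthy-looking margin; count it as the ongoing cost of doing business that it is, and the margin goes deeply negative.

```python
# Illustrative arithmetic only: the dollar figures are the ones reported in this
# piece, and the split below is a simplification, not Anthropic's actual accounting.
revenue = 4.5e9          # reported revenue
inference_cost = 2.79e9  # reported inference spend
training_cost = 4.1e9    # reported training-driven losses

# Margin if training is excluded from the cost of revenue:
margin_excluding_training = (revenue - inference_cost) / revenue
# Margin if training is counted as an ongoing, unavoidable cost:
margin_including_training = (revenue - inference_cost - training_cost) / revenue

print(f"Excluding training: {margin_excluding_training:.0%}")  # 38%
print(f"Including training: {margin_including_training:.0%}")  # -53%
```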

It’s time we had an honest conversation about Anthropic. 

Despite its positioning as the trustworthy, “nice” AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. It burns just as much of its revenue on inference (62%, or $2.79 billion on $4.5 billion of revenue, versus OpenAI’s 58%, or $2.5 billion on $4.3 billion of revenue in the first half of 2025, if you use The Information’s numbers), and shows no sign of any “efficiency” or “cost-cutting.”

Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers, then uses those numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company. 

Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1, and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term. 

I also see signs that Amodei himself is playing fast and loose with financial metrics in a way that will blow up in his face if Anthropic ever files its paperwork to go public. In simpler terms, Anthropic’s alleged “38% gross margins” are, if we are to believe Amodei’s own words, not the result of “revenue minus COGS” but “how much a model costs and how much revenue it’s generated.”

Anthropic is also making promises it can’t keep. It’s promising to spend $30 billion on Microsoft Azure (and an additional “up to one gigawatt”), “tens of billions” on Google Cloud, $21 billion on Google TPUs with Broadcom, “$50 billion on American infrastructure,” as much as $3 billion on Hut8’s data center in Louisiana, and an unknowable (yet likely in the billions) amount of money with Amazon Web Services. Not to worry, Dario also adds that if you’re off by a couple of years on your projections of revenue and ability to pay for compute, it’ll be “ruinous.”

I think that he’s right. Anthropic cannot afford to pay its bills, as the ruinous costs of training — which will never, ever stop — and inference will always outpace whatever spikes of revenue it can garner through media campaigns built on deception, fear-mongering, and an exploitation of reporters unwilling to ask or think about the hard questions. 

I see no difference between OpenAI’s endless bullshit non-existent deal announcements and what Anthropic has done in the last few months. Anthropic is as craven and deceptive as OpenAI, and Dario Amodei is as willing a con artist as Altman, and I believe is desperately jealous of his success.

And after hours and hours of listening to Amodei talk, I think he is one of the most annoying, vacuous, bloviating fuckwits in tech history. He rambles endlessly, stutters more based on how big a lie he’s telling, and will say anything and everything to get on TV and say noxious, fantastical, intentionally-manipulative bullshit to people who should know better but never seem to learn. He stammers, he blithers, he rambles, he continually veers between “this is about to happen” and “actually it’s far away” so that nobody can say he’s a liar, but that’s exactly what I call a person who intentionally deceives people, even if they couch their lies in “maybes” and “possiblies.” 

Dario Amodei fucking sucks, and it’s time to stop pretending otherwise. Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise.

This is the Hater’s Guide To Anthropic, or “DarioWare: Get It Together.” 

