The Generative AI Con

Edward Zitron

It's been just over two years and two months since ChatGPT launched, and in that time we've seen Large Language Models (LLMs) blossom from a novel concept into one of the most craven cons of the 21st century — a cynical bubble inflated by OpenAI CEO Sam Altman built to sell into an economy run by people that have no concept of labor other than their desperation to exploit or replace it.

I realize that Large Language Models like GPT-4o — the model that powers ChatGPT and a bunch of other apps — have use cases, and I'm fucking tired of having to write this sentence. There are people that really like using Large Language Models for coding (even if the code isn't good or makes systems less secure and stable) or get something out of Retrieval-Augmented Generation (RAG)-powered search, or like using one of the various AI companions or journal apps. 

I get it. I get that there are people that use LLM-powered software, and I must be clear that anecdotal examples of some people using some software that they kind-of like is not evidence that generative AI is a sustainable or real industry at the trillion-dollar scale that many claim it is.

I am so very bored of having this conversation, so I am now going to write out some counterpoints so that I don't have to say them again.

Ed, there are multiple kinds of artificial intelligence-

I KNOW. Stop saying this to me like an Uno reverse! I'm talking about generative AI!

Well, Ed, there are 300 million weekly users of ChatGPT. That surely proves that this is a very real industry!

  1. Though I don't have an exact number, I'd estimate that there have been tens of thousands of articles about artificial intelligence written in the last two years that are specifically focused on the generative AI boom, which in turn guarantees that they'll mention ChatGPT.
  2. The AI bubble means that effectively every single media outlet has been talking about artificial intelligence in the vaguest way, and there's really only been one "product" that they can try that "is AI" — and that product is ChatGPT.
  3. Reporting on artificial intelligence, according to the Reuters Institute for the Study of Journalism, is led by industry sources, with coverage of artificial intelligence in the UK summarized by one study as tending to “construct the expectation of a pseudo-artificial general intelligence: a collective of technologies capable of solving nearly any problem.” Specifically, the Reuters Institute's Professor Rasmus Nielsen said that coverage “often takes claims about what the technology can and can’t do, and might be able to do in the future, at face value in ways that contributes to the hype cycle.”
    1. In short, most of the coverage you read on artificial intelligence is led by companies that benefit financially from you thinking artificial intelligence is important and by default all of this coverage mentions OpenAI or ChatGPT.
  4. So...yeah, of course ChatGPT has that many users. When you have hundreds of different reporters constantly spitting out stories about how important something may or may not be, and when that thing is available for free on a website, it's going to get a bunch of people using it. This is predominantly the media's doing!
  5. But 300 million people is a lot!
    1. It sure is! But it doesn't really prove anything other than that people are using the single-most-talked about product in the world. By comparison, billions of people use Facebook and Google. I don't care about this number!
    2. User numbers alone tell you nothing about the sustainability or profitability of a business, or how those people use the product. It doesn’t delineate between daily users, and those who occasionally (and shallowly) flirt with an app or a website. It doesn’t say how essential a product is for that person.

Also, uhm, Ed? It's early days for ChatGPT-

  1. Shut the fuck up! There isn't a single god damn startup in the history of anything — other than perhaps Facebook — that has had this level of coverage at such an early stage. Facebook also grew at a time when social media didn't really exist (at least, as a mainstream thing that virtually every demographic used) and thus the ability for something to "go viral" was a relatively new idea. By comparison, ChatGPT had the benefit of there being more media outlets, and Altman himself having spent a decade glad-handing the media through his startup investments and crafting a real public persona.
  2. The weekly users number is really weird. Did it really go from 200 million to 300 million users in the space of three months? It was at 100 million weekly users in February 2023. You're telling me that OpenAI took, what, over a year to go from 100 million to 200 million, but it took three months (August 29 2024 to December 4 2024) to hit 300 million?
    1. I don't have any insider information to counter this, but I will ask — where was that growth from? OpenAI launched its o1 "reasoning" model (the previews, at least) on September 12 2024, but these were only available to ChatGPT Plus subscribers, with the "full" version released on December 5 2024. You're telling me this company increased its free user base by 50% in less than three months based on nothing other than the availability of a product that wasn't available to free users?
    2. This also doesn't make a ton of sense based on data provided to me by Similarweb, a digital market intelligence company. ChatGPT's monthly unique visitors were 212 million in September 2024, 233.1 million in October 2024, and 247.1 million in November 2024. I am not really sure how that translates to 300 million weekly users at all.
      1. Similarweb also provided me — albeit only for the last few weeks — data on ChatGPT.com's weekly traffic. For the period beginning January 21 2025, it only had 126.1 million weekly visitors. For the period beginning February 11 2025, it only had 136.7 million. Is OpenAI being honest about its user numbers? I've reached out for comment, but OpenAI has never, ever replied to me.
        1. Sidenote: Yes, these are visitors versus users. However, one would assume users would be lower than visitors, because a visitor might not actually use the product. What gives?
      2. There could be users on their apps — but even then, I'm not really sure how you square this circle. An article from January 29 2025 says that the iOS ChatGPT app has been downloaded 353 million times in total. Based on even the most optimistic numbers, are you telling me that ChatGPT has over 100 million mobile-only users a week? And no, it isn’t Apple Intelligence. Cupertino didn’t launch that integration until December 11 2024.
      3. Here's another question: why doesn't OpenAI reveal monthly active users? Wouldn't that number be higher? After all, a monthly active user is one that uses an app even once over a given month! Anyway, I hypothesize that the reason is probably that in September 2024 it came out that OpenAI had 11 million monthly paying subscribers, and though ChatGPT likely has quite a few more people that use it once a month, admitting to that number would mean that we're able to see how absolutely abominable its conversion to paying users is. 300 million monthly active users would mean a conversion rate of less than 4%, which is pretty piss-poor, especially as subscription revenue for ChatGPT Plus (and other monthly subscriptions) makes up the majority of OpenAI's revenue.
    3. Hey, wait a second. Are there any other generative AI products that reveal their users? Anthropic doesn't. AI-powered search product Perplexity claims to have 15 million monthly active users. These aren't big numbers! They suggest these products aren't popular! Google allegedly wants 500 million users of its Gemini chatbot by the end of the year, but there isn't any information about how many users it has right now.
      1. Similarweb data states that gemini.google.com had 47.3 million unique monthly visitors in January 2025, copilot.microsoft.com had 15.6 million, Perplexity.ai had 10.6 million, and claude.ai had 8.2 million. These aren't great numbers! These numbers suggest that these products aren't very popular at all!
      2. The combined unique monthly visitors in January 2025 to ChatGPT.com (246m), DeepSeek.com (79.9m), Gemini.Google.com (47.3m), Copilot.microsoft.com (15.6m), Perplexity.ai (10.6m), character.ai (8.4m), claude.ai (8.2m) and notebookLM.google.com (7.4m) was 423.4 million — or an astonishingly small 97.5 million if you remove ChatGPT and DeepSeek.
        1. For context, the New York Times said in its 2023 annual report that it received 131 million unique monthly visitors globally, and CNN says it has more than 151 million unique monthly visitors.
  3. This isn't the early days of shit. The Attention Is All You Need paper that started the whole transformer-based architecture movement was published in June 2017. We're over two years in, hyperscalers have sunk over 200 billion dollars in capital expenditures into generative AI, AI startups took up a third of all venture capital investment in 2024, and almost every single talented artificial intelligence expert is laser-focused on Large Language Models. And even then, we still don't have a killer app! There is no product that everybody loves, and there is no iPhone moment!
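Incidentally, the arithmetic behind a couple of the figures above is easy to check for yourself. Here's a quick back-of-envelope sketch in Python (every number is as reported above, none of them are mine, and this is a sanity check, not an audit):

```python
# Sanity-checking the reported figures (all numbers as cited above).

# Similarweb unique monthly visitors, January 2025, in millions.
visitors = {
    "ChatGPT.com": 246.0,
    "DeepSeek.com": 79.9,
    "Gemini.Google.com": 47.3,
    "Copilot.microsoft.com": 15.6,
    "Perplexity.ai": 10.6,
    "character.ai": 8.4,
    "claude.ai": 8.2,
    "notebookLM.google.com": 7.4,
}

total = sum(visitors.values())
without_leaders = total - visitors["ChatGPT.com"] - visitors["DeepSeek.com"]
print(f"Combined: {total:.1f}m")                      # 423.4m
print(f"Minus ChatGPT and DeepSeek: {without_leaders:.1f}m")  # 97.5m

# Conversion rate implied by 11m paying subscribers against ~300m users.
conversion = 11 / 300
print(f"Implied conversion rate: {conversion:.1%}")   # 3.7%, i.e. less than 4%
```

The point of the exercise: the non-ChatGPT, non-DeepSeek slice of this market really is two digits of millions, and 11 million paying subscribers on a claimed 300 million users really does work out to under 4%.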

Well Ed, I think ChatGPT is the iPhone moment for generative AI, it's the biggest software launch of all time-

  1. Didn't we just talk about this? Fine, fine. Let's get specific. The iPhone fundamentally redefined what a cellphone and a portable computer could be, as did the iPad, creating entirely new consumer and business use cases almost immediately. Cloud computing allowed us to run distinct applications in the cloud, which totally redefined how software was developed and deployed, creating both entirely new use cases for software (as the compute requirements moved from the customer to the provider), and an entirely new cloud computing industry that makes hundreds of billions of dollars a year.
  2. So, what exactly has generative AI actually done? Where are the products? No, really, where are they? What's the product you use every day, or week, that uses generative AI, that truly changes your life? If generative AI disappeared tomorrow — assuming you are not somebody who actively builds using it — would your life materially change?
  3. The answer is "not that much." Putting aside the hype, bluster and ungodly amounts of money, I can find no evidence that any of these apps are making anyone any real money. Microsoft claims to have hit "$13 billion in annual run rate in revenue from its artificial intelligence products and services," which amounts to just over a billion a month, or $3.25 billion a quarter.
    1. This is not profit. It's revenue.
    2. There is no "artificial intelligence" part of Microsoft's revenue or earnings. This is literally Microsoft taking anything with "AI" on it and saying "we made money!"
    3. $3.25 billion a quarter is absolutely pathetic. In its most recent quarter, Microsoft made $69.63 billion in revenue, with its Intelligent Cloud segment (which includes things like its Azure cloud computing business) making $25.54 billion in revenue, and spent $15.80 billion in capital expenditures, excluding finance leases.
    4. In the last year, Microsoft has spent over $55 billion in capital expenditures to maybe (to be clear, the $13 billion in run rate is a projection that uses current financial performance to predict future revenue) make $13 billion. This is not a huge industry! These are not good numbers, especially considering the massive expenses!
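To put those Microsoft figures side by side, here's a rough sketch (again, using only the reported numbers above, and remembering that a run rate is annualized revenue, not profit):

```python
# Back-of-envelope comparison of Microsoft's reported AI figures (in $ billions).
ai_run_rate = 13.0         # claimed annual "run rate" of AI revenue (not profit)
quarterly_revenue = 69.63  # total Microsoft revenue, most recent quarter
quarterly_capex = 15.80    # capital expenditures that quarter, excluding finance leases

ai_monthly = ai_run_rate / 12   # "just over a billion a month"
ai_quarterly = ai_run_rate / 4  # $3.25 billion a quarter

print(f"AI revenue per month:   ${ai_monthly:.2f}b")   # $1.08b
print(f"AI revenue per quarter: ${ai_quarterly:.2f}b") # $3.25b
print(f"AI share of total quarterly revenue: {ai_quarterly / quarterly_revenue:.1%}")
print(f"Capex spent per dollar of quarterly AI revenue: ${quarterly_capex / ai_quarterly:.2f}")
```

In other words, the much-trumpeted AI business is under 5% of Microsoft's quarterly revenue, and the company is spending several dollars of capex for every dollar of AI revenue it books.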

They'll Work It Out!

  1. When? No, really, when?
    1. OpenAI burned more than $5 billion last year.
    2. According to The Information, Anthropic burned $5.6 billion. That may very well mean Anthropic somehow burned more money than OpenAI last year! These companies are absolutely atrocious at business! The reason I'm not certain is that The Information has, in the past, been a touch inconsistent in how it evaluates "costs": I've seen it claim that OpenAI "burned just $340 million in the first half of 2024," a number pulled from a piece from last year, followed by the statement that OpenAI's losses "are steep due to the impact of major expenses, such as stock compensation and computing costs, that don't flow through its cash statement." To be clear, OpenAI burned approximately $5 billion on compute alone. So yeah, OpenAI "burned just $340 million" as long as you don't count billions of dollars of other costs for some reason. Great stuff! It isn't obvious how The Information is evaluating Anthropic's burn versus OpenAI's, and I've reached out to Jon Victor over there to get some clarity. I want to be clear that I very much appreciate, value and recommend The Information's journalism, but I do not accept the idea of arbitrarily leaving out costs. This isn't real business! Sorry!
    3. None of these companies are profitable, and despite repeated claims that "the cost of inference is coming down" (inference being what happens when you prompt the model to do something), it doesn't appear to be helping them. In the weeks following the release of the super-efficient DeepSeek models, I half-expected them to start talking about efficiency. None of them addressed it, other than OpenAI, which said that DeepSeek meant it would maintain less of a lead. Great stuff!

What Are We Doing Here?

OpenAI and Anthropic are both burning billions of dollars a year, and do not appear to have found a way to stop doing so. The only "proof" that they are going to reverse this trend is The Information saying that "Anthropic's management expects the company to stop burning cash in 2027."

Sidebar: Hey, what is it with Dario Amodei of Anthropic and the year 2027? He said (made up) in January that "AI could surpass almost all humans at almost everything" "shortly after 2027." He said in one of his stupid and boring blogs that "possibly by 2026 or 2027 (and almost certainly no later than 2030)" the "capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage — a 'country of geniuses in a datacenter.'" This man is full of shit! Hey, tech media people reading this — your readers hate this shit! Stop printing it! Stop it!

While one could say "the costs will come down" — and that appears to be what The Information is claiming, noting that Anthropic said it would reduce its burn rate by "nearly half" in 2025 — the actual details are thin on the ground, and there's no probing of whether that's even feasible without radically changing its models. Huh? How? Anthropic's burn has increased every single year! So has OpenAI's!

The Information — which I do generally, and genuinely, respect — ran an astonishingly optimistic piece about Anthropic, estimating that it'd make $34.5 billion in revenue in 2027 (there's that year again!), the very same year it'd stop burning cash. Its estimates are based on the premise that "leaders expected API revenue to hit $20 billion in 2027," meaning people plugging Anthropic's models into their own products. This is laughable on many levels, chief among them that OpenAI, which made around twice as much revenue as Anthropic did in 2024, barely made a billion dollars from API calls in the same year.

It's here where I'm going to choose to scream.

Anthropic, according to The Information, generated $908 million in revenue in 2024, and has projected that it will make $2.2 billion in revenue in 2025, and its "base case" — which The Information says would be "the likeliest outcome" (???) — is that it will make $12 billion in revenue in 2027.

This is what happens during bubbles! Assets are over-valued based on a combination of vibes and hysteria! 

Dario Amodei — much like Sam Altman — is a liar, a crook, a carnival barker and a charlatan, and the things he promises are equal parts ridiculous and offensive. The Information (which needs to do better work actually critiquing these people) justified Amodei and Anthropic's obscene and fantastical revenue targets by citing Amodei's blog, which at no point explains what a "country of geniuses in a datacenter" actually means or what the product might be or what he's going to do to increase revenue by more than thirty billion dollars a year by 2027.

But wait, The Information says it got a little more specific!

Anthropic says its technology could transform office roles such as generating or reviewing legal paperwork and automating software engineering. It cited code repository GitLab and legal search firm LexisNexis as examples of customers. Up-and-coming startups such as Anysphere, which develops the Cursor coding assistant for programmers, are also major buyers of Claude software.

So, just to be abundantly clear, it appears Anthropic's big plan is to "sell more software to some people, maybe."

Anthropic is currently raising $2 billion at a $60 billion valuation primarily based on this trumped-up marketing nonsense. Why are we humoring these oafs?

What These Oafs Are Actually Doing

When you put aside the hype and anecdotes, generative AI has languished in the same place, even in my kindest estimations, for several months, though it's really been years. The one "big thing" that they've been able to do is to use "reasoning" to make the Large Language Models "think" (they do not have consciousness, they are not "thinking," this just means using more tokens to answer a particular question and having multiple models check the work), which mostly results in them being a bit more accurate when generating an answer, but at the expense of speed and cost.

This became a little less exciting a month ago when DeepSeek released its open source "r1" model which performed similarly to reasoning products from companies like Google and OpenAI, and while some argue it "built the model to game benchmarks," that is quite literally what every single model developer does. Nevertheless, the idea of "reasoning" being the "killer app" — despite the fact that nobody can really explain why it's such a big deal — is now quite dead.

As a result, the model companies are kind of flailing. In a recent post on Twitter, Sam Altman gave an "updated roadmap for GPT-4.5 and GPT-5" where he described how OpenAI would be "simplifying" its product offerings, saying that GPT-4.5 would be OpenAI's "last non-chain-of-thought model," and that GPT-5 would be "a system that integrates a lot of our technology," including o3, OpenAI's "powerful" and "very expensive" reasoning model, which it...would also no longer release as a standalone model.

To break this down, Altman is describing his next model — GPT-4.5 — as launching in some indeterminate timeframe and doing something probably quite similar to the current GPT-4o model. In the case of GPT-5, it would appear that Altman is saying that it won't be a model at all, but some sort of rat king of different mediocre products, including o3, a product that he would no longer be letting you use on its own.

I guess that's the future of this company, right? OpenAI will release models and uh, well. Uhh.

Uhhhhhhh.

Wait! Wait! OpenAI released a new product! It's called Deep Research, which lets you ask ChatGPT to generate a report by browsing the web. This is almost a cool idea. I sure hope that it doesn't make glaring mistakes and cost a shit-ton of money!

Anyway, let's go to Casey Newton at Platformer for the review:

Generally speaking, the more you already know about something, the more useful I think deep research is. This may be somewhat counterintuitive; perhaps you expected that an AI agent would be well suited to getting you up to speed on an important topic that just landed on your lap at work, for example.  In my early tests, the reverse felt true. Deep research excels for drilling deep into subjects you already have some expertise in, letting you probe for specific pieces of information, types of analysis, or ideas that are new to you.

It’s possible that you can make this work better than I did. (I think all of us will get better at prompting these models over time, and presumably the product will improve over time as well.)

Personally, when I ask someone to do research on something, I don't know what the answers will be and rely on the researcher to explain stuff through a process called "research." The idea of going into something knowing about it well enough to make sure the researcher didn't fuck something up is kind of counter to the point of research itself.

Also: "I think all of us will get better at prompting-" Casey, we're paying them! We're paying them for them to do stuff for us!

Nevertheless, I did go and look up one of Casey's examples, specifically one about how the Fediverse could benefit publishers.

Let's do some research!

Despite Newton's fawning praise, the citations in this "deep research" are flimsy at best. The first (and second) citations are from an SEO-bait article about the fediverse from a "news solutions" company called "Twipe," used to define "broad cross-platform reach." The next one is from reputable digital advertising outlet Digiday, but it's used to cite how sites like 404 Media and The Verge are "actively exploring the Fediverse to take more control over their referral traffic and onsite audience engagement," which is plagiarised verbatim from the Digiday article.

After that, the next three citations are posts from Hacker News, a web forum started by Y Combinator (here's an example). How is this "deep research" exactly?

In fact, this thing isn't well-researched at all. Across the following paragraphs, Deep Research cites the same Digiday article eight times, before going back to citing the same Twipe article again. It also, hilariously, says that federated posts "can simultaneously publish to [a] website and as a toot on federated platforms like Mastodon and Threads," a term that Mastodon retired two years ago.

The next two citations are about Medium's embrace of Mastodon, followed by yet another citation of the Digiday article. Following that, Deep Research cites two different Reddit posts, then a company called Interledger moving to the Fediverse (which the report cites several more times), along with yet another forum post, the very same Twipe post several more times, and then the support documentation for the social network Bluesky, also several more times.

I won't go through more of the research paper citation by citation, but you'll be shocked to hear it mostly just cites Twipe, Hacker News and Reddit.

For now, Deep Research is only available on ChatGPT Pro, OpenAI's somehow-unprofitable $200-a-month subscription, though it's apparently coming to ChatGPT Plus in a limited capacity.

Not impressed? Well what if I told you it was very compute-intensive and expensive? Oh, one other detail — the entire thing’s on the very edge of comprehensible.

Here’s a bit under funding models:

"Memberships and Donations: A common monetization approach in the Fediverse (and across the open web) is voluntary support from the audience."

Nobody talks like this! This isn’t how human beings sound! I don’t like reading it! I don’t know how else to say this — there is something deeply unpleasant about how Deep Research reads! It’s uncanny valley, if the denizens of said valley were a bit dense and lazy. It’s quintessential LLM copy — soulless and almost, but not quite, right. 

Ewww.

So there you have it folks. OpenAI's next big thing is the ability to generate a report that you would likely not be able to use in any meaningful way anywhere, because while it can browse the web and find things and write a report, it sources things based on what it thinks can confirm its arguments rather than making sure the source material is valid or respectable. This system may have worked if the internet wasn't entirely poisoned by companies trying to get the highest ranking in Google, and if Google had any interest in making sure its results were high quality, which it does not.

I'm sorry, I know I sound like a hater, and perhaps I am, but this shit doesn't impress me even a little. Wow, you created a superficially-impressive research project that's really long and that cites a bunch of shit it found online that it made little attempt to verify? And said report took a while to generate, can only be produced if you pay OpenAI $200 each month, and it cost a bunch of money in compute to make?

Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do — even "reading" and "browsing" the web — is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit its relevance, but not truly understand its contents. Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanly possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless.

Furthermore, nothing about this product moves OpenAI toward profitability. In fact, I think they're doing the opposite. Deep Research uses OpenAI's o3 model which can cost as much as $1,000 a query, and while I imagine these prompts aren't that expensive, they are still significantly more so than a regular query from ChatGPT.

The whole point of hiring a researcher is that you can rely on their research, that they're doing work for you that would otherwise take you hours. Deep Research is the AI slop of academia — low-quality research-slop built for people that don't really care about quality or substance, and it’s not immediately obvious who it’s for. 

Surely, if you’re engaged enough to spend $200 on an OpenAI subscription and are aware of Deep Research, you probably know what SEO bait is, and can distinguish between low-quality and high-quality content. If you were presented with a document with such low-quality, repetitive citations, you’d shred it — and, if created by an intern, you’d shred them too. Or, at the very least, give them some stern words of guidance. 

Let me put this in very blunt terms: we are more than two years into the generative AI boom, and OpenAI's biggest, sexiest products are Deep Research — a product that dares to ask "what if you could spend a lot of compute to get a poorly-cited research paper?" — and Operator, a compute-intensive agent that takes minutes to complete tasks that would otherwise have taken you seconds, when it completes them at all.

As an aside, SoftBank, the perennial money-loser that backed WeWork and Wirecard and lost more than $30 billion in the last few years, is trying to invest up to $25 billion in OpenAI.

I Feel Like I'm Going Insane

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they're building "advanced artificial intelligence" that can take "human-like actions," but when you look at any of this shit for more than two seconds it's abundantly clear that it absolutely isn't and absolutely can't.

Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble. People like Marc Benioff claiming that "today's CEOs are the last to manage all-human workforces" are doing so to pump up their stocks rather than build anything approaching a real product. These men are constantly lying as a means of sustaining hype, never actually discussing the products they sell in the year 2025, because then they'd have to say "what if a chatbot, a thing you already have, was more expensive?"

The tech industry — and part of our economy — is accelerating at speed into a brick wall, driven by people like Sam Altman, Dario Amodei, Marc Benioff, and Larry Ellison, all men that are incentivized to have you value their companies based on something other than what their businesses actually sell. 

We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings. 

Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves. Altman uses his digital baba yaga as a means to stoke the hearts of weak-handed and weak-hearted narcissists that would sooner shoot a man dead than lose a dollar, even if it means making their product that much worse. CEOs have the easiest jobs in the world, and no job is easier than Satya Nadella waving to the Microsoft 365 staff and saying “make them put AI in it, pronto” and telling Microsoft CFO Amy Hood that “we must make sure that Bing has generative AI” before jetting off to Davos to yell that he intends to burn more money than ever on GPUs.

Sam Altman believes you are stupid. He believes you are a moron that will slurp up whatever slop he gives you. Deep Research and Operator are both half-products that barely brush against the fabric of their intended purposes, and yet the media screams and applauds him like he's a gifted child that just successfully tied his shoes.

I know, I know, I'm a hater, I'm a pessimist, a cynic, but I need you to fucking listen to me: everything I am describing is unfathomably dangerous, even if you put aside the environmental and financial costs.

Let me ask you a question: what's more likely?

That OpenAI, a company that has only ever burned money, that appears completely incapable of making a truly usable, meaningful product, somehow makes its products profitable, and then somehow creates a truly autonomous artificial intelligence?

Or that OpenAI, a company that has consistently burned billions of dollars, that has never shown any sign of making a profit, that has in two years released a selection of increasingly-questionable and obtuse products, actually runs out of money?

How does this industry actually continue? Do OpenAI and Anthropic continue to raise tens of billions of dollars every six months until they work this out? Do the hyperscalers keep spending hundreds of billions of dollars in capital expenditures for little measurable return?

And fundamentally, when will everybody start accepting that the things that AI companies are saying have absolutely nothing to do with reality? When will the media stop treating every single expensive, stupid, irksome, quasi-useless new product as magical, and start asking these people to show us the fucking future already?

Generative AI is a financial, ecological and social time bomb, and I believe that it's fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we're all fucking morons.

This entire bubble has been inflated by hype, and by outright lies by people like Sam Altman and Dario Amodei, their lies perpetuated by a tech media that's incapable of writing down what's happening in front of their faces. Altman and Amodei are raising billions and burning our planet based on the idea that their mediocre cloud software products will somehow wake up and automate our entire lives.

The truth is that generative AI is as mediocre as it is destructive, and those pushing it as "the future" that "will change everything" are showing how much contempt they have for the average person. They believe that they can shovel shit into our mouths and tell us it's prime rib, that these half-assed products will change the world and that as a result they need billions of dollars and to damage our power grid.

I know this has been a rant-filled newsletter, but I'm so tired of being told to be excited about this warmed-up dogshit. I'm tired of reading stories about Sam Altman perpetually saying that we're a year away from "everything changing" that exist only to perpetuate the myth that Silicon Valley gives a shit about solving anyone's problems other than finding new growth markets for the tech industry.

I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic are not innovators, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed as founders, cynical con artists raising money for products that will never exist while peddling software that destroys our planet and diverts attention and capital away from things that might solve real problems.

I'm tired of the delusion. I'm tired of being forced to take these men seriously. I'm tired of being told by the media and investors that these men are building the future when the only things they build are mediocre and expensive. There is no joy here, no mystery, no magic, no problems solved, no lives saved, and very few lives changed other than new people added to Forbes' Midas list.

None of this is powerful, or impressive, other than in how big a con it’s become. Look at the products and the actual outputs and tell me — does any of this actually feel like the future? Isn’t it kind of weird that the big, scary threats they’ve made about how AI will take our jobs never seem to translate to an actual product? Isn’t it strange that despite all of their money and power they’re yet to make anything truly useful? 

My heart darkens, albeit briefly, when I think of how cynical all of this is. Corporations building products that don't really do much that are being sold on the idea that one day they might, peddled by reporters that want to believe their narratives — and in some cases actively champion them. The damage will be tens of thousands of people fired, long-term environmental and infrastructural chaos, and a profound depression in Silicon Valley that I believe will dwarf the dot-com bust.

And when this all falls apart — and I believe it will — there will be a very public reckoning for the tech industry.
