Put Up Or Shut Up

Edward Zitron

I feel like the tech industry is currently in the midst of the most bizarre cognitive dissonance I've ever seen — more so than the metaverse, even — as company after company simply lies about their intentions and the power of AI. 

I get it. Everybody wants something to be excited about. Everybody wants something new and interesting and fun and cool that gives everybody something to write about and point at and say that the tech industry is equal parts important and developing the future. It’s easier this way — to just accept what we’re being told by Sam Altman and a coterie of other people slinging marketing nonsense — rather than begin to ask whether any of this actually matters, and whether the people hyping it up might be totally full of shit. 

Last week, HR platform Lattice announced that it would be, to quote CEO Sarah Franklin, "the first company to lead in the responsible employment of AI ‘digital workers’ by creating a digital employee record to govern them with transparency and accountability." This buzzword-laden nonsense, further elaborated upon in a blog post that added absolutely nothing in the process, suggested that Lattice would be treating digital workers as if they were employees, giving them "official employee records in Lattice," and "securely onboarding, training and assigning them goals," as well as performance metrics, "appropriate systems access, and even a manager, just as any person would be."

Lattice claimed that this would "mark a significant moment in the evolution of AI technology — the moment when the idea of an 'AI employee' moves from concept to reality in a system and into an org chart."

After less than a week of people relentlessly dunking on the company, Lattice would add a line to the blog post stating that "this innovation sparked a lot of conversation and questions that have no answers yet" and that it "[looks] forward to continuing to work with our customers on the responsible use of AI, but will not further pursue digital workers in the product."

This idea was (and is), of course, total nonsense. From what I can tell — as Lattice didn't really elaborate beyond a few screenshots and PR-approved gobbledygook — the company planned to create a profile for AI "workers" within the platform, which would then, in turn, allow something else to happen, though what that is doesn't seem obvious, because I'm fairly certain that this entire announcement was the equivalent of allowing you to make a profile in a CRM but with a dropdown box that said "AI." 

CEO Sarah Franklin — who spent sixteen years at Salesforce, a company that has announced it's adding AI every year for the last decade, and whose actual function nobody really understands — wanted to turn what was a fairly mediocre and meaningless product addition into a big marketing push, only to find out that doing so resulted in annoying people like me asking annoying questions like "what does this mean?" and "will this involve paying them benefits, as they're employees?"

It's also a nonsensical product idea, one that you'd only conceive if you'd never really used AI or done a real job in quite some time. When you use ChatGPT or any of the other generative AI bots that Franklin listed, you're not using...them, you're accessing a tool that generates stuff. 

Why would you add ChatGPT to your org chart? What does that give you? What does it mean to be ChatGPT's manager, or ChatGPT's direct report, or for ChatGPT to be considered an "employee"? This would be like adding Salesforce, or Gmail, or Asana as an employee, because these are tools that, uh, do stuff in the organization.

It's all so fucking stupid! Putting aside the ethical concerns that never crossed Franklin's mind (they're employees — do they get paid? If there's a unionization vote, who controls them? Given the fears of AI replacing human workers, how will this make existing employees feel?), this entire announcement is nonsense, complete garbage, conjured up by minds disconnected from productivity or production of any kind. What was this meant to do? What was the intent of this announcement, other than to get attention, and what was the product available at the end?

Nothing. The answer is nothing. There was nothing behind this, just like so much of the AI hype — a bunch of specious statements and empty promises in a wrapper of innovation despite creating nothing in the process.

According to Charles Schwab, there's an "AI revolution" happening, with companies like Delta allegedly using AI to "deliver efficiency," and with Lisa Martin, "CMO Advisor" (?) at hype-fiend analysts The Futurum Group, claiming that there are "hard results," with call center volume "dropping 20% thanks to Delta's AskDelta Chatbot." One might assume this was a recent addition, as it was the one and only statistic in a rambling screed about "efficiency" and "customer service value," except that AskDelta was launched sometime before 2019 according to this PYMNTS article, assuming that this VentureBeat article from 2016 is talking about some other Delta chatbot, potentially made by a company called 247.ai, which Delta sued over a data breach that happened in 2017.

Now OpenAI has announced that it has created a "five-level system" to track development toward Artificial General Intelligence and "human-level problem-solving," a non-story that should be treated with deep suspicion. 

According to Bloomberg, these five levels run from Level 1 ("Chatbots, AI with conversational language") to Level 5 ("Organizations, AI that can do the work of an organization") and, somehow, Bloomberg credulously accepted an OpenAI spokesperson's statement that it is "on Level 1, but on the cusp of Level 2," the point at which GPT can supposedly do "human-level problem solving," without any explanation of what that means, how it's measured, or how GPT, a model that probabilistically guesses the right thing to do next, will start "reasoning," a thing that Large Language Models cannot do according to multiple academics.

It’s kind of like saying you’re on the first step to becoming Spider-Man because you’re a man. 

It’s also important to note that the steps between these stages are huge. To reach the “Reasoners” level, Large Language Models would have to do things they are currently incapable of doing. To reach Level 3 (whatever “Agents, systems that can take actions” means), you’d need a level of sentience dependent on entirely new forms of mathematics. While a kinder, more patient person would suggest that these were just frameworks created for potential endeavors, the fact that they’re being tactically leaked feels like a marketing exercise far more than anything resembling innovation.

[Embedded tweet from Alex Kantrowitz: "OpenAI has a five level ranking for how close it is to AGI. Internally, the company says it's approaching Level 2."]
OpenAI's Five Levels Of Bullshit That Mean Nothing. Source: Bloomberg reporting.

OpenAI is likely well aware that Large Language Models are incapable of reasoning, because Reuters reports that it’s got some sort of in-development model called Strawberry, which is designed to have human-like reasoning capabilities. 

However, this story is riddled with strange details. It’s based on a document that Reuters viewed in May, and “Strawberry” is apparently the new name for Q*, a supposed breakthrough from last year that Reuters reported on around the time of Altman’s ouster, which allegedly could “[answer] tricky science and math questions out of reach of today’s commercially-available models.” Reuters also misreports Bloomberg’s story about how “OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills,” a liberal way of describing, to quote Bloomberg, “a research project involving its GPT-4 AI model that OpenAI thinks shows some new skills that rise to human-like reasoning.” 

The document — again, viewed in May but reported on in July, for whatever reason — describes “what Strawberry aims to enable, but not how,” yet also somehow describes using “post-training” of models, which Reuters says means “adapting the base models to hone their performance in specific ways after they’ve already been ‘trained’ on reams of generalized data,” which sounds almost exactly like how models are trained today, making me wonder if this wasn’t so much a “leak” as it was “OpenAI handing a document to Reuters for marketing reasons.”

I realize I’m being a little bit catty, but come on. What is the story here? That OpenAI is working on something it was already working on, and has yet to achieve anything with?

Take this quote:

The document describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms “deep research,” according to the source.

Who cares! I guarantee you Anthropic, Google and Meta are all working on something that does exactly the same thing, because what they’re describing is a limitation of every single Large Language Model, one rooted in the very nature of how they’re built, something Reuters even acknowledges by adding that “this is something that has eluded AI models to date.”

And here’s another wrinkle. If this Q*/Strawberry thing was meant to be such a big breakthrough, why hasn’t it broken through yet? Reuters wrote up Q*'s existence in December of last year! Surely if there was some sort of breakthrough, even a small one, the company would share it, or a source would leak it.

The problem is that these pieces aren’t about breakthroughs — they’re marketing collateral dressed up as technological progress, superficially-exciting yet hollow on the inside, which almost feels a little too on the nose. 

I hate to be that guy, but it’s all beginning to remind me of the nebulous roadmaps that cryptocurrency con artists used to offer. What’s the difference between OpenAI vaguely suggesting that “Strawberry” will “give LLMs reasoning” and NFT project Bored Ape Yacht Club’s roadmap that promises a real-life clubhouse in Miami and a “top secret blockchain game”? I’d argue that the Bored Ape Yacht Club has a better chance of delivering, if only because “a blockchain game” would ostensibly use technology that exists.

No, really, what’s the difference here, exactly? Both are promising things they might never deliver.

Look, I understand that there’s a need to studiously cover these companies and how they’re (allegedly) thinking about the future, but both of these stories demand a level of context that’s sorely lacking.


These stories dropped around the same time a serious-seeming piece from the Washington Post reported that OpenAI had rushed the launch of GPT-4o, its latest model, with the company "planning the launch after-party prior to knowing if it was safe to launch," inviting employees to celebrate the product before GPT-4o had passed OpenAI's internal safety evaluations. 

You may be reading this and thinking "what's the big deal?" and the answer is "this isn't really a big deal," other than the fact that OpenAI said it cared a lot about safety and only sort-of did, if it really cared at all, which I don't believe it does. 

The problem with stories like this is that they suggest that OpenAI is working on Artificial General Intelligence, or something that left unchecked could somehow destroy society, as opposed to what it’s actually working on — increasingly faster iterations of a Large Language Model that's absolutely not going to do that. 

OpenAI should already be treated with suspicion, and we should already assume that it’s rushing safety standards, but its "lack of safety" here has absolutely nothing to do with ethical evaluators or "making sure GPT-4o doesn't do something dangerous." After all, ChatGPT already spreads election misinformation, tells people how to build bombs, gives people dangerous medical information and generates buggy, vulnerable code. And, to boot, former employees have filed a complaint with the SEC alleging that its standard employment contracts are intended to discourage any legally-protected whistleblowing.

The safety problem at OpenAI isn't any bland fan fiction about how "it needs to be more careful as it builds artificial general intelligence," but that a company stole the entirety of the internet to train a model that regularly and authoritatively states incorrect information. We don't need a story telling us it kind-of-sort-of rushed a product to market (even though it didn't actually do that); we need one telling us that this company, on a cultural level, operates without regard for the safety of its users, and is deliberately misrepresenting to outlets like Bloomberg that it is somehow on the path to creating AGI.

Where is OpenAI's investment in mitigating or avoiding hallucinations? While there's plenty of evidence that they're impossible to eliminate entirely, surely a safety-minded culture would be one that at least sought to mitigate the most obvious and dangerous problem with Large Language Models.

The reality is that Sam Altman and OpenAI don't give a shit, have never given a shit, and will not give a shit, and every time they (and others) are given the opportunity to talk in flowery language about "safety culture" and "levels of AI," they're allowed to dodge the very obvious problem: that Large Language Models are peaking, that they will not solve the kind of complex problems that actually matter, and that OpenAI (and other LLM companies) are being allowed to accumulate money and power in a way that lets them do actual damage in broad daylight.

To quote my friend and Verge reporter Kylie Robison, Sam Altman is more interested in accruing power than he is in developing AGI, and I'd take that a level further and add that I think he's more than aware that OpenAI will likely never do so. 

OpenAI has allowed Altman to gain thousands of breathless headlines whenever he makes some sort of half-assed statement about how his model will "solve physics," with the media helping build an empire for a career bullshit-merchant with a history of failure, including his specious crypto-backed biometric data hoarder Worldcoin, which has missed its target of signing up one billion users by 2023 by some 994 million people, largely because it doesn't do anything and there's not a single reason for it to exist.

But it doesn't matter, because any time Sam Altman or any major CEO says something about AI, everybody has to write it up and take it seriously. Last week, career con-artist Arianna Huffington announced a partnership between Thrive Global (a company that sells "science-backed" productivity software(?)) and OpenAI that would fund a "customized, hyper-personalized AI health coach" under Thrive AI Health, another company for Arianna Huffington to take a six-figure salary from. Thrive claims the coach will "be trained on the best peer-reviewed science as well as Thrive's behavior change methodology," a mishmash of buzzwords and pseudoscientific pablum that means nothing because the company has produced no product and may likely never do so.

It's pretty tough to find out what Thrive actually does, but from a little digging, one of its products appears to be called "Thrive Reset," which is, and I shit you not, a product that makes customer service agents take a 60-second "science-backed" break during the workday. According to Glassdoor, a publicly-available database of honest reviews of companies, Thrive Global has a "toxic culture" and "awful leadership" that is "mostly bad," with a management team "lacking a clear pathway to success," with one review saying that Thrive had "the most toxic work environment they'd ever encountered" with "direct bullying," and another saying you should "stay away" because it's "toxic beyond belief," with a hierarchy "based on who the founder favors the most."

If I had to pick an actual, real safety story, it'd be investigating the fact that OpenAI, a company with a manipulative, power-hungry charlatan as its CEO, considers it safe to partner up with a company that doesn't appear to do anything other than make its employees miserable (and has done so for years).

Indeed, we should be deeply concerned that Thrive Global hosts — to quote Slate — "an online community whose members frequently publish writings that traffic in myths about 'alternative COVID cures,'" and that this is the partner that OpenAI believes should work with it on a personalized health coach.

And, crucially, we should all be calling this what it really is: bullshit! It's bullshit! 

Arianna Huffington and Sam Altman "co-wrote" an advertisement masquerading as an editorial piece in TIME Magazine to hype up a company that's promised to build something extremely vague on an indeterminate timeline that will allegedly "learn your preferences across five behaviors," with "superhuman long-term memory" and a "fully integrated personal AI coach." When you cut through the bullshit, this sounds — if it's ever launched — like any number of other spurious health apps that nobody really wants to use because they're not useful to anybody other than millionaire and billionaire hucksters trying to get attention.

Generative AI's one real innovation is that it's allowed a certain class of scam artist to use the vague idea of "powerful automation" to hype companies to people that don't really know anything. The way to cover Thrive's AI announcement isn't to say "huh, it said it will do this," or to both-sides the argument with a little cynicism, but to begin asking a very real question: what the fuck is any of this doing? Where is the product? What is any of this stuff doing, and who is it doing it for? Why are we, as a society or as members of the media, blandly saying "AI is changing everything" without doing the work to ask whether it's actually changing anything? I understand why some feel it's necessary to humor the idea that AI could help in healthcare, but I also think they're wrong to do so. Arianna Huffington has a long history of producing nothing, and if we're honest, so does Sam Altman.

No, really, where are the innovations from generative AI? What great product has Sam Altman ushered into the world, and what does it do? What evidence do we have that generative AI is actually making meaningful progress toward anything other than exactly what a Large Language Model has already been doing? What evidence do we have that generative AI is actually helping companies, other than Klarna's extremely suspicious stories about saving money? Why does CNBC have an interview with sex-pest enabler and former Activision Blizzard CEO Bobby Kotick talking about how "AI can help personalize education" as the biggest "AI education pioneer" hawks a product that struggles with basic maths?

The media seems nigh-on incapable of accepting that generative AI is a big, stupid, costly and environmentally-destructive bubble, to the point that they'll happily accept marketing slop and vague platitudes about how big something that's already here will be in the future based on a remarkable lack of proof. Arianna Huffington's announcement should've been met with suspicion, with any article — and I'd argue there was no reason to cover this at all — refusing to talk about how "it'd be nice if AI helped with medical stuff" in favor of a brusque statement about how Arianna Huffington has built very little and Sam Altman isn't much better.

When it comes to OpenAI, now is a couple of years too late to suddenly give a shit about safety. Sam Altman has been a seedy character for years, and was already fired from OpenAI once for intentionally misleading the board; his company has made billions by training its models on other people's work; his company's models are actively damaging the environment, all so that they can authoritatively provide false information to their users and not make any businesses any money.

The "safety" problem with AI isn't about the ethical creation of a superintelligence, but the proliferation of quasi-useful technology at a scale that destroys the environment, and that the cost of said technology involves stealing things written by hundreds of millions of people while occasionally making people like The Atlantic's Nick Thompson money.

And it’s time for the media to start pushing back and asking for real, tangible evidence that anything is actually happening. Empty promises and vacuous documents that say that a company might or might not do something sometime in the future are no longer sufficient proof of the inevitability of artificial intelligence. 

It’s time to treat OpenAI, Anthropic, Google, Meta and any other company pushing generative AI with suspicion, to see what they’re doing as an investor-approved act of deception, theft and destruction. There is no reason to humor their “stages of artificial intelligence”; it’s time to ask them where the actual intelligence is, where the profits are, how we get past the environmental destruction and the fact that they’ve trained on billions of pieces of stolen media, something that every single journalist should consider an insult and a threat. And when they give vague, half-baked answers, the response should be to push harder, to look them in the eye and ask “why can’t you tell me?”

And the answer is fairly simple: there isn’t one. Generative AI models aren’t getting more energy-efficient, nor are they getting more “powerful” in a way that would increase their functionality, nor are they even capable of automating things on their own. They’re not getting “reasoning,” nor are they getting “sentience,” nor are they “part of the path to superintelligence.” 
