Editor’s Note: Due to the length of this piece, you may need to click a button to read the whole thing in your email. Sorry! On Friday, Sam Altman was unceremoniously fired as CEO of OpenAI, with the board citing that he was “not consistently candid in his communications…hindering [the board’s] ability to exercise its responsibilities.” Greg Brockman, then-President and board member, was removed as chairman of the board and resigned from the company hours later.
What strikes me about this whole drama is how much I want both sides to lose. On one side, a cult of personality built around a surprisingly stupid person, if his tweets are anything to go by; on the other, a group of surprisingly stupid people who have read A Fire Upon the Deep as if it were a documentary, and who cast him out over a failure to communicate while failing to communicate that decision to key stakeholders, to use the relevant corporate jargon.
The fact that the post is full of quotes and links painting Altman as a Promethean Genius who single-handedly drives AI while in reality the product was created by the labour of dozens of engineers, the labour of hundreds of underpaid annotating/labeling contractors, and the creative efforts of millions of people whose IP has been stolen, makes me despair all by itself.
I have done computer vision projects. Where we did not generate all training images ourselves, we had careful discussions to the effect of "we can use other people's images, but only if they are Creative Commons, and only because this project is not for profit, and we need to think about attribution". But these guys just go JOINK!, all your IPs are belong to us now, for their commercial service whose main use case is automating spam generation, and for that, an army of Twitter followers worships Altman as a genius.
On the other side, nobody who warns about extinction risk from AGI has a more plausible pathway of getting from current LLMs to super-AGI than "and then magic happens", nor do they have a more plausible mechanism of action for our extinction than "if AI is smart enough, it can do magic, and resource constraints, physics, or somebody pulling the power plug don't matter anymore". The closest I ever got to one of these guys formulating a mechanism of action, when I pressed him on Twitter, was the AI paying humans to assemble a super-killer-pathogen without realising they are doing it, and it included several steps that are biologically and biochemically impossible (for context, I am a biologist). But you just wait!, the AI will be so smart, it can do things we know are impossible, it will be that smart, believe me, bro!
This is all so deeply embarrassing to watch. Why can't both sides lose this spat?
Very well-written piece, and I finished reading just about when the QOTSA track you suggested (Straight Jacket Fitting) ended. I hate how Altman, Andreessen, et al. turned "AI" from an actual concept into another extension of their ongoing destruction of the world that exists to buy their way into Capitalist Heaven.
The themes in this debacle are very similar to the drama and high emotion in our house around our 5 kids ages 15-20. It’s staggering how reminiscent these Silicon Valley twits are of my kids with their undeveloped brains and raging hormones.
- Deciding someone in the group doesn’t get to join in any more because of something no one else is aware of. Then WW3 kicks off and no one knows why.
- Telling the other kids ‘I don’t want him in the group because he lies’ then refusing to say what the lie is, even though it’s making you look a bit silly.
- Bringing all the kids with you into your new friendship group and posting about it passive aggressively on social media, stirring shit instead of calming it down.
- All your mates getting involved, posting love hearts on your posts, not really sure why though.
- Creating fucking mayhem and drama when open communication would have worked better
- Bringing home a new BFF because you’ve kicked out the popular one, because you don’t know how to say sorry.
Thanks for the analysis Ed. I will definitely subscribe to your Substack next year. I do appreciate your work dissecting these tech sleazebags and their foul cancer-like growth and Accelerationism drivel.
Such an engrossing read that I neglected to get off at the right station on my way to work.
This was good. A lot of the Big Tech critics are all "look what they did to my boy" all of a sudden.
Terrific reporting. William Gibson couldn’t make this shit up. I’m guessing that the Accelerationists believe the foretold Jesus AI will save them (as in just them, secure in their gated communities) from any financial consequences. AI has become the singularity end-point of the cult of Rot economics. Increasing technical complexity faces diminishing returns. I say bring it on now. The sooner we pop the bubble the better.
Where is the Harry Potter and the Methods of Rationality connection here? There has to be one. There always is; I swear to god it's the case whenever it comes to AI bullshit in the Valley. The board is using all of the talking points and rhetoric characteristic of LessWrong posters, so they have to have some connection.
That’s still a simpler ownership diagram than your average crypto exchange, by a long shot. Imagine if FTX had involved *only* half a dozen or so corporate entities!
Ed, thank you for this. I'm not part of the tech industry at all, just a concerned human, and this very clearly laid out the situation and the positions held.
I hope that someone is savvy enough to create a bot that can determine when ChatGPT has visited an asset (a newspaper website, a university’s white papers, the UN census website, etc.), similar to a Google crawler, log when it scraped the data, and automatically begin a lawsuit for theft of IP/breach of copyright.
That’s an AI I can get behind.
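The detection half of the idea above is already feasible, since OpenAI's crawler announces itself with the published "GPTBot" user-agent token. A minimal sketch, assuming standard combined-format web server access logs (the sample log lines below are hypothetical, and spotting a crawl of course proves scraping, not what was done with the data, let alone grounds for a lawsuit):

```python
import re

# Hypothetical sample lines in combined log format; a real site operator
# would read these from the web server's access log file.
LOG_LINES = [
    '66.249.66.1 - - [18/Nov/2023:10:00:00 +0000] "GET /article HTTP/1.1" 200 1234 "-" "Googlebot/2.1"',
    '20.15.240.64 - - [18/Nov/2023:10:00:05 +0000] "GET /whitepaper.pdf HTTP/1.1" 200 99999 "-" "GPTBot/1.0"',
    '203.0.113.7 - - [18/Nov/2023:10:00:09 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

# In combined log format the user-agent is the last quoted field on the line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def gptbot_hits(lines):
    """Return (path, user-agent) pairs for requests whose UA contains 'GPTBot'."""
    hits = []
    for line in lines:
        ua_match = UA_PATTERN.search(line)
        if ua_match and "GPTBot" in ua_match.group(1):
            # The first quoted field is the request line: METHOD PATH VERSION.
            path = line.split('"')[1].split()[1]
            hits.append((path, ua_match.group(1)))
    return hits

print(gptbot_hits(LOG_LINES))  # → [('/whitepaper.pdf', 'GPTBot/1.0')]
```

Logging the hits is the easy part; a crawler that chooses not to identify itself would need IP-range matching instead, and the "automatically begins a lawsuit" step remains, alas, an exercise for the lawyers.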
Large language models are simple plagiarism machines, and it’s about time we called them what they are, instead of the fantastical reporting about them being Wintermute-level AIs.
The sooner Altman and the others are exposed as frauds, like SBF, the better.
One of the few great contributions a monster like Rupert Murdoch ever made was holding Google and Facebook to account for the theft of his IP. Hopefully someone is able to step up in the AI generation and do the same.
I feel like Ed has an inborn hostility to AI that colors his opinions. I usually love his thoughts, but somehow when he talks about AI... I dunno, he seems like a crank to me. I suspect it's because he's an ally of the creator economy, and AI is pretty disruptive to it.
This was long but just as satisfying as any Thanksgiving meal.
Who am I supposed to be rootin’ for here? 🤷🏼♂️
I'm a little confused near the end. How was OpenAI's nonprofit board proven to be beholden to corporate interests? I thought firing Sam would imply the opposite.