Want to listen to this interview instead? Download the latest episode of Better Offline! You can listen to it on Apple Podcasts, Spotify, or anywhere else you can insert an RSS feed.
Last week, I had the privilege of interviewing Daron Acemoglu — one of the world’s most important and influential economists, and someone who has written extensively about the rise and impact of AI, and the malign influence of the technology industry on our lives, our governments, and our societies.
It was, to put it simply, an eye-opening and fascinating experience, and one I’m immensely grateful to have had. Our conversation spanned a variety of topics — from the short-termism of the managerial class, to the impact of generative AI, to the societal cost of a technology industry that’s too powerful, and totally unconstrained.
The following conversation has been tweaked a bit for readability, but is otherwise a faithful transcript of our conversation.
A term that you've popularized, though not necessarily invented, is creative destruction. Do you mind explaining it for the listeners [Editor's note: you are reading this]?
Yeah, I definitely did not invent it and I think many other people deserve much more credit for inventing it and making it work. It's an idea that goes back to Joseph Schumpeter, a famous Austrian economist who spent most of his career — or the most important part of his career — at Harvard. He emphasized that in capitalist growth, you will have new firms taking market share away from and destroying old firms, and as a corollary of that, new technologies taking market share away and driving out old technologies. He understood this was a difficult and tumultuous process, but believed that that was the essence of capitalist growth.
It's one of these things that is a fact of life in a market process, but different types of social, economic, and political reactions to it are natural, and how you react to it is going to have various effects on growth, on what type of growth, and on its distributional effects.
Right, so something I've written about and spoken about a lot as well is this idea of the rot economy, which is the kind of growth that's overtaken most modern markets. And I'd argue that, at least in the creative destruction sense, tech has stopped really innovating. It doesn't feel like they're creating things to create new jobs, to create new markets. I'm wondering how you feel looking at the general tech industry.
Well, I am a critic of the tech industry and I have become so over the last decade or so. And my problem with the tech industry is not its dynamism. I applaud that. It's not risk-taking. I applaud that. And it's not the drive towards economic growth, which I think is also generally desirable.
But [my problem] is the direction of research and technologies that the tech industry has focused on, both because of ideological reasons and because of a particular business model that they developed. And I think both of those have pushed us towards technologies that I see as socially less desirable, in some cases actually undesirable. And as a result, we're actually getting growth without as much social benefit.
And let me try to just make one very simple point. Economic output, as measured by statistical agencies in figures such as Gross Domestic Product, does not have any welfare element in it. So if I find a way of hacking into your computer by spending $1,000, and you find a way of defending against me by spending $2,000, that will increase GDP by $3,000. I think even the most demented person wouldn't say that that's a social [good].
Yeah, I think you made a point like this to Goldman Sachs, where [paraphrasing] you could make a trillion dollars if you did deepfakes in a certain manner.
Exactly. So new products that increase GDP may have socially undesirable consequences. That wasn't part of the original Schumpeter point, and it's not something I would worry about when I'm talking to people in Mexico who are trying to get the economy going, but it is a very important concern when it comes to new tech.
When do you think this shift happened?
Well, I think it was probably a gradual process, but the tech sector initially was very heavy on hardware, with some software elements. And when that started changing and the entire field became software, I think the possibilities for different types of technologies to go in very different social directions also multiplied. So money today — NVIDIA being an exception — is not made on hardware, and even at NVIDIA, I think a lot of the innovation is in software.
Yeah, particularly with CUDA being able to do stuff with GPUs.
Exactly. But when you are also doing software, you have ways in which that software becomes an information control tool, a monitoring tool, or a surveillance tool. It becomes a way of automating work in various different ways. It can become a manipulative tool and it can also create lots of new products, some of them very beneficial, but some of them very addictive and conducive to mental health problems. So I think software sort of expands the capabilities, but together with the capabilities, you also have expanded the set of distortionary or manipulative things that you can [do].
Taking a step back, you mentioned the dynamism of how tech is working, and the growth. I don't know if I agree that tech is in a dynamic state. It almost feels like it's been spinning its wheels for the last few years. Crypto, the metaverse, all of this stuff. It doesn't feel like new things are happening.
Well, they are new things. It's just that I think you are saying what I just said in a different way, and it might be that your way is better. They're not super socially valued, but they are new products. So you would say that the metaverse or virtual worlds are a new product by any category. It's not something that's going to make humanity better. In fact, they might make humanity worse by alienating people more or isolating them more from their social milieu.
And there is another element of what's going on here, which I'll comment on, but again, it doesn't contradict that they are generating new products. A lot of new technologies and new ideas that are being invented in tech are not being implemented. And part of the reason for that is the competitive environment. Google, Facebook, Microsoft, Amazon are all buying up a lot of competitors and sometimes not even using their technology. So that's the consolidated structure.
So, the invention is there, but the invention is not translating into implementation. Now, don't get me wrong, some of that invention may also go in the wrong way. So a better version of TikTok may not be a great thing either, but there is that consolidated, concentrated market structure that is also changing what gets implemented.
I feel the thing I'll push back on is: they're making new things, but it feels like the ones I mentioned — crypto, the metaverse, and now generative AI — aren't actual products in the end. It's not so much that we couldn't live in a virtual world or that digital money wouldn't be useful, it's more that the actual output from the companies is not translating into meaningful products, and yet they're still monetized.
Well, again, this is a question of what we are measuring, whether what we're measuring is the right thing, and whether it's welfare-relevant. If I create a metaverse and you're willing to pay a million dollars for it, that will increase GDP by a million dollars. So that's a new service. A lot of things we consume today are services. It's not something produced like a t-shirt. They are based on digital services.
Now, of course, to produce those digital services, we're actually using real resources such as energy. Some digital services are extremely useful, some of them are useless, and some of them may be bad for [welfare].
I think my point is more that there aren't actually things that people [want or need]. Like, they're not making particularly useful services. They're doing well monetizing the things they've had for years, but it kind of reminds me of something you wrote in 2019, where you were talking about automation and its effect on growth [and suggested] we may have run out of ideas for automating new high-productivity, labor-intensive tasks. Do you think we're approaching that point?
I would say that the tech sector is not producing sufficient new tasks for workers to use their skills and to expand their capabilities, and firms perhaps are not demanding and implementing enough of them. But it is not, according to me, because we're running out of possibilities. It's just that we haven't focused on those. And that's where the reference to ideology I made earlier comes from.
You know, I think the software industry would have done somewhat more productive things if it did not become too focused on replacing humans — on having machines as humans' overlords — which today has of course reached its apex with the craze over AGI.
Here's the thing: I get that that might be what they're pushing toward, but generative AI isn't even automation at this point. A lot of what you've written about AI is about the effects if we automate these tasks, but it feels like they aren't even successfully automating anything.
Absolutely. My take is that generative AI is actually an informational tool. So you should use generative AI as a way of generating, filtering, summarizing, finding, checking information. That's actually what it's good at.
If you try to use it for other purposes, sometimes you can get away with it, but it won't be very good at that. So you can try to automate a lot of warehouse tasks today by using the current crop of robots. People don't do that because they are not good at it. If you did it, costs would go up, people would lose their jobs, delays would pile up — but you can do it. It's the same thing with generative AI. Even though it's not an automation tool, automation wouldn't be its best use, especially given the current unreliability.
I think many people are going to use it for automation because that's what companies are being told to do. You know, if you talk to business leaders today, everybody's asking them — financial journalists, their shareholders, and their friends — “where are you with the AI investment?” So that's the hype. And then people are going to rush in to use AI, to implement AI, even when they don't know what to do with it. And automation will often appeal to them because it's the easiest thing to do.
It's the thing they may have experienced with other technologies, and it's the thing that some people are telling them they should do. There are companies, integrators, and websites devoted to automation [with] AI.
Even though it doesn't really automate anything.
Even though it wouldn't be very good at it. I mean, again, it could automate some tasks. You could have more of your customer service done without people.
But even then, that feels like a stretch of what automation means, because — using the customer service example, and I think you may have raised this point as well — how does it get better? How do you measure “better” in the case of customer service? And even then, it's automation only insofar as you can trust it. It feels like these core issues of hallucinations almost kill the concept of automation with generative [AI].
Yeah, exactly. So here is a good use case for automation of customer service, which is you call your bank and you enter some password and they tell you your balance. That's perfect. You don't need a person there to tell you the balance because the current technologies can faithfully take those numbers and communicate them to you after the right security steps.
Yeah, and it's not a generative answer because it's a number in a database.
It's not a generative answer. So now, put generative AI in there and you're probably going to get lots of incorrect answers. But some companies might still do it.
It just feels like a crazy time, that you have companies shoving this through. Almost like the... It's very much like a post-Jack Welch situation where just...
Yes, absolutely. Yeah, it's exactly the Jack Welch mindset. We have to cut labor costs and machines are superior to humans.
Do you think a part of the problem is that the people running these companies aren't really technologists?
I don't know. Look, I think this is another branch of my work, but US businesses are often led by people who have been trained into thinking their only priority should be increasing short-term shareholder [value] and a very effective way of doing that is to cut labor costs. But that's not the right social objective. Even maximizing long-term shareholder value is not the right objective.
Even more fundamentally, cutting labor costs in the short run may be an illusion and be associated with longer-term problems. So if you have a company where your workers are skilled and talented, and they are very useful for liaising with customers, creating new services and products, and innovating, you can — in the short run — cut your labor costs, but it would destroy you in the long run. I think many more companies are in this bucket than American business leaders realize.
It's funny you mention that. I get emails from Google people all the time and they all talk about the kind of brain drain of layoffs and how it's not just the output you're losing — it's the person who knew how the stuff worked and where the stuff was and who built the stuff and why the stuff is good or bad. It almost feels like American capitalism is dramatically disconnected from labor.
Yeah, absolutely. So look, there is a tremendous amount of tacit knowledge that workers have, which often goes unrecognized — even bosses sometimes don't recognize it. Both French and British trade unions have, in their history, experimented with these types of strikes where workers just follow the rules. They do exactly what the rule book says their responsibilities are. And it turns out to be quite disastrous for the companies, because most of what workers actually do is much more adaptive than just following the rules.
Right, like kind of outsourcing risk almost.
Yeah, it's just like, you know, the rule book says to operate the machinery, but you know when to actually operate the machinery, not just how to operate it without [thinking]. So that's the kind of tacit knowledge that people acquire via training, via experience, via their social network, talking to friends. And if we don't value that, we'll lose it, and it's going to be very difficult to replace it with machines or information technologies.
Do you think we're in a bubble right now?
Define a bubble.
Actually, let me reframe the question. Do you think generative AI is a trillion dollar industry? Do you actually think it is the next hypergrowth market?
I believe that generative AI has the capacity to add a trillion dollars or more over time if we use it correctly. Because as an information technology, it has great capabilities. We live in an age in which useful information is scarce. All sorts of junk you don't want is on the internet, but when you actually need to solve a problem — get better at what you're doing, get more background information — those things are very difficult to find. And generative AI could be a tool for providing that sort of information to all sorts of decision makers and workers. But that's not the direction we're going, in which case, I don't think it's going to add trillions of dollars of value.
But that also doesn't mean that generative AI companies are going to go bust because they're going to be able to monetize this in other ways. So if generative AI enables you to take over the search market from Google, that's a huge amount of money. It may take over the search market from Google without providing much better service to consumers, but it might still be hugely profitable. If generative AI companies convince businesses to invest in generative AI, that's going to be very profitable for them, but not so good for the businesses that misimplement it.
So the thing is — and I understand why you're making these assumptions — what if it doesn't get cheaper? Because right now, the thing I've been on about with generative AI is that on top of not being super useful, it's so unprofitable. And it feels like every report suggests it isn't making people money. What if it stays where it is? Because right now it isn't getting meaningfully better — in the last 18 months, GPT-4o has not become significantly different. What if they've stalled? What if this is all we've got?
Yeah, my guess is that it will get somewhat cheaper because right now it's very costly to even answer queries. And with more GPU capacity, it will get somewhat cheaper. With better designs, it will get somewhat cheaper. But I do not believe that there is a mysterious scaling law, which is that you double the GPU capacity or compute capacity, you double the data, and you get twice the performance.
More just an aberration of Moore's law by people who don't necessarily understand…
Yeah. But first of all, what does it mean to say “double the data”? We're going to throw more Reddit at it? So even if there were such a scaling law, you would require high-quality data, which we're not producing and we're not paying for.
What happens if tech doesn't have a next step? Do you think that one of these companies — this is a bit of a big one — could die? Do you think there is actually an existential risk if generative AI falls apart?
No, I don't think so. None of these companies are committed to generative AI. They have other businesses that are making money. And even Nvidia can still make a lot of money [from other customers].
Let me rephrase it then. Right now, all of these tech companies do very well, and their multiples in the markets are because they have a relatively low cost of goods and their margins are pretty great. But they're predicated on this ongoing growth. They must always grow. But what happens if they don't have a new growth thing? Because they haven't for a while. And what if they turn on generative AI?
Mm-hmm.
This feels like this could be an economic panic unto itself.
Yeah, it could be. It could be. There could be some drops in valuation. The general pattern we have seen with many other products and technologies is that it looks a little bit like an S-curve. You have an acceleration and then you plateau, and that's when new products are invented and investors move on to other things. And that hasn't happened with tech. You know, Microsoft is living its fourth life or whatever since MS-DOS, partly because they have acquired new businesses, some competitors, some competing technologies. And sometimes tech companies have invested in the wrong things. I mean, cryptocurrency was crazier than AI. There, I really didn't see the use case.
Yeah. It's just, the question I keep asking — and I've asked a lot of people this — is just what happens if there's nothing though? Because growth is slowing. There is a pattern of slowing growth within these companies. And there isn't a new thing that they can pick up and acquire. I don't know whether tech has ever had this happen, is the problem.
Yeah, it's a good point, but it's even deeper than that. Growth has slowed in the industrialized world, and it's not a new phenomenon. This is one of those paradoxes that needs to be repeated more and more: the tech age has coincided with a slowdown of aggregate growth and every indicator of aggregate growth. So we are growing much less today than we did in the '70s or '60s. Productivity is growing less. And I think this is also related to the fact that we're not getting enough out of the new technologies, the new ideas, and the new scientific discoveries that we are making. And part of the reason why there is so much hunger for AI hype is that many people, including policymakers, are wishfully thinking, “well, this could be a solution to our productivity slowdown. So perhaps in the next decade, we can have much faster productivity growth thanks to generative AI, or thanks to AI.”
It's almost like history is kind of slowing down. I've not really heard anyone discuss it in these terms, but it's interesting. So you've seen that this growth-at-all-costs mindset is everywhere, and growth is slowing. But it sounds like growth isn't just a money thing.
No, no, growth is not just about money, and I think if you look at other indicators, we're doing worse. One of the regularities of the 20th century across the world is that health and life expectancy improved everywhere. Today, people in Sub-Saharan Africa have twice the life expectancy at birth of people who lived in London or Manchester in the 1800s. And Americans had tremendous improvements in life expectancy and health until the last decade, when it slowed and started to reverse. So on many indicators, we're actually doing even worse than GDP suggests.
So what's contributing to it? Is it a welfare issue? Is it a societal one?
Well, I don't think there is a clear answer. Some people think the life expectancy part is because of early deaths due to alcoholism, opioids, and drugs. But there is a more general deterioration in mental health. There's a mental health crisis. So if you look at the health of surviving people, it's much worse if you factor in that mental health issue.
I wonder if this is also where tech falls into it, along with the exposure to social media. I've had this overall feeling — which is one of my flimsier theories — that I don't think people should be thinking about politics as much as they do. I'm not saying people shouldn't be political, but the immediacy of political discussion has been erosive to people's mental health.
Well, I'll give you two factoids that might support your idea, although I'm not sure whether I completely agree with it. One is that if you look at when the mental health crisis seems to start, it coincides with smartphones. So people accessing social media and other things on their smartphones 24 hours a day might have something to do with it.
Another one: two economists, Hunt Allcott and Matthew Gentzkow, did this experiment where they incentivized Facebook users to stop using the platform. When people stopped using the platform, they got happier and their mental health improved, but they could answer questions about what's going on in current politics much less well. So their immediate, superficial knowledge about what's going on in politics also declined.
Interesting. Yeah, it does feel like there is a wider discussion — eh, discussion, perhaps, is the wrong word — within the tech industry that there is almost no consideration of the social aspects or the welfare aspects of any technology being built. Take the metaverse, for example. As ridiculous as that was, I can understand an executive being like, “yeah, we use the internet now, what if we used more internet?” But there was just no consideration as to whether people wanted to. It feels like there's just a disconnection between capitalism and... people.
I mean, I think tech is much more complicated. Oftentimes it's multi-use, so something that may appear to have good uses also has bad uses. But I do think that tech workers also need to own up to greater social responsibility. If you are a physicist — a nuclear physicist — today, it's unthinkable that you do not have some knowledge of, as well as training in, the social responsibilities related to nuclear weapons.
The same degree of thinking about ethical implications — social implications, what happens if I unleash this on humanity — doesn't quite exist to the same extent in the tech industry. And I think it's going to develop. There are many people who are very socially minded in the tech sector, but I think we may need something more systemic.
On a better note, I suppose: what can we do to reverse this trend of disconnection? Is it regulation? Is it better safety culture?
Well, all of the above, but here is a problem I have with both regulation and the discussion that we have about regulation. It is very reactive. Something happens and we react to it by thinking of how we can regulate so that we reduce harm. But the problem, as I try to articulate, including in the earlier parts of this conversation, is about what types of technologies we are developing and where we are putting our efforts.
Ex-post regulation that's reactive is not going to achieve that. So I think we need a new tech culture, as well as societal norms and priorities that say there is an alternative for technology — especially for AI — that is technically feasible and socially desirable. Articulate what this is. Let's have a conversation about how we can get there. What can we do to encourage researchers, engineers, and businesses to actually go in that direction? What does the government need to do? What does civil society need to do? What does the media need to do?
By the way, I think the media is a big part of the problem. The media often increases the appeal of the tech industry. It paints a picture of tech leaders as these geniuses who are revolutionizing things, it personalizes their power, and it makes it harder for the public to hold the tech sector accountable.
Also, in the AI field, I think the media is part of the reason why there is so much hype. Many of the leading publications, such as The Economist or The New York Times, print something every week about how AI will solve this problem or that problem. AI is going to revolutionize this. AI will solve that. AI hasn't solved anything yet.
And it's always “will solve it,” not “is solving it.” Yeah. And that's, I mean, part of the reason the show exists, because I do blame a lot of this on the growth-at-all-costs economy. But it's almost like there is no long-termism anymore in a lot of the tech economy. It's all “this will happen, just trust us, and give us as much time and money as possible, but we're not going to invest in R&D.” It's just bizarre.
Well, look, let's also think about the world at large. There are six billion people who live outside of Europe, the US, Canada, and China. That includes the weakest, the poorest people in the world. How can we improve their lives? Nothing we're talking about here is going to be helped by AI.
Yeah, actually this leads me to a question: what did you think of cryptocurrency? I wish I'd had this podcast at the time so I could have asked you about it.
Well, as I said, I see the positives of generative AI. I think it's actually a promising technology. I do not see any positive to cryptocurrency. I never did. When I first read the Bitcoin manifesto, it was interesting. It was thought-provoking. But two days later, I was inoculated against it.
Yeah, it's... well, you kind of remembered real money exists at that point.
And that actually, you know, we cannot trust the government, yes. We cannot trust politicians, yes. But as long as we keep politicians and the government under some sort of check with true democratic means, you know, the money is not the most important problem. So that's not the biggest issue that we have to worry about.
So, a wrap-up question — I really appreciate your time. Are you optimistic about the future of the tech industry?
No. I am not a techno-optimist and I'm not a market-optimist — meaning that if I define optimism as “things are going to work out, there is an arc of progress,” I am not an optimist. I think we have serious problems with the tech industry, serious problems with the market process in the United States right now, and with social processes. But I'm hopeful. I believe that there is a direction in which we could use technology that would make things better, and there is a way in which we could introduce better regulation, better worker organizations, better training that would make the market system work better. But that's the hope — that we could achieve that if we did the right things. And I don't think that's where we are heading, left to our own devices.
So where does it head if we keep going in the direction we're actually heading?
I prefer not to answer that question.
That's a perfectly fine way to end it. Daron, thank you so much for joining me today.