This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Well, Casey, have you heard the exciting news this week?
Which news, Kevin?
The Golden Globes are adding a podcast category.
I did not hear that.
Yeah, that just came out. So yet another award we’re not going to win.
Well, I don’t know about that because if I know one thing about the Golden Globes, it’s that until very recently, it seems like, you could just bribe them directly to win.
[CHUCKLES]:
I don’t know if that’s still true, but we should look into it.
Yeah, what does it cost to win a Golden Globe these days?
I don’t know, a few hundred dollars?
[CHUCKLES]: Check’s in the mail.
Wait, unless there are tariffs! Topical.
[LAUGHS]: OK, now we’re definitely not winning.
Now we’re not winning? Because I accused them of corruption?
Yeah.
Oh, listen, we speak the truth on this podcast.
We do.
I don’t care what it costs me. [MUSIC PLAYING]
I’m Kevin Roose, a tech columnist at “The New York Times.”
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, the scathing court ruling that forced Apple to give up some control over its App Store and could send an executive to jail. Then author Karen Hao joins us to discuss her new book on the history of OpenAI and the hidden costs of reaching massive scale. And finalini, it’s time for me to teach Kevin about the joys of Italian brainrot.
Mamma Mia.
[MUSIC PLAYING]
Casey, have you noticed a smell in the air over San Francisco this week?
Many smells in the air, Kevin.
Well, the smell I’m talking about, Casey, is the smell of freedom.
Mm.
Because in the last week, Apple has lost its iron grip on the iOS App Store thanks to a ruling by a judge.
Commerce is legal in America again, Kevin.
Yes. So we’re going to talk about this today. Apple has been forced to make some big changes to its App Store by a lawsuit that was brought by Epic Games, the maker of Fortnite. A judge ruled last week that Apple had not complied with an earlier injunction, and we will get into all of that.
But first, I just want you to make the case that this matters to normal people. Why should the average person with an iPhone care about what Apple’s rules for its App Store are?
Well, to me, it actually starts with the Kindle app, Kevin. Lots of people love to read on their phones and tablets, and I think most people I know in my life have had the experience of opening up the Kindle app or the Amazon app thinking, I want to buy that book. And then there’s just a big blank spot where you’re expecting to see the Buy button.
And Apple is the reason for that blank spot. They charge such a high commission on ebooks that Amazon and other companies cannot profitably sell them. And so since the dawn of the App Store, buying a book on your phone, something that should be very easy, has required you to open up a browser, log in to an Amazon account, and navigate that whole system.
And Amazon is not alone. Many, many developers have had to go through similar contortions just to be able to sell their products and still make any kind of profit.
Yeah, this is the so-called Apple tax of up to 30 percent that developers have to pay when they want to charge for apps or purchases within their apps. And for many years, Apple has not only levied this tax, but they have also made it impossible for those developers to direct users off of Apple’s platforms to say, hey, if you want a better deal on this Spotify subscription or this Netflix subscription, or this purchase of an iPhone game, you can actually go on the web and get a better deal there because there we don’t have to pay Apple’s 30 percent fee.
That has not been allowed.
And so Epic Games, which makes Fortnite, brought a lawsuit years ago to try to get those policies changed. And in 2021, a judge in California named Yvonne Gonzalez Rogers ruled that Apple had violated California’s law against unfair competition. She ordered Apple to allow apps to provide users with links to pay developers directly for their services, so that they could avoid paying Apple’s 30 percent commission. And after that ruling, Apple did go and make some changes, but apparently, they didn’t do a good enough job.
No. And I would say this has been apparent to most people who’ve been following this. I think we’ve talked about this on the show. Apple did what is often called malicious compliance, doing the absolute least while kicking and screaming the whole time.
Yeah. So we’re going to talk about some of that malicious compliance. But let’s just say it straight up: this was a scathing opinion. I have rarely read an opinion from a judge who is so obviously angry at a tech company for what it did.
No, this was the kind of speech that you typically only see on a Bravo reality show.
Yes. So Judge Gonzalez Rogers not only accused Apple of doing this kind of malicious compliance, but she also accused them of outright lying to the court under oath. She referred both Apple and its vice president of finance, Alex Roman, to federal prosecutors for a potential criminal contempt investigation. And we should just read the last paragraph of the order from Judge Gonzalez Rogers, which is truly the mic drop moment.
She writes, quote, “Apple willfully chose not to comply with this court’s injunction. It did so with the express intent to create new anti-competitive barriers, which would, by design and effect, maintain a valued revenue stream, a revenue stream previously found to be anti-competitive. That it thought this court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made it worse. For this court, there is no second bite at the Apple.”
Period. But you know what? It kind of was a second bite at the Apple, because she bit them the first time and then they didn’t do it, so she had to bite them again.
Yes. So let’s just talk for a second about some of the details that were revealed in this judge’s opinion that have come out about how Apple tried to skirt compliance with this earlier 2021 injunction.
Yeah, well, this was well known to all of the developers. If you wanted to use an external sales system in the App Store, you still had to pay Apple a commission. And that commission was 27 percent, just 3 percent less than the standard 30 percent rate. And of course, these companies also have to pay their payment provider. So basically, Apple created a system where you were actively disadvantaged in multiple ways when trying to operate outside of the App Store.
Yes. So I knew that Apple was charging a commission for apps that send people elsewhere. Like, if you’re Spotify and you want people to be able to subscribe on the web, pay a lower price, and pay you directly rather than going through Apple, you could do that under Apple’s revised rules. But Apple would actually charge you a 27 percent commission, which, by the time you added credit card fees on top of it, would probably be more than the 30 percent they would charge you anyway. So this was clearly a case of Apple trying to say, well, go ahead and use this other system, but it’s not actually going to save you any money.
No.
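To make that arithmetic concrete, here is a minimal sketch of why the 27 percent route saved developers nothing. The roughly 3 percent payment-processing fee is an assumption for illustration; actual processor rates vary.

```python
price = 10.00  # a $10 in-app purchase

# Route 1: Apple's own in-app purchase system, 30 percent commission.
in_app_net = price * (1 - 0.30)

# Route 2: Apple's revised external-link rules: 27 percent commission to
# Apple, plus roughly 3 percent to a payment processor (an assumed typical
# card-processing rate; real fees vary by provider).
external_net = price * (1 - 0.27 - 0.03)

print(f"in-app: ${in_app_net:.2f}, external: ${external_net:.2f}")
# in-app: $7.00, external: $7.00 -- the developer saves nothing
```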
And what I did not realize until I read Judge Gonzalez Rogers’ opinion here was that Apple would not just collect those commissions if you went directly from an iOS app to the web to buy a subscription or a service. Even if you completed the purchase up to a week later, they would be able to track that you had gone to the web from the iOS app, and they would still charge the developer that commission.
Yeah, it was absolutely outrageous.
It was insane. And it was also not the only thing that Apple did to try to dissuade iOS users from going to external links to buy goods and services outside of their payment system, Casey. What is a scare screen, and how did Apple use this?
The scare screen was a pop-up that you would see when a user did actually try to click out of the App Store to make a purchase using an external system. And while these were not the exact words, Kevin, here was the vibe. Hey, loser. Looks like you’re trying to do something stupid. You’re probably going to die. Do you want to try it anyway? And believe it or not, Kevin, when people saw a message that had that vibe, most of them just chose not to click it.
Yeah. And what was so amazing about this was that Apple, I guess, had tried to protect some of its private company communications from being seen by the judge in this case by claiming some sort of attorney-client privilege. But the judge said, no, no, no. Out with it. Let’s see those emails. And so we have, in this opinion, lots of emails between Apple executives, including Tim Cook, the CEO, talking about the very specific language to put on this scare screen and how to make it even scarier so that users would be less inclined to go outside of Apple’s ecosystem and make a purchase.
Yes, and these internal documents showed that the company would lose minimal revenue, or no revenue at all, from allowing these links. They built a system that was maximally designed to protect their own revenue, which ran contrary to the judge’s order, an order she wrote in the spirit of increasing competition and other companies’ revenue.
Yeah. So to put it mildly, Judge Gonzalez Rogers did not find any of this charming in the least, and she also directly accused at least one Apple executive of outright lying under oath about what the company had done. Casey, explain the perjury accusation here.
Yeah, so this accusation was leveled against Alex Roman, the vice president of finance at Apple. And among other things, the judge focuses on this moment where he testifies that until January 16, 2024, which is when Apple’s revised system went into effect, Apple had no idea what fee it would impose on purchases that linked out of the App Store. He testified that the decision to impose a 27 percent fee was made that day, which is just so obviously untrue. During the legal proceedings, business documents revealed that the main components of the plan were determined in July of 2023. So basically, this guy got caught red-handed, and the judge wants him punished for it.
Yeah. And so effective immediately, according to Judge Gonzalez Rogers’ order, Apple has to drop these commissions, these 27 percent fees, on these external links. And Apple, as of last week, had officially updated its App Store guidelines to allow those links out of the app in the US. But Casey, what are the implications of this, and how are other developers that put stuff on iPhones reacting?
So developers are reacting by implementing the links that they’ve always wanted to have. In the Kindle app, for example, now you will see a Get Book button. You’ll tap it and it’ll kick you out immediately into a browser where you can complete a purchase. Spotify and Patreon are also doing something like this. This is not a perfect solution. You can’t actually just buy a book inside the Kindle app yet, for reasons that actually aren’t entirely clear to me. Maybe we’ll get there.
But on the whole, we are essentially removing the restrictions that prevented outside businesses from communicating with their customers, telling them about deals, telling them about their websites. These very onerous restrictions on the speech of these other companies have been wiped out.
Yes. And I think that gets to why these arcane and somewhat small-seeming changes to the rules governing Apple’s App Store really are important. Apple has been, for many years, this sort of godlike gatekeeper over any company that wants to make things for the billion-plus iPhones out there. They have made extremely strict and specific rules about how developers can and can’t build their apps and sell products and services to customers. They have effectively been a landlord over the entire digital services economy. And I think, judging from this opinion, they have really abused that power, and now they are getting slapped on the wrist for it.
Yeah, and I think it has been to their own detriment, Kevin. Apple’s view is that these developers should feel lucky that they get to sell in the App Store at all, when in reality, a big reason that we buy iPhones is because of the apps that are there. If you took the Amazon app and the Spotify app and the Patreon app and all these other apps off of the iPhone, people would start considering alternatives. Right? And so I think that the balance between developers and Apple had just gotten completely skewed. And Apple has not been recognizing the value of what those developers are bringing to iOS.
Yeah. So you think this ruling is a good thing.
I think it is absolutely a good thing. I think it has been long overdue, and I hope it is upheld after Apple appeals, which it is going to do. But what do you think?
Yeah, I mean, I think it’s an open question. So Apple’s defense of these App Store rules has always been some version of, we’re protecting our customers. If we let people sideload apps onto the iPhone in a way other than through the App Store, people will put all kinds of dangerous malware and stuff on the iPhone, and you’ll be sorry. If we let people pay for things on external websites, then people will run all kinds of scams, and people will be taken advantage of. And so by implementing these rules, we’re really protecting our customers. It’s for your own benefit, essentially.
And I think it’ll be really interesting to see if when these restrictions are gone, people actually do say, we wish that Apple were taking a more active role here. We want some of these restrictions back. Or if the net result is just going to be that people have more choice and they pay a little less for stuff because the developers making that stuff are not having to pay 30 percent of their revenue to Apple.
Well, I think that’s going to be the case. This whole argument that Apple maintains this pristine, vigilant control over the App Store has always been mostly a fantasy, I think. Think about the early days of ChatGPT, before there was an official app. You would go onto the App Store and search for ChatGPT, and you would see a dozen-plus apps that were all clearly misrepresenting themselves as OpenAI’s, and some of them were among the highest-grossing apps in the entire App Store. Apple could have stepped in to prevent that. They didn’t.
I’ll give you a more recent example. One of the best video games of the year is called Blue Prince, P-r-i-n-c-e. All of the gaming bloggers love it. I’ve been playing it and loving it myself. The day it came out, somebody ripped it off and just uploaded it onto the App Store and was selling it for, I don’t know, $10 or something. Why didn’t Apple catch that? They are not paying the App Store the attention that they tell you they are.
Yeah. I mean, to me, the most interesting part of this, as with a lot of these antitrust trials that are going on right now, was just seeing the internal communications at these companies. And in this ruling, there are all these fascinating excerpts from these emails and messages between Apple executives, talking about the various plans that they had to circumvent this injunction and charge this 27 percent fee. They had all these code names, like Project Michigan or Project Wisconsin, so that they could talk about this stuff in a way that would not be obvious that they were doing some sort of price fixing.
And it just makes you realize these giant tech monopolies did not end up that way by accident. They have had to work very hard for a very long time to prevent competition, to keep their market power and their dominance. And I don’t know, man, there’s just something really depressing about that. Like, these are companies that used to succeed by making good things that people loved. And in some respects, they still do that. But they also spend just a ton of time — their top executives are in these meetings talking about whether the fees should be 27 percent or some other number. And it just makes you realize they have really lost the plot here.
Absolutely. Well, let me try to cheer you up a little bit then, Kevin, because I think there actually is a negative consequence for these folks of growing their profits so big on the basis of this extremely easy money, where they just make every developer pay this very high rent to them. And that is that Apple has been missing the boat on next-generation technologies. We know that they invested billions of dollars into a car project that they could never figure out and had to abandon. Right? We know that they are struggling to figure out how to do anything with AI, and have had to walk back a bunch of claims recently in a really embarrassing way.
We know that the Vision Pro, their most recent hardware initiative, is not taking off, in part because developers do not want to make apps for it because they have not been able to get rich making apps for it. Right? So all of this stuff is just adding up in a way where Apple’s decisions really are coming back to haunt it. And while it remains a giant, and I’m sure will for a very long time, we are starting to see some little cracks in its armor.
Yes. And yet, Apple just reported its earnings for the last quarter. It made $95.4 billion in revenue, up 5 percent year over year. So despite the fact that they are missing all of these new innovations and trends, that they’re late on generative AI, that they haven’t succeeded with the Vision Pro in the way that they had hoped, they are still doing quite well as a company. So I don’t know that this is actually coming back to bite them in the way that we might hope it would.
Well, I mean, let’s see what happens. The idea behind these rules was never to make Apple a tiny company that was struggling to get by. It was just to get them to share a very small portion of the wealth with a large number of developers.
Like, Apple has done a ton of incredible, innovative things. They deserve to be rewarded for that. They deserve to take some sort of commission from the apps in the App Store, right? But this has been about trying to create a more level playing field for other developers out there. And if the end result of this is that Apple is still pretty rich and profitable, I think that will actually make the point that the judge is making, which is that there is no need for Apple to engage in the shenanigans it’s been up to.
Yeah, I think the best outcome possible here is that all the big developers that can afford to build their own payment systems for their apps, or send people to external websites to buy things, do that, and start paying way, way less than 27 percent to process those payments, and that Apple is ultimately forced to improve its own payment system, to maybe reduce its fees, to, in other words, compete. That is what all of this is about: forcing Apple, a company that has not had to compete for the affections of iOS developers in a long time, to finally step up and do something different.
Keep in mind even Microsoft, which was sued for anti-competitive behavior back in the early 2000s, they never said, we want to take a 30 percent cut of every software program sold on Windows. They actually left a lot of money on the table, and it helped that ecosystem to thrive. I would like to believe something similar could happen here.
[MUSIC PLAYING]
When we come back, we’ll talk to author Karen Hao about her new book on OpenAI and the costs of building such big models.
[MUSIC PLAYING]
Well, Casey, it’s a day ending in y. So there’s some OpenAI drama making the rounds this week.
Yeah, although I don’t know if this is so much drama as the company is trying to retreat from drama, Kevin.
Yes. So OpenAI announced on Monday of this week that it was no longer trying to get out from under the control of its nonprofit board. That was something that a lot of people, including Elon Musk, had objected to. A lot of former OpenAI employees and others in the AI field had said, hey, wait a minute, you can’t do that. You’ve still got to have this nonprofit board controlling you. And OpenAI, after hearing from some attorneys general that they were not happy about this plan, has retreated. So what is the new plan, Casey? And how is it different from the old plan?
So the old plan was basically, the nonprofit is going to no longer have any control over the for-profit enterprise. It’s going to go be a separate thing. It’s going to invest in various AI-related causes and philanthropies.
Under the new plan, the nonprofit is going to retain control over the for-profit. So basically, the status quo is going to remain in effect, Kevin, except for a couple of key changes. One is that what is now a limited liability company is going to become what they call a public benefit corporation. And a PBC, as they are called, has a responsibility not just to think about shareholders like Microsoft and SoftBank and everybody else who owns a chunk of OpenAI, but also to think about the general public. So that’s one important idea that’s there.
The other big idea is that the nonprofit is currently set to get some unlimited amount of profits if OpenAI does eventually become a trillion-dollar company. That’s not going to be the case anymore. Under this new model, the for-profit is going to give some stake to the nonprofit, but after that, it’s going to be a very normal tech company. Everybody who owns shares, all of the employees, they can get unlimited upside. And the more money that OpenAI makes, the more money that they can make, too.
Right. So these profit caps that OpenAI had previously had in place, where investors like Microsoft were sort of limited to earning some multiple of the amount that they put in, and no more, those caps are now going away.
Yeah, they put on their thinking caps and they said, we’re getting rid of the profit caps.
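For a rough sense of how those old caps worked, here is a minimal illustrative sketch. The 100x multiple is the figure OpenAI publicly stated for its first-round investors; later rounds were negotiated and not all public, and the $1 billion stake here is hypothetical.

```python
investment = 1_000_000_000  # a hypothetical $1 billion investment
cap_multiple = 100          # OpenAI said first-round returns were capped at 100x

max_return = investment * cap_multiple
print(f"Return ceiling under the old cap: ${max_return:,}")
# Return ceiling under the old cap: $100,000,000,000

# Under the old structure, anything earned beyond this ceiling was meant to
# flow to the nonprofit. Under the new PBC plan described above, the ceiling
# goes away and shareholder upside is uncapped.
```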
[CHUCKLES]: Well, it just goes to your point that you’ve been making on this show for years now, which is that OpenAI is a very weird company.
And I have to say, when Sam Altman wrote a letter to employees this week, the first sentence of the letter was, quote, “OpenAI is not a normal company and never will be.” And I felt so seen.
Somebody’s been listening to “Hard Fork.”
And in other OpenAI corporate news, the company announced late Wednesday that its board member Fidji Simo would leave her job as CEO of Instacart to come be the company’s new CEO of Applications, overseeing its business and product divisions.
So we are not going to do a whole segment about the OpenAI corporate conversion story this week.
Because we love you too much. We love our listeners too much.
We would not subject you to that. But we are going to talk about it and many other things related to OpenAI with Karen Hao. Karen Hao is a reporter who has been covering OpenAI and the AI industry for years now, and she has a book that’s coming out later this month called “Empire of AI,” where she writes about Sam Altman and OpenAI and what she calls the dreams and nightmares of this very strange company.
Yeah, and by the way, I think she should already start working on a sequel and call it “The Empire Strikes Back.”
[CHUCKLES]:
Something to think about.
Yes. And this is a very buzzy book. People in Silicon Valley and at the AI companies have been sort of nervously waiting for it. Karen is very unsparing in her descriptions of AI companies and the AI industry. I would not say it is a book that the AI industry will find flattering, but it’s an important conversation to have, and I think it’s already got a lot of people talking.
Absolutely. And before we do that, Kevin, do we have anything we want to disclose?
Well, let me make mine first. My boyfriend works at Anthropic.
[CHUCKLES]: Kevin! You’re coming out? I’m so happy for you.
[CHUCKLES]: No, I work at The New York Times company, which is suing OpenAI and Microsoft for alleged copyright violation.
Interesting. And my boyfriend works at Anthropic.
Yours too?
Yes! Anyways, let’s bring in Karen.
[MUSIC PLAYING]
Karen Hao, welcome to “Hard Fork.”
Thanks so much for having me.
So I imagine your book is sitting there behind you on the shelf. It’s all printed up. It’s ready to go. And then this very week, OpenAI puts out a story saying, hey, maybe we’re going to change our structure around again. Why the heck not? So what’s it like trying to write a self-contained book about a company that just never stops making news?
Tiring.
[CHUCKLES]:
Yeah. But honestly, people have been asking me this question a lot. Like, how do you even write a book on a book timescale? Because usually it’s months on end before it goes to press. And I think sometimes the news is actually a little bit distracting, in that, yes, there are a lot of changes happening, yes, things are evolving really fast, but there are some fundamentals that are kind of ever-present. And so I tried to keep the book focused on the things that don’t change so much.
Yeah, well, and among other things, this book is a history of OpenAI. So maybe let’s go back all the way to the beginning. What was this company like when you started writing about it?
So I started writing about OpenAI in 2019, and I went to the office to embed with them for three days as the first journalist to profile what had just become a newly minted capped-profit company. Right before I started covering it, it was still a nonprofit, and it had this explicit goal that it should be a counterbalance to for-profit companies. And it sort of became clear to me during my time at the company that the idea that this was a bastion of idealism and transparency, and was going to be totally open and share all of its technologies with the world, and not at all be beholden to any kind of commercialization, was already going away.
And there were a lot of early signs of that that I picked up on while I was there. There was a lot of secrecy for a company that purported to be incredibly transparent. And there was a lot of competitiveness, which, to me, suggested that if you’re going to be competitive and you specifically want to reach AGI first, you are going to face some really hard trade-offs with this transparency mission, this open-everything-up-to-the-public mission.
So I’ve talked to some people at OpenAI who have said that they felt quite burned by some of your early coverage of them. Like they were expecting something different than they got. And you write in the book that after you published your story on them, they stopped talking to you for three years. I’m just curious what you think surprised them about your coverage, or if they should have been surprised given some of the questions you were asking.
I think they were surprised because they gave me a lot of access, and they thought that I would adopt a lot of the narrative that they were giving me. And to be honest, I came in without really a lot of expectations. It was actually my first-ever company profile, and I was going in just with an open mind of OK, this company presents itself as this ethical lighthouse, and let’s try to understand a little bit how do they organize themselves and how do they try to achieve the goals that they’ve set out to do.
And I just found that they couldn’t quite articulate what their vision was, what their plan was, what AGI was. And I think the prioritization of the problems that they were saying that they were focusing on just didn’t quite feel right to me. Like, I pointed out to them that there were environmental issues that were starting to become more and more of a concern as AI models were scaling larger and larger.
And Ilya said to me, he was like, yes, of course, that’s a concern, but when we get to AGI, climate change will be solved. And that was just like, OK, that’s kind of like a cop-out card to just be like, well, when we get to the thing that we don’t know how to define, all the problems that we might have created along the way will just magically disappear.
And so that’s when I started being like, I think we need to scrutinize this company more, and just be more cautious about taking all of the things that they say at face value.
Right. I mean, it sort of sounds like a microcosm of the arguments that have taken place for the last few years among the AI safety crowd and the AI ethics crowd, that the AI safety people, they’re worried about existential risk and bioweapons, and malicious use of these systems, and the AI ethics crowd are much more worried about issues like bias and the environmental concerns, and things like that. So I want to make sure I’m characterizing it fairly. You yourself are coming from more of the perspective of the AI ethics crowd in that you think we should be paying more attention to immediate harms of these models rather than trying to avert some future harms.
Yeah. So I would call it the AI accountability crowd. And the reason why I use the term accountability instead of ethics is because I think accountability acknowledges that there’s a huge power dynamic happening here, where the developers of these technologies have an extraordinary amount of power that they’ve accrued and amassed, and are continuing to accrue and amass based on this narrative that they need all of these resources to build so-called AGI. Right? So I definitely come from that perspective.
And I think that if we take seriously the present-day harms of what is happening now, that will help us not get to future harms, because we will be more thoughtful about how we develop AI systems today so that they don’t end up having wildly detrimental effects in the future. And I think this idea that we don’t really know how bad AGI might get, or what the catastrophic scenarios are, is not quite right, in that we already have so much evidence right now of how AI is affecting people in society.
And also, AI is harming people, literally, right now. So we need to address that. We need to document that. We need to change that.
One of the central arguments of your book, and it’s right there in the title, “Empire of AI,” is that OpenAI, and the AI industry in general, has become an empire, and that it has done so by exploiting people and resources around the world for its own benefit. Sketch that argument for us.
Yeah. So if we think about empires of old, the centuries-long history of European colonialism, they effectively went around the world and laid claim to resources that were not their own, but they designed rules that suggested that they suddenly were. They exploited a lot of labor, as in they didn’t pay the laborers, or they paid extremely little for the labor that ultimately helped to fortify the empire. And all of that resource extraction and labor exploitation accrued benefits to the empire. And they did this all under the justification of a civilizing mission: they’re ultimately doing this to bring progress and modernity to the rest of the world.
And we’re literally seeing empires of AI effectively do the same thing. And what I say in the book is, they are not as overtly violent as empires of old. We’ve had 150 years of social mores and progress, so there isn’t that kind of overt violence today. But they are doing the same thing of laying claim to resources that are not their own. That includes the labor of a lot of artists and a lot of writers. That includes all of the data that people have put online, which they’ve just scraped into these internet-scale data sets. That includes exploiting the labor of the people who they contract to help clean and annotate the data that goes into their models.
That also includes labor exploitation in the sense that they are building technologies that are ultimately — OpenAI literally says their definition of AGI is highly autonomous systems that outperform humans at most economically valuable work. That is a labor automation machine. So they’re also exploiting labor in the sense that they’re creating these AI systems that will dramatically make it more difficult for workers to demand rights, and they’re doing it under this civilizing mission where they’re saying, ultimately, this is for the benefit of all of humanity.
But what we’re seeing is that’s not true. When you go far away from Silicon Valley, when you go to places like the Global South, when you go to rural communities, impoverished communities, marginalized communities, they really feel the brunt of this AI development, this extraction, and this exploitation. And they’re not at all receiving any of the supposed benefits of this accelerating AI, quote unquote, “progress.”
Let’s talk about some of that extraction of natural resources. This is one of the things that your book gets into that I think doesn’t get discussed a lot in the context of AI. Tell us about some of your reporting and what you saw.
Yeah, so I ended up spending a lot of time in Latin America, and also in Arizona, to understand the sheer amount of computational infrastructure that is now being built to support the generative AI paradigm and the quest for AGI. These are massive data centers and supercomputers that are being plopped into communities that initially accept this kind of infrastructure, either because they don’t know about it, because companies enter these communities through shell companies and aren’t transparent about actually putting this infrastructure there, or because they’re sort of persuaded into it, because there seems to be a really positive economic case, where a company comes in and says, we’re going to give you hundreds of millions of dollars to build this data center here, and it’s going to create a bunch of jobs.
And what they don’t say is that the jobs are not permanent. They’re talking about construction jobs. And once the construction jobs are over, there’s actually not that many jobs for running the data center.
And these data centers, they consume an enormous amount of power and they consume an enormous amount of water, because they need to be cooled when they’re training these models 24/7. And this infrastructure is permanent. So once it gets put there, even if a city doesn’t have that kind of energy anymore or the water to provide to these data centers, they can’t really roll it back.
And in Chile, I was with activists who had been fighting tooth and nail to try to stop these data centers from literally taking all of their drinking water. Companies were also entering communities in Uruguay, where I was spending time as well, during a drought, when people were literally drinking bottled water if they could afford it, or drinking contaminated water if they could not, because there was not enough fresh drinking water to go around. And that was when Google decided to build a data center there.
So that’s kind of, when I say that the current AI development paradigm is creating a lot of harms at a mass scale, that’s the kind of stuff that I’m referring to.
Yeah. I mean, part of empire building is about exerting political power, right? I’m curious why the governments in Chile and Uruguay are OK with this. What is the mechanism through which they’re deciding to grant all of this power to these AI companies?
A lot of governments learn that they have to serve the Global North if they want to get more investment and more jobs and more opportunity into their country. And in the AI case, it ends up not being a good bargain. But a lot of them don’t know that upfront.
And so they think that if they can open up their lands, their water, their energy to these companies, that somehow they will get more investment and more high-quality, white-collar jobs in the future. Like, I was talking with politicians who said that they hoped that if they allowed a data center, then eventually, Microsoft would bring in an office with software engineering jobs near their data center.
And so that’s kind of the reason why they end up doing this. And Chile has a really interesting history in particular, in that they have dealt with centuries of extraction. Most recently, they’ve become a huge provider of lithium for the lithium boom. And so they sort of have developed this mentality over time that this is what they do. They open up their natural resources to these multinationals, and that somehow this will convert into economic growth, broad-based economic growth for people. But unfortunately, it doesn’t really.
Well, I want to push back on that a little bit because I think if I’m trying to be sympathetic to the people, the politicians, the communities that are accepting this stuff, I think there’s a case to be made that it is actually helping them, maybe not in terms of direct GDP or economic growth.
But the World Bank recently did a randomized control trial with students in Nigeria who were given access to GPT-4 for AI-assisted tutoring, and found that it boosted their test scores significantly, and that the gains were especially big among girls who were behind in their classes. So as I’m hearing you talk about the exploitation taking place, I’m thinking, well, maybe there is something that they’re getting in return. Maybe there is something worth it to them. Maybe this technology can, in some instances, help level the playing field between poorer countries in the Global South and places like America.
And maybe there’s a deal to be had where it’s like, OK, you want to extract our lithium? You want to build a data center in our country? Sure. But you have to give all of our students free access to ChatGPT Pro, or something like that. Is there any sort of fair exchange that you can imagine that would help these people?
So I think this question is kind of premised on the idea that we have to make these trade-offs in order to get that kind of gain. Like, we have to give you our lithium in order to have some kind of educational boost from ChatGPT. And that’s a premise that I just don’t agree with. I think that there are ways to develop AI that give you the gains without this kind of extraction.
So the reason why I call it Empire of AI in the book is in part to point out that this is not the only pathway to AI development. These companies have chosen a very particular pathway of AI development that is predicated on absolutely massive amounts of scale, massive amounts of resources, massive amounts of data.
Well, that’s how you get the models to be general and good, and to be able to work in all kinds of different languages. Is there another path that — you’re suggesting there’s another path. Like, what is the path other than through scale?
So we don’t necessarily know what it is yet, because it isn’t really being explored at all. But there are already signs that there can be other ways to get to these more general capabilities without that scale. DeepSeek is a really interesting example of this. I think there are also a lot of problems with DeepSeek. But DeepSeek demonstrated that, even in a resource-constrained environment, you can actually develop models that have more generality.
And so, I mean, this is what science is. You have to discover the frontiers of what we don’t know yet. And the industry has fallen into this very specific scaling paradigm that they know works, but it has so many externalities that it’s ultimately not actually achieving what OpenAI says its mission is, to benefit all of humanity. And so, if we constrained the problem and asked, how can we get more positives out of this technology without all of that harm, I think there would actually be more innovation, true innovation, that would come out of it, and it would be more beneficial.
Karen, one thing that is very clear in your book is that you are not a fan of the big general purpose AI models. You call them monstrosities built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources. Is there any way for people to engage ethically with these models in your view? Or is it all fruit from a poisoned tree?
I think the way that they’re being developed right now, me personally, I do think that it’s fruit from a poisoned tree.
Do you use ChatGPT at all?
Not really. No.
Have you ever?
Yes, I have.
Did you — I’m just curious, because I’m writing a book right now, and I’m finding a lot of uses for AI. And this is a very thoroughly researched book. Was it helpful? Were any AI tools used in the creation of this book?
So no generative AI tools, but I did use predictive AI tools. I used Google reverse image search to try to figure out the price of OpenAI’s furniture, because they had some really nice chairs. And I was trying to convey the level of upgrade that happened when they went from a nonprofit in one office to this new Microsoft-backed capped-profit entity in this other office. And when I ran the reverse image search, it came up: Brazilian designer chairs that were like $10,000 each.
Yeah. So I mean, I do use predictive AI, but I did not use generative AI for this book, other than to understand how the tool works and test its new features. But I never used it for research or for organizing my thoughts or anything like that. Because at the end of the day, I’m writing a book about OpenAI. I’m not going to willingly hand a bunch of data about what I’m thinking about and what I’m researching to OpenAI in the process.
And that’s where you and Kevin are different. So I want you guys to interact about this a little bit, because Karen, let me tell you, if Kevin can use generative AI to do something, he’s doing it. OK? Like, there’s going to be a lot of generative AI that’s going into the making of this book you’re writing, right?
Well, in the research phase.
Yeah.
Because I found that it’s not that good at composing.
Right.
But it is super, super useful for things like, give me a history of the term AGI and where it originated, and who were the first people to use it, and how it evolved over the years, and how every lab has defined it in all of their various publications. That kind of thing would have taken me weeks before, and now it takes minutes.
Right. So, Karen, make your case that Kevin should stop doing that.
So I’m not going to make that case. But what I’m going to say is this is the perfect use case for these tools because these companies are constantly testing their tools on AI topics. Like, that is the thing that they stress test their tools on. And so if there were any topic in the world that these chatbots would be particularly good at talking about, it would be AI and AGI.
And so Kevin, move forward. Fire away.
No, but so, here’s another thing that I wanted to ask you, Karen, because I think this is another place where we disagree.
Yeah.
You are very skeptical about the claims that the AI labs are making about AI safety or the concept of AGI. And I guess I’m trying to understand that argument.
My view on these folks is that they are sincere, that they are sincere when they worry about AI posing risks to humanity. I think that’s why they’re investing tons of money into AI safety, and trying to work on things like interpretability, figuring out how these language models work. Is your view that they are sincere but just wrong about AI possibly being an existential threat? Or that they don’t believe it at all, and that they’re just kind of using AI safety as a smokescreen, an excuse for raising money and continuing to build their models?
I think it totally depends on who you’re talking about. So in general, I think there are a lot of people that are incredibly sincere about believing in these problems. I don’t have any doubt about that. I talked with a lot of them for my book. And I talked to people who are like — their voice was quivering while they were telling me about being really, really scared about the demise of humanity. That’s a sincere belief and a sincere reaction.
I think there are other people who pretend that they believe in this as a smokescreen. But I think by and large, a lot of these people do truly believe it, their heart is in it, and they are trying to do good by the world.
My critique is that this particular worldview is just really narrow. It’s just really, really narrow, and a product of being in Silicon Valley, which is one of the wealthiest epicenters of one of the wealthiest countries in the world. Of course you are going to have the luxury to think about these really far-off problems that don’t have to do with things that are literally harming and affecting people all around the world today.
And it’s not that I don’t think we should devote any research to these problems. That’s not what I’m saying. But the sheer amount of resources that is going to prioritizing these problems over present-day problems is just not at all proportional to what the problem landscape actually looks like in reality.
Yeah. So when people like Sam Altman or Dario Amodei or Demis Hassabis say that we are a couple years away from something like AGI or even superintelligence, your view is that that just has no reflection on reality, or that we should cross that bridge when we come to it and pay attention to the stuff that we can actually observe in the world now?
So, I think it also depends on how they define AGI. Like when OpenAI says that they are two years away from potentially automating away most labor, I could believe that they’re on a path to systems that would appear to do so in two years, and then lead to a lot of company executives deciding to hire the AI instead of hiring workers.
If we’re talking about AGI under another definition, then, I mean, it would have to be case by case: how are they defining AGI, and what is their time scale? But do I think that OpenAI has high conviction about trying to create a labor-automating machine, and that they have the resources to start making a dent in labor opportunities for people? Yes, I do.
Well, maybe let’s have the how do you define AGI conversation. It’s come up a few times during this conversation. And I know there are a lot of folks who regularly remark that the definition of AGI seems really sort of amorphous and slippery to them.
I have to say, it doesn’t feel that amorphous to me. I work with an assistant. My assistant does customer service stuff, scheduling stuff, a little bit of sales. If there was a tool that I could use and pay a subscription to that did those things on my behalf, I think I would say, yeah, I think that feels like AGI.
So that’s kind of how I conceive of it in my mind. But I know there are so many folks out there who say, no, no, no, no, no. The definition is always changing and slippery, and this is a really big problem. So Karen, how do you feel about it?
I mean, what you were describing, like, yeah, if you want to define that as AGI, that’s totally fine. But I don’t think that’s necessarily how the companies are defining AGI. Right? They are not defining it well.
But when they need to raise capital, when they need to rally public support, when they need to get in front of Congress and try and ward off regulation, the things that they say are, one day AGI will solve climate change. One day it will cure cancer. I think that the AGI system that you’re describing is not exactly the AGI system that they are sketching out in that kind of broad, sweeping vision that they’re trying to use as justification to continue doing what they’re doing.
Right. There’s a lot of hand waving that goes on when somebody says that some future AI technology is going to cure cancer. It’s leaving out many, many steps.
Well, but —
Yeah.
In partial defense of the labs here, I think we have seen things like AlphaFold, which was Google DeepMind’s system that solved the protein folding problem, essentially. And that was not something that they thought was going to be the end of their progress toward scientific cures for disease. That was the beginning stages. And actually, if you talk to biomedical researchers, they say that was a huge deal, and really did make it possible to do all kinds of new drug discoveries.
And I guess that part feels a little separate to me than the AGI discussion. But it does feel like the quest for AGI, the sort of scaling up of these models, the attempt to make them more general, there have just been good things that fall out of that process, and also some externalities that you mentioned, Karen. But I’m just curious if you see any positive applications of the scaling hypothesis and the dominant paradigm.
I don’t think I have come across a positive application that I think justifies the amount of cost going into it. And to return to DeepMind and AlphaFold: that was not a general intelligence system. That was a task-specific system, which is what I advocate for. I think we need more task-specific AI systems, where we give them a well-scoped problem, we curate the data, we train the model, and then it does remarkable things. Like, I totally agree that AlphaFold was a remarkable achievement. But I don’t think that has much correlation with what AGI labs are now doing with the scaling paradigm. Those are like two perpendicular tracks to me.
Yeah, I mean, I think it’s clear that the hype is far ahead of the results right now. We have heard a lot more about AGI curing cancer than we’ve actually seen progress toward curing cancer in the moment of this recording. Now, some people believe that’s going to change very soon, but I can understand why if you read a lot of headlines and you don’t see cancer being cured yet, that you’d have some questions.
Yeah. And I think the other thing here is, these companies are continuing to say that they’re AGI labs, that they’re pursuing AGI. But they’ve dramatically shifted, and now they’re really just focused on building products and services that they can charge lots of money for. And all of the maneuvering that they’ve tried to do to make it seem like that is exactly the same path as what they’re calling AGI, like, come on. That’s probably not what’s happening here.
And ultimately, these companies are building these — I mean, in the last episode, you guys were talking about AI flattery and the debacle around that, and how they’re turning to maximizing for engagement, because this is the thing that they’ve realized gets them a lot of users, gets them more cash flow. And that is ultimately what they’re now building.
So I think what they’re saying they’re building and what they’re actually building are also starting to diverge, in this kind of new era, I guess, where they need to be able to justify a $40 billion raise.
Yeah. Well, let’s bring it home here by talking about one thing that I think all three of us agree on. You write that the most urgent question of our generation is, how do we govern artificial intelligence? I agree with you on that front, Karen. And so let me ask, how do we govern artificial intelligence?
Please help us.
Democratically.
Yeah. Yeah. Yeah. So what does a more democratic way of governing AI look like?
So, to me, you have to consider the supply chain of AI development. You have data, you have compute, you have models, you have applications. I think at every single stage of that supply chain, there should be input from people, not just the companies.
When companies decide that they’re going to curate a data set for training, there should be people who can opt in and opt out of that data set. And not just for their own data: maybe there are consortiums that are debating what kind of publicly accessible data should or should not go into these tools. There should be debates about content moderation of the data, because, as I write in the book, there were a lot of moments in OpenAI’s history where they kind of just debated internally, like, should we keep pornographic images in the data set or not? And then they just decided it on the fly. That, to me, is not democratic governance. We should be having open public discourse about those types of decisions.
When it comes to compute, there should be an ability for communities to even know that data centers are coming into their communities, and they should then be able to go to a city council meeting and actually talk with their city council, talk with the companies, about whether or not they want the data center, and have good, solid information about what the long-term trajectory of hosting a data center would actually look like. And when it comes to the labor, the contract workers that are working for AI companies, their working conditions should follow international human rights norms, because a lot of the conditions in which these workers are working today do not.
So that’s the way that I think about it: all of these different stages need to be democratic. And when OpenAI says, we’re going to develop democratic AI simply because we’re an American company, that’s not how it works. Everyone actually has to participate, have agency, have a say to shape and change what is and isn’t developed, and how.
Well, Karen, this has been a fascinating conversation. Really appreciate your time.
Thanks.
Thank you so much for having me. [MUSIC PLAYING]
When we come back, turn your brain off.
It’s time to talk about Italian brain rot.
Ooh, sounds fancy. [MUSIC PLAYING]
Kevin, if I were to start referring to you as Kevinnini Rossellini, what would that mean to you?
I would think it was some sort of mockery of my Italian heritage.
I would never. I would never. What about Tralalero tralala? You know him?
No, I think you’re having a stroke.
What about Bombardiro Crocodilo?
OK, now this is just getting ridiculous.
Ballerina cappuccina?
Nope.
All right, listen, if you or someone you love recognizes any of these terms, Kevin, you may be suffering from a case of Italian brain rot.
I’m almost afraid to ask. I have not been following this story, although I know you were very excited to tell me about it today. What is going on with Italian brain rot?
Do not be afraid of Italian brain rot, Kevin. If you have been on TikTok or Instagram or YouTube over the past many weeks, you may have encountered this unique form of AI-enabled insanity.
Now, typically, I know that brain rot refers to this kind of feeling of, I don’t know, cognitive decline related to excessive use of social media, or something like that. People on TikTok are always complaining about their brain rot. But what is Italian brain rot?
Well, if you want to catch up on this, I highly recommend a story in “The Times” by Alisha Haridasani Gupta. This stuff started to emerge in January, and it really is an AI phenomenon.
Recently, Kevin, we’ve seen advances in some of these text-to-video generators. So you might be able to, for example, create a short clip of a little coffee cup that is also a ballerina. Well, congratulations, you just invented ballerina cappuccina.
I mean, to me, this is the difference between this age of viral content and previous generations of viral content. Like, I spend a lot of time on TikTok, but I have never, literally never, seen anything about Italian brain rot. And it’s such a contrast to, like, the Ice Bucket Challenge, which everyone knew was happening because you could see it everywhere. But things have become so siloed and atomized that you could tell me literally anything was happening on TikTok, that millions of people were into it, that it was the trend sweeping the youth, and I would have no idea. So either that means I’m old or something has changed about social media.
Well, look, this is why you have to have your younger colleagues, like myself, come in and tell you what’s happening in middle school.
You are not younger than me.
Well, spiritually, I think there’s a case for it. So, listen, there’s no way to talk about Italian brain rot that improves on the experience of actually watching it. So let’s watch a couple clips of brain rot. And I believe we have one queued up.
I hope I get hazard pay for this.
- archived recording 1
Tung, tung, tung, tung, tung, sahur. Brr, brr, Patapim. Il mio cappello, piano disali. Cappu, Cappuccino Assassino, Ballerina Cappuccina, mimi mimi, Chimpanzini Bananini. Wa, wa, wa, troppi, troppa trippa. Glorbo Fruttodrillo.
So if you are not watching these, let me just describe what I just saw. This was sort of a compilation of these Italian brainrot memes, which were all kind of AI-generated weird characters. Like, one of them looked like a sort of hamster poking out from half of a coconut.
That’s right.
And they’re just saying these, like, Italian phrases. So this is Italian brainrot?
This is Italian brainrot. You’re probably grasping the Italian part, because they’re sort of being voiced in this, uh, over-the-top Italian accent. And all of these strange phrases that you’re hearing are the names of the characters. So I know you’re probably wondering, who is Trippi Troppi Troppa Trippa? And that’s a shrimp with a cat head.
[LAUGHS]:
So I love this one, because with a lot of meme explainers, there’s a lot of excavating to do: where did this come from, and what is this about? Here, it really is just what it says on the tin. It is an Italian accent over a series of images that make you feel like you’re going insane.
(LAUGHING) Yes. And was this made by an Italian?
No. In fact, “The Times” spoke to one of the main creators, the person behind Ballerina Cappuccina: Susanu Sava-Tudor, a 24-year-old from Romania, who told “The Times” that this is just a form of absurd humor that really has very little to do with Italy. This creator just invented the name Ballerina Cappuccina, and their videos have gotten more than 45 million views on TikTok and 3.8 million likes.
(LAUGHING) Oh, my god. Now, at the risk of explaining a joke and thereby killing it, like, is there any point to Italian brainrot? Is it making some sort of social commentary? Is it trying to say, like, Italians are big users of social media, and therefore getting brainrot?
Well, so I actually do have a theory about this. I think what makes this feel new is that, whatever this is, it actually does feel fresh. And we live in a time when everything Hollywood gives us feels like a recycled version of something else. We are on phase six of the Marvel Cinematic Universe. And in that world, where it’s like, oh, and here’s Ant-Man’s cousin, people are saying, F that, give me Ballerina Cappuccina!
[LAUGHS]: It does just feel like there is some organic hunger out there for just, like, really stupid shit, just really random stuff. Like, I was thinking about this recently. You know, the “Minecraft Movie” is a big hit, right? It’s like one of the biggest movies of the year.
And apparently there’s this moment in the movie, which I’ve not seen, where someone says the phrase “chicken jockey.” Jack Black does, I think. And at that moment, like, teens and other young people have decided that this is the moment in the movie to stand up and cause a ruckus.
They start throwing popcorn. Someone actually brought a live chicken to the theater and held it up. Like, this feels of a piece with chicken jockey from the “Minecraft Movie,” in the sense that it is just absurdist. Trying to explain it actually makes you dumber in some way, and so there’s a kind of appealing randomness to it.
Yeah, and by the way, I think that is actually part of being a young person: building a language that is inaccessible to people older than you. Right? That is how the identity formation process works. Older people have no idea who Trippi Troppi is. And that is something that you can talk about with your friends that belongs to you.
Wait, what are some of the other ones?
OK, well, so I’m glad you asked, because we haven’t actually watched enough of these videos yet. So Kevin, I would now like to direct your attention to one Salamino Pinguino.
- archived recording 1
Salamino Pinguino, mezzo salame. Mezzo pinguino, tutto problema. Non-chivo —
This is like a penguin covered in salami.
Like, wearing almost like a sort of headdress made out of salami.
- archived recording 1
È tutta peperoni piccante. Salamino Pinguino. La leggenda salumeria.
Now, let’s take a look at Glorbo.
- archived recording 1
Glorbo.
OK, this is a crocodile or alligator with a watermelon for a body.
Yeah. This is a still image with 578,000 likes.
- archived recording 1
Tutto alligatora.
Everybody loves Glorbo.
Is this even real Italian? Are we sure it’s real Italian?
I’m pretty sure it’s not real Italian.
[LAUGHS]:
Let’s stop that one there. Now, I know what you’re saying. You’re saying, Casey, these characters are just standing around. Like, that seems super boring. What if I were to tell you that other creators are now incorporating them into dramas, Kevin?
Oh, boy.
Let’s take a look at one of those. And this one stars Tralalero Tralala, who is a shark in sneakers.
And is that Ballerina Cappuccina I see?
That is Ballerina Cappuccina, and she’s with Tung, Tung, Tung, Sahur.
- archived recording 1
Tung, Tung, Tung, Sahur enjoying their —
So he leaves for the day. And oh, there comes Tralalero Tralala the shark. And now, they’re kissing in bed. And — oh, no! Ballerina Cappuccina’s pregnant!
(LAUGHING) Oh no! Ah!
- archived recording 1
La policia, no.
No! [LAUGHS]
Now Tung, Tung, Tung, Sahur is chasing after the shark.
Oh!
And that’s Bombardiro Crocodilo, and he sends in an airstrike.
[LAUGHS]:
So let’s just review. That was, what, 10 or 15 seconds? In that time, you see several of these characters. One of them gets into an affair and has a love child. Her partner finds out and then sends in an airstrike to attack the cheater. So they’re doing a lot in 15 seconds.
Yeah, wow. Um, that was not a Pixar film. That was really something. I feel like I’m on a very powerful psychedelic right now.
Well, you know, you mentioned earlier that in the old days, we would do things like the ice bucket challenge. Kevin, what if I told you that some of these Italian brainrot characters are actually doing the ice bucket challenge?
No!
Yeah, let’s watch that one.
- archived recording 1
My name is Chimpanzee Bananini, and I’ve been nominated for the ice —
This is a chimpanzee who is also a banana.
- archived recording 1
— bucket challenge. I nominate Bombombini Gusini, Trippi Troppi, and Boneca Ambalabu.
And he’s nominating the other characters to do the ice bucket challenge.
This is so dumb!
Yeah.
It’s very funny, though. I am, like, genuinely laughing at this. But it is like, I could not explain to you why this is funny if you paid me.
Well, here, listen. I have done a little bit of comedy in my life, and one thing that I learned in improv was that everyone goes nuts for an over-the-top Italian accent. It’s extremely funny. All I have to do is say “bigga bowl of spaghetti!” You’re already laughing. See? I didn’t even do anything.
[LAUGHS]:
And Italian brainrot functions in much the same way. But these creators are taking advantage of this AI moment. And, look, as we’ve talked about earlier on this show, these systems are being trained on other people’s art without their consent.
There are some people who feel like you can never make anything truly creative or truly artistic with AI. And yet, here you have this bona fide viral phenomenon that is people making extremely silly stuff using AI, and it is resonating with us.
And I think this has been one of the more counterintuitive lessons of AI slop. A year or so ago, we were looking at images of Shrimp Jesus all over Facebook, and we were saying, that seems silly; I’m sure the company is going to get rid of this.
No, no, no, my friend. They’re going to lean into it, because there are riches that lie down this path. And Italian brainrot is the first example, I think, of that happening.
God. I mean, I have a couple of reactions. One of them is, yes, I absolutely think that AI has utility and that there are good things that have come out of it, but seeing Italian brainrot makes me want to nuke the data center. (LAUGHING) So I’m like, shut it all down! We’ve gone too far!
But seriously, I do think there is something here, not just in the sort of absurdist humor of this thing, but I do think there are going to be new kinds of entertainment that are birthed out of these tools. Because, you know, if you wanted to make something like a ballerina with a cappuccino for a head 10 years ago, you needed to be an animator to do that, or at least have some facility with animation software. Now, you just go into an AI tool, and you type, give me a Ballerina Cappuccina, and out comes this pretty perfect animation.
Yeah, and that has always been the case for this sort of tool, by the way: it takes people who do not have those kinds of artistic skills and lets them express themselves creatively. If they can think it, they can visualize it and make it available to other people. Here is my case for why this is actually a good thing, Kevin.
You know, I was thinking this morning about a few years back, during the height of the crypto boom, when people started talking about how crypto could be used to fund these alternative worlds of entertainment. Right? Like, the Bored Apes Yacht Club was going to become this mega franchise, but what made it cool was that anybody could buy in.
Anyone could get a slurp juice.
Anyone could get a slurp juice, put it on a Mutant Ape, transform your Mutant Ape, et cetera. And people didn’t really get into this, because I think nobody wanted to be involved in what was essentially like a homeowners association for creating entertainment.
But I look at Italian brainrot, and I see something similar happening, where it’s like, as far as I can tell, no one has a trademark on Ballerina Cappuccina or Chimpanzee Bananini. You can just make your own version of it and put it up there, and nobody is going to issue a copyright strike.
You can have these characters do whatever you want. So it feels like there is actually a freedom in making this that people are really responding to. And so maybe we do actually get the next version of, like, crowdsourced entertainment, and it all comes out of these bizarre text-to-video makers.
I got to say, I believe you when you say that is a possible outcome, but my brain just goes immediately to some office at, like, Disney headquarters, where they’re, like, watching these Italian brainrot memes and furiously trying to license the IP to make, like, a series of seven movies about Chimpanzee Bananini.
Yeah.
And I do think that there’s a possibility that this becomes just like any other entertainment franchise.
It could go that way. But, you know, maybe that robs it of the fun that makes it go viral today to begin with.
And they’re making movies out of Minecraft. They can make a movie out of anything.
They’re not really running out of things to make movies out of, as far as I can tell. So do I lean optimistic about this? Yes. At the same time, do I think that if China had just come up with this idea independently, it would have been a great way of bringing down American civilization? Also yes.
If they were like, what if we just did weird characters in Italian accents? Could that distract all of America’s middle schoolers for a year? Probably worth doing. How hard could it be?
[LAUGHS]: This is all a CCP plot to undermine American sovereignty.
That’s kind of always been the thing with TikTok. It’s like, I don’t think it’s a Chinese plot to destroy America, but it is working.
Well, if Ballerina Cappuccina starts singing the praises of Xi Jinping, we’ll know that something grave has gone wrong.
Yeah, we’ll keep our eyes on that one. [MUSIC PLAYING]
“Hard Fork” is produced by Rachel Cohn and Whitney Jones. We’re edited this week by Matt Collette. We’re fact-checked by Nina Alvarado. Today’s show is engineered by Chris Wood. Original music by Elisheba Ittoop, Diane Wong, and Dan Powell.
Our executive producer is Jen Poyant. Video production by Sawyer Roquet, Pat Gunther, and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. Or should I say hardforkini@nytimes.comonini.
Don’t actually send a message to that email address. It’ll bounce back.
(LAUGHING) Yeah, that address is not active.
[MUSIC PLAYING]