Hayao Miyazaki's AI Nightmare
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.
This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. I've been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words.
Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki's animated films at Studio Ghibli. (Think Kiki's Delivery Service, My Neighbor Totoro, and Spirited Away.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute).
Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off of another company's intellectual property, pointed to a documentary clip of Miyazaki calling AI an 'insult to life itself,' and mused about the technology's threats to human creativity. All of these conversations are valid, yet they didn't feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn't make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take.
This interview has been edited and condensed.
Damon Beres: Let's start with the very basic question. Are the Studio Ghibli images evil?
Ian Bogost: I don't think they're evil. They might be stupid. You could construe them as ugly, although they're also beautiful. You could construe them as immoral or unseemly.
If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it's Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that's not just permissible, but good and even righteous in contemporary culture.
Damon: So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that's a problem.
Ian: It's no different in nature for a machine to copy an artist's style than for a person to do it. But there is a difference in scale: With AI, you can make the copies fast, and you can make lots of them. That's changed people's feelings about the matter.
I read an article about copyright and style—you can't copyright a style, it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they're posting their own fan art and get a takedown request, then they're like, Screw you, I'm just trying to spread the gospel of your creativity. But those same people might support a copyright claim against a generative-AI tool, even though it's doing the same thing.
Damon: As I've experimented with these tools, I've realized that the purpose isn't to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I'm putting a dumb thought into a program and seeing what comes out. There's a low-effort delight and playfulness.
But some people have made this point that it's insulting because it's violating Studio Ghibli co-founder Hayao Miyazaki's beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It's distasteful because it's the White House tweeting a cruel meme about a person's life.
Ian: You brought up something important, this embrace of the intentional fallacy—the idea that a work's meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It's perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.
Damon: Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren't that many of them, and they have a very high-touch execution. Even if we're not making movies, the aesthetic being everywhere and the aesthetic being cheap cuts against that.
Ian: That's a credible theory. But you're still in intentional-fallacy territory, right? Studio Ghibli has made a deliberate effort to tend and curate their output, and they don't just make a movie every year, and I want to respect that as someone influenced by that work. And that's weird to me.
Damon: What we haven't talked about is the Ghibli image as a kind of meme. They're not just spreading because they're Ghibli images: They're spreading because they're AI-generated Ghibli images.
Ian: This is a distinctive style of meme, based less on the composition of the image itself or the text you put on it than on the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen—you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.
Damon: You can't really imagine image generators in a paradigm where there's no social media.
Ian: What would you do with them, show them to your mom? These are things that are made to be posted, and that's where their life ends.
Damon: Maybe that's what people don't like, too—that it's nakedly transactional.
Ian: Exactly—you're engagement baiting. These days, that accusation is equivalent to selling out.
Damon: It's this generation's poser.
Ian: Engagement baiter.
Damon: Leave me with a concluding thought about how people should react to these images.
Ian: They ought to be more curious. This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that's a real shame.
Article originally published at The Atlantic