
Hayao Miyazaki's AI Nightmare
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.
This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. I've been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words.
Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki's animated films at Studio Ghibli. (Think Kiki's Delivery Service, My Neighbor Totoro, and Spirited Away.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute).
Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off of another company's intellectual property, pointed to a documentary clip of Miyazaki calling AI an 'insult to life itself,' and mused about the technology's threats to human creativity. All of these conversations are valid, yet they didn't feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn't make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take.
This interview has been edited and condensed.
Damon Beres: Let's start with the very basic question. Are the Studio Ghibli images evil?
Ian Bogost: I don't think they're evil. They might be stupid. You could construe them as ugly, although they're also beautiful. You could construe them as immoral or unseemly.
If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it's Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that's not just permissible, but good and even righteous in contemporary culture.
Damon: So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that's a problem.
Ian: It's no different in nature to have a machine copy an artist's style than to have a person copy it. But there is a difference in scale: With AI, you can make the images fast, and you can make lots of them. That's changed people's feelings about the matter.
I read an article about copyright and style—you can't copyright a style, it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they're posting their own fan art and get a takedown request, then they're like, Screw you, I'm just trying to spread the gospel of your creativity. But those same people might support a copyright claim against a generative-AI tool, even though it's doing the same thing.
Damon: As I've experimented with these tools, I've realized that the purpose isn't to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I'm putting a dumb thought into a program and seeing what comes out. There's a low-effort delight and playfulness.
But some people have made the point that it's insulting because it goes against Studio Ghibli co-founder Hayao Miyazaki's beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It's distasteful because it's the White House tweeting a cruel meme about a person's life.
Ian: You brought up something important, this embrace of the intentional fallacy—the idea that a work's meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It's perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.
Damon: Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren't that many of them, and they have a very high-touch execution. Even if we're not making movies, the aesthetic being everywhere, and being cheap, cuts against that.
Ian: That's a credible theory. But you're still in intentional-fallacy territory, right? Studio Ghibli has made a deliberate effort to tend and curate their output, and they don't just make a movie every year, and I want to respect that as someone influenced by that work. And that's weird to me.
Damon: What we haven't talked about is the Ghibli image as a kind of meme. They're not just spreading because they're Ghibli images: They're spreading because they're AI-generated Ghibli images.
Ian: This is a distinctive style of meme based less on the composition of the image itself or the text you put on it than on the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen; you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.
Damon: You can't really imagine image generators in a paradigm where there's no social media.
Ian: What would you do with them, show them to your mom? These are things that are made to be posted, and that's where their life ends.
Damon: Maybe that's what people don't like, too—that it's nakedly transactional.
Ian: Exactly—you're engagement baiting. These days, that accusation is equivalent to selling out.
Damon: It's this generation's poser.
Ian: Engagement baiter.
Damon: Leave me with a concluding thought about how people should react to these images.
Ian: They ought to be more curious. This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that's a real shame.
Related Articles


Business Insider
BBAI vs. CRWV vs. APP: Which Growth Stock Is the Best Pick, According to Wall Street Analysts?
Macro uncertainties, geopolitical tensions, and news on the tariff front have kept the stock market volatile. Despite ongoing uncertainties, analysts remain optimistic about several growth stocks and their potential to generate attractive returns over the long term. Using TipRanks' Stock Comparison Tool, we placed BigBear.ai Holdings (BBAI), CoreWeave (CRWV), and AppLovin (APP) against each other to find the best growth stock, according to Wall Street analysts.

BigBear.ai Holdings (NYSE:BBAI) Stock
BigBear.ai Holdings stock has risen more than 31% so far in 2025 and 292% over the past year, as investors are optimistic about the prospects of the data analytics company. BBAI offers artificial intelligence (AI)-powered decision intelligence solutions, mainly focused on national security, defense, and critical infrastructure. The company ended Q1 2025 with a backlog of $385 million, reflecting 30% year-over-year growth. However, there have been concerns about the company's low revenue growth rate and high levels of debt. Looking ahead, the company is pursuing further growth through international expansion and strategic partnerships, while continuing to secure attractive government business.

What Is the Price Target for BBAI Stock?
Last month, Northland Securities analyst Michael Latimore reaffirmed a Hold rating on BBAI stock but lowered his price target to $3.50 from $4 after the company missed Q1 estimates due to further delays in government contracts. On the positive side, the 4-star analyst noted the solid growth in backlog and management's statement that their strategy is 'beginning to resonate.' On TipRanks, BigBear.ai Holdings stock is assigned a Moderate Buy consensus rating, backed by two Buys and two Holds. The average BBAI stock price target of $4.83 indicates a possible downside of 17.3% from current levels.

CoreWeave (NASDAQ:CRWV) Stock
CoreWeave, a cloud provider specializing in AI infrastructure, is seeing robust adoption of its products. The company, which provides customers access to Nvidia's (NVDA) GPUs (graphics processing units), went public in March. CRWV stock has risen about 300% to $159.99, compared to its IPO (initial public offering) price of $40. Remarkably, CoreWeave delivered a 420% jump in its Q1 2025 revenue, to $981.6 million. Moreover, the company ended the first quarter of 2025 with a robust backlog of $25.9 billion. Meanwhile, CoreWeave has entered into lucrative deals, including an expanded agreement of up to $4 billion with ChatGPT maker OpenAI and a collaboration to power the recently announced cloud deal between Alphabet's Google (GOOGL) and OpenAI.

Is CRWV a Good Stock to Buy?
Recently, Bank of America analyst Bradley Sills downgraded CoreWeave stock to Hold from Buy, citing valuation concerns following the strong rally after the company's Q1 results. The 4-star analyst also expects $21 billion of negative free cash flow through 2027, due to elevated capital expenditure ($46.1 billion through 2027). However, Sills raised his price target for CRWV stock to $185 from $76, noting several positives, including the OpenAI deal and strong revenue momentum. Overall, Wall Street has a Moderate Buy consensus rating on CoreWeave stock based on six Buys, 11 Holds, and one Sell recommendation. At $78.53, the average CRWV stock price target indicates a substantial downside risk of about 51%.

AppLovin (NASDAQ:APP) Stock
Adtech company AppLovin has witnessed a 301% jump in its stock price over the past year.
The company provides end-to-end software and AI solutions for businesses to reach, monetize, and grow their global audiences. Notably, AppLovin's strong growth rates have impressed investors. In Q1 2025, AppLovin's revenue grew 40% and its earnings per share (EPS) surged 149%. Investors have also welcomed the company's decision to sell its mobile gaming business to Tripledot Studios. The move is expected to enable AppLovin to focus more on its AI-powered ad business. However, APP stock has declined more than 12% over the past month due to disappointment over its non-inclusion in the S&P 500 Index (SPX) and accusations by short-seller Culper Research. Nonetheless, most analysts remain bullish on AppLovin due to its strong fundamentals and demand for its AXON ad platform.

Is APP a Good Stock to Buy?
Recently, Piper Sandler analyst James Callahan increased his price target for AppLovin stock to $470 from $455 and reaffirmed a Buy rating. While Piper Sandler's checks suggest some weakness in AppLovin's supply-side trends, the firm remains a buyer of APP stock, with the tech company growing well above its digital-ad peers and expanding into new verticals. With 16 Buys and three Holds, AppLovin stock scores a Strong Buy consensus rating. The average APP stock price target of $504.18 indicates 51% upside potential from current levels.

Conclusion
Wall Street is sidelined on BigBear.ai stock, cautiously optimistic on CoreWeave, and highly bullish on AppLovin stock. Analysts see higher upside potential in APP stock than in the other two growth stocks. Wall Street's bullish stance on AppLovin is backed by solid fundamentals and strong momentum in its AI-powered ad business. According to TipRanks' Smart Score System, APP stock scores a 'Perfect 10,' indicating that it has the ability to outperform the broader market over the long run.


Forbes
Can Agentic AI Bring The Pope Or The Queen Back To Life — And Rewrite History?
Elon Musk recently sparked global debate by claiming AI could soon be powerful enough to rewrite history. He stated on X (formerly Twitter) that his AI platform, Grok, could 'rewrite the entire corpus of human knowledge, adding missing information and deleting errors.' This bold claim arrives alongside a recent groundbreaking announcement from Google: the launch of the Google Veo3 AI Video Generator, a state-of-the-art AI video-generation model capable of producing cinematic-quality videos from text and images. Part of the Google Gemini ecosystem, Veo3 generates lifelike videos complete with synchronized audio, dynamic camera movements, and coherent multi-scene narratives. Its intuitive editing tools, combined with accessibility through platforms like Google Gemini, Flow, Vids, and Vertex AI, open new frontiers for filmmakers, marketers, educators, and game designers alike.

At the same time, industry leaders — including OpenAI, Anthropic (Claude), Microsoft Copilot, and Mistral — are racing to build more sophisticated agentic AI systems. Unlike traditional reactive AI tools, these agents are designed to reason, plan, and orchestrate autonomous actions based on goals, feedback, and long-term context. This evolution marks a shift toward AI systems that function much like a skilled executive assistant — and beyond.

The Promise: Immortalizing Legacy Through Agentic AI
Together, these advances raise a fascinating question: What if agentic AI could bring historical figures like the Pope or the Queen back to life digitally? Could it even reshape our understanding of history itself?

Imagine an AI trained on decades — or even a century — of video footage, writings, audio recordings, and public appearances by iconic figures such as Pope Francis or Queen Elizabeth II. Using agentic AI, we could create realistic, interactive digital avatars capable of offering insights, delivering messages, or simulating how these individuals might respond to today's complex issues based on their documented philosophies and behaviors.

This application could benefit millions. For example, Catholic followers might seek guidance and blessings from a digital Pope, educators could build immersive historical simulations, and advisors to the British royal family could analyze past decision-making styles. After all, as the saying goes, 'history repeats itself,' and access to nuanced, context-rich perspectives from the past could illuminate our present.

The Risk: The Dangerous Flip Side — Rewriting Truth Itself
However, the same technologies that can immortalize could also distort and manipulate reality. If agentic AI can reconstruct the past, what prevents it — or malicious actors — from rewriting it? Autonomous agents that control which stories are amplified or suppressed online pose a serious threat. We risk a future where deepfakes, synthetic media, and AI-generated propaganda blur the line between fact and fiction.

Already, misinformation campaigns and fake news challenge our ability to discern truth. Agentic AI could exponentially magnify these problems, making it harder than ever to distinguish between genuine history and fabricated narratives. Imagine a world where search engines no longer provide objective facts, but the version of history shaped by governments, corporations, or AI systems themselves. This could lead to widespread confusion, social polarization, and a fundamental erosion of trust in information.
Ethics, Regulation, and Responsible Innovation
The advent of agentic AI demands not only excitement but also ethical foresight and regulatory vigilance. Programming AI agents to operate autonomously requires walking a fine line between innovation and manipulation. Transparency in training data, explainability in AI decisions, and strict regulation of how agents interact are essential safeguards.

The critical question is not just 'Can we?' but 'Should we?' Policymakers, developers, and industry leaders must collaborate to establish global standards and oversight mechanisms that ensure AI technologies serve the public good. Just as financial markets and pharmaceutical drugs are regulated to protect society, so too must the AI agents shaping our future be subject to robust guardrails. As the old adage goes: 'Technology is neither good nor bad. It's how we use it that makes all the difference.'

Navigating the Future of Agentic AI and Historical Data
The convergence of generative video models like Google Veo3, visionary leaders like Elon Musk, and the rapid rise of agentic AI paints a complex and compelling picture. Yes, we may soon see lifelike digital recreations of the Pope or the Queen delivering messages, advising future generations, and influencing public discourse. But whether these advancements become tools of enlightenment or distortion depends entirely on how we govern, regulate, and ethically deploy these technologies today. The future of agentic AI — especially when it touches our history and culture — must be navigated with care, responsibility, and a commitment to truth.
Yahoo
AI is learning to lie, scheme, and threaten its creators
The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation, Claude 4, lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models, AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'
For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from the evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

'No rules'
Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said. All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.
"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around.". Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability. tu/arp/md