Hollywood's pivot to AI video has a prompting problem

The Verge · 3 days ago
It has become almost impossible to browse the internet without having an AI-generated video thrust upon you. Open basically any social media platform, and it won't be long until an uncanny-looking clip of a fake natural disaster or animals doing impossible things slides across your screen. Most of the videos look absolutely terrible. But they're almost always accompanied by hundreds, if not thousands, of likes and comments from people insisting that AI-generated content is a new art form that's going to change the world.
That has been especially true of AI clips that are meant to appear realistic. No matter how strange or aesthetically inconsistent the footage may be, there is usually someone proclaiming that it's something the entertainment industry should be afraid of. The idea that AI-generated video is both the future of filmmaking and an existential threat to Hollywood has caught on like wildfire among boosters for the relatively new technology.
The thought of major studios embracing this technology as-is seems dubious when you consider that, oftentimes, AI models' output simply isn't the kind of material that could be fashioned into a quality movie or series. That's an impression filmmaker Bryn Mooser wants to change with Asteria, a new production house he launched last year, as well as a forthcoming AI-generated feature film from Natasha Lyonne (also Mooser's partner and an advisor at Late Night Labs, a studio focused on generative AI that Mooser's film and TV company XTR acquired last year).
Asteria's big selling point is that, unlike most other AI outfits, the generative model it built with research company Moonvalley is 'ethical,' meaning it has only been trained on properly licensed material. Especially in the wake of Disney and Universal suing Midjourney for copyright infringement, the concept of ethical generative AI may become an important part of how AI is more widely adopted throughout the entertainment industry. However, during a recent chat, Mooser stresses to me that the company's clear understanding of what generative AI is and what it isn't helps set Asteria apart from other players in the AI space.
'As we started to think about building Asteria, it was obvious to us as filmmakers that there were big problems with the way that AI was being presented to Hollywood,' Mooser says. 'It was obvious that the tools weren't being built by anybody who'd ever made a film before. The text-to-video form factor, where you say 'make me a new Star Wars movie' and out it comes, is a thing that Silicon Valley thought people wanted and actually believed was possible.'
In Mooser's view, part of the reason some enthusiasts have been quick to call generative video models a threat to traditional film workflows boils down to people assuming that footage created from prompts can replicate the real thing as effectively as what we've seen with imitative, AI-generated music. It has been easy for people to replicate singers' voices with generative AI and produce passable songs. But Mooser thinks that, in its rush to normalize gen AI, the tech industry conflated audio and visual output in a way that's at odds with what actually makes for good films.
'You can't go and say to Christopher Nolan, 'Use this tool and text your way to The Odyssey,'' Mooser says. 'As people in Hollywood got access to these tools, there were a couple things that were really clear — one being that the form factor can't work because the amount of control that a filmmaker needs comes down to the pixel level in a lot of cases.'
To give its filmmaking partners more of that granular control, Asteria uses its core generative model, Marey, to create new, project-specific models trained on original visual material. This would, for example, allow an artist to build a model that could generate a variety of assets in their distinct style, and then use it to populate a world full of different characters and objects that adhere to a unique aesthetic. That was the workflow Asteria used in its production of musician Cuco's animated short 'A Love Letter to LA.' By training Asteria's model on 60 original illustrations drawn by artist Paul Flores, the studio could generate new 2D assets and convert them into 3D models used to build the video's fictional town. The short is impressive, but its heavy stylization speaks to the way projects with generative AI at their core often have to work within the technology's visual limitations. It doesn't feel like this workflow offers control down to the pixel level just yet.
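Asteria hasn't published Marey's internals or API, so as a rough sketch of the general pattern described here (freeze a large pretrained base model and train a small, project-specific adapter on a few dozen style references), here is a minimal, hypothetical PyTorch example. Every name in it is invented for illustration, and random tensors stand in for the 60 licensed illustrations.

```python
# Minimal sketch of adapter-style fine-tuning, assuming a frozen base model.
# Nothing here reflects Asteria's actual stack; it only illustrates the
# "small trainable delta on top of a frozen model" idea.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """LoRA-style adapter: a cheap, trainable delta on a frozen layer."""
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

# Frozen stand-in for a pretrained base model (e.g., an image feature encoder).
base = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
for p in base.parameters():
    p.requires_grad = False

adapter = LowRankAdapter(dim=256)
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-3)

# Random vectors standing in for features of the 60 reference illustrations.
style_refs = torch.randn(60, 256)

for step in range(200):
    feats = base(style_refs)
    adapted = feats + adapter(feats)  # base output + learned style delta
    # Toy objective: pull adapted features toward the references, standing in
    # for whatever reconstruction/denoising loss a real pipeline would use.
    loss = nn.functional.mse_loss(adapted, style_refs)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point the sketch captures is why this approach is cheaper than training from scratch: only the tiny adapter is updated, so a small set of licensed reference images can steer a large frozen model toward one project's aesthetic.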
Mooser says that, depending on the financial arrangement between Asteria and its clients, filmmakers can retain partial ownership of the models after they're completed. In addition to the original licensing fees Asteria pays the creators of the material its core model is trained on, the studio is 'exploring' the possibility of a revenue sharing system, too. But for now, Mooser is more focused on winning artists over with the promise of lower initial development and production costs.
'If you're doing a Pixar animated film, you might be coming on as a director or a writer, but it's not often that you'll have any ownership of what you're making, residuals, or cut of what the studio makes when they sell a lunchbox,' Mooser tells me. 'But if you can use this technology to bring the cost down and make it independently financeable, then you have a world where you can have a new financing model that makes real ownership possible.'
Asteria plans to test many of Mooser's beliefs in generative AI's transformative potential with Uncanny Valley, a feature film to be co-written and directed by Lyonne. The live-action film centers on a teenage girl whose shaky perception of reality causes her to start seeing the world as being more video game-like. Many of Uncanny Valley's fantastical, Matrix-like visual elements will be created with Asteria's in-house models. That detail in particular makes Uncanny Valley sound like a project designed to present the hallucinatory inconsistencies that generative AI has become known for as clever aesthetic features rather than bugs. But Mooser tells me that he hopes 'nobody ever thinks about the AI part of it at all' because 'everything is going to have the director's human touch on it.'
'It's not like you're just texting, 'then they go into a video game,' and watch what happens, because nobody wants to see that,' Mooser says. 'That was very clear as we were thinking about this. I don't think anybody wants to just see what computers dream up.'
Like many generative AI advocates, Mooser sees the technology as a 'democratizing' tool that can make the creation of art more accessible. He also stresses that, under the right circumstances, generative AI could make it easier to produce a movie for around $10–20 million rather than $150 million. Still, securing even that reduced amount of capital is a challenge for most younger, up-and-coming filmmakers.
One of Asteria's big selling points, which Mooser repeatedly mentions to me, is generative AI's potential to produce finished works faster and with smaller teams. He frames that aspect of an AI production workflow as a positive that would allow writers and directors to work more closely with key collaborators like art and VFX supervisors without needing to spend so much time going back and forth on revisions — something that tends to be more likely when a project has a lot of people working on it. But, by definition, smaller teams translate to fewer jobs, which raises the issue of AI's potential to put people out of work. When I bring this up with Mooser, he points to the recent closure of VFX house Technicolor Group as an example of the entertainment industry's ongoing upheaval, which began leaving workers unemployed before the generative AI hype reached its current fever pitch.
Mooser is careful not to downplay these concerns about generative AI, which were a big part of what plunged Hollywood into a double strike back in 2023. But he is resolute in his belief that many of the industry's workers will be able to pivot laterally into new careers built around generative AI if they are open to embracing the technology.
'There are filmmakers and VFX artists who are adaptable and want to lean into this moment the same way people were able to switch from editing on film to editing on Avid,' Mooser says. 'People who are real technicians — art directors, cinematographers, writers, directors, and actors — have an opportunity with this technology. What's really important is that we as an industry know what's good about this and what's bad about this, what is helpful for us in trying to tell our stories, and what is actually going to be dangerous.'
What seems genuinely dangerous about Hollywood's interest in generative AI isn't the 'death' of the larger studio system, but rather the technology's potential to make it easier for studios to work with fewer actual people. That is literally one of Asteria's big selling points, and if its workflows became the industry norm, it is hard to imagine them scaling in a way that could accommodate today's entertainment workforce transitioning into new careers. As for what's good about it, Mooser knows the right talking points. Now he has to show that his tech — and all the changes it entails — can work.

Related Articles

Copyrighted Books Are Fair Use For AI Training. Here's What To Know.
Forbes · 34 minutes ago

The sudden presence of generative AI systems in our daily lives has prompted many to question the legality of how AI systems are created and used. One question relevant to my practice: Does the ingestion of copyrighted works such as books, articles, photographs, and art to train an AI system render the system's creators liable for copyright infringement, or is that ingestion defensible as a 'fair use'? Two recent court rulings answer this novel question, and the answer in both is yes: the use of copyrighted works for AI training is a fair use – at least under the specific facts of those cases and the evidence presented by the parties. But because the judges in both cases were somewhat expansive in their dicta about how their decisions might have been different, they provide a helpful roadmap as to how other lawsuits might be decided, and how a future AI system might be designed so as not to infringe copyright. The rulings in the Anthropic and Meta cases warrant a closer look.

More than 30 lawsuits have been filed in the past year or two, in all parts of the nation, by authors, news publishers, artists, photographers, musicians, record companies, and other creators against various AI systems, asserting that using their respective copyrighted works for AI training purposes violates their copyrights. The systems' owners invariably assert fair use as a defense.

The Anthropic Case

The first decision, issued in June, involved a lawsuit by three book authors who alleged that Anthropic PBC infringed their copyrights by copying several of their books (among millions of others) to train its text-based generative AI system, Claude. Anthropic's defense was fair use. Judge Alsup, sitting in the Northern District of California, held that the use of the books for training purposes was a fair use, and that Anthropic's conversion of print books it had purchased into digital copies was also a fair use. However, Anthropic's use of pirated digital copies to create a central library of 'all the books in the world' for uses beyond training Claude was not a fair use. Whether Anthropic's copying of its central library copies for purposes other than AI training was fair use (there was apparently some evidence of such copying, though on a poorly developed record) was left for another day.

It appears that Anthropic decided early in its design of Claude that books were the most valuable training materials for a system meant to 'think' and write like a human. Books provide patterns of speech, prose, and proper grammar, among other things. Anthropic chose to download millions of free digital copies of books from pirate sites. It also purchased millions of print copies from booksellers, converted them to digital copies, and threw the print copies away, resulting in a massive central library of 'all the books in the world' that Anthropic planned to keep 'forever.' None of this activity was done with the authors' permission. Significantly, Claude was designed so that it would not reproduce any of the plaintiffs' books as output; the plaintiffs made no assertion that it did, nor was there any evidence of it.
The assertions of copyright infringement were therefore limited to Claude's ingestion of the books for training, to build the central library, and for the unidentified non-training purposes. Users of Claude ask it questions and it returns text-based answers. Many use it for free; certain corporate and other users pay, generating over one billion dollars annually in revenue for Anthropic.

The Anthropic Ruling

To summarize the legal analysis, Judge Alsup evaluated each 'use' of the books separately, as required under the Supreme Court's 2023 Warhol v. Goldsmith fair use decision. Turning first to the use of the books as training data, Alsup found that the use of the books to train Claude was a 'quintessentially' transformative use that did not supplant the market for the plaintiffs' books, and as such qualified as fair use. He further found that the conversion of the purchased print books to digital files, where the print copies were thrown away, was also a transformative use, akin to the Supreme Court's 1984 Betamax decision, which held that home recording of free TV programming for time-shifting purposes was a fair use. Here, Judge Alsup reasoned, Anthropic lawfully purchased the books and was merely format-shifting for space and search-capability purposes, and, since the original print copy was discarded, only one copy remained (unlike the now-defunct ReDigi platform addressed in a 2018 decision). By contrast, the downloading of over seven million pirated copies from pirate sites, which was illegal at the outset, for central library uses other than training could not be held a fair use as a matter of law, because the central library use was unjustified and the pirated copies could supplant the market for the originals.

Anthropic Is Liable For Unfair Uses – The Cost of Doing Business?

The case will continue on the issue of damages for the pirated copies of the plaintiffs' books used for central library purposes and not for training. The court noted that Anthropic's later purchase of copies of the plaintiffs' books to replace the pirated ones will not absolve it of liability, but might affect the amount of statutory damages it has to pay. The statutory damages range is $750 per copy at a minimum and up to $150,000 per copy at a maximum. One is tempted to wonder about all those other millions of copyright owners beyond the three plaintiffs – might Anthropic have to pay statutory damages for seven million copies if the pending class action is certified? Given how lucrative Claude is, could that be just a cost of doing AI business?
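To make the scale of that question concrete, here is a quick back-of-envelope calculation in Python. It simply multiplies the statutory range cited above by the seven million pirated copies at issue; it is an illustration of the stakes, not a damages forecast, since courts set the actual figure anywhere within that range and the class has not yet been certified.

```python
# Hypothetical arithmetic only: applies the statutory damages range cited
# above ($750 minimum, $150,000 maximum) to the ~7 million pirated books.
# Actual awards are set by the court within this range.
PER_COPY_MIN = 750
PER_COPY_MAX = 150_000
copies = 7_000_000

print(f"floor:   ${PER_COPY_MIN * copies:,}")   # floor:   $5,250,000,000
print(f"ceiling: ${PER_COPY_MAX * copies:,}")   # ceiling: $1,050,000,000,000
```

Even at the statutory floor, the figure runs to $5.25 billion, which is why the 'cost of doing business' question is not rhetorical.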
The Meta Case

The second decision, issued two days after the Anthropic decision, on June 25, involves thirteen book authors, most of them famous non-fiction writers, who sued Meta, the creator of a generative AI model called Llama, for using the plaintiffs' books as training data. Llama, like Claude, is free to download but generates billions of dollars for Meta. Like Anthropic, Meta initially looked into licensing rights from book publishers, but eventually abandoned those efforts and instead downloaded the books it wanted from pirate sites called 'shadow libraries,' which were not authorized by the copyright owners to store their works. Meta's decision to use shadow libraries to source books was approved by CEO Mark Zuckerberg.

Also like Claude, Llama was designed not to produce output reproducing its source material in whole or in substantial part, the record indicating that Llama could not be prompted to reproduce more than 50 words from the plaintiffs' books. Judge Chhabria, also in the Northern District of California, held that Meta's use of the plaintiffs' works to train Llama was a fair use, but he did so very reluctantly, chiding the plaintiffs' lawyers for making the 'wrong' arguments and failing to develop an adequate record. Chhabria's decision is riddled with his perceptions of the dangers of AI systems potentially flooding the market with substitutes for human authorship and destroying incentives to create.

The Meta Ruling

Based on the parties' arguments and the record before him, Judge Chhabria, like Judge Alsup, found that Meta's use of the books as training data for Llama was 'highly transformative,' noting that the purpose of the use of the books – creating an AI system – was very different from the plaintiffs' purpose, which was education and entertainment. Rejecting the plaintiffs' argument that Llama could be used to imitate the style of their writing, Judge Chhabria noted that 'style is not copyrightable.' The fact that Meta sourced the books from shadow libraries rather than authorized copies didn't make a difference; Judge Chhabria (in my opinion rightly) reasoned that making fair use depend on whether the source copy was authorized begs the question of whether the secondary copying was lawful. Although the plaintiffs tried to make the 'central library for purposes other than training' argument that succeeded in the Anthropic case, Judge Chhabria concluded that the evidence simply didn't support that copies were used for purposes other than training, and noted that even if some copies were not used for training, 'fair use doesn't require that the secondary user make the lowest number of copies possible.' Since Llama couldn't generate exact or substantially similar versions of the plaintiffs' books, he found there was no substitution harm, noting that the plaintiffs' lost licensing revenue for AI training is not a cognizable harm.

Judge Chhabria's Market Dilution Prediction

In dicta, clearly expressing frustration with the outcome in Meta's favor, Judge Chhabria discussed in detail how he thought market harm could – and should – be shown in other cases, through the concept of 'market dilution': a system like Llama, while not producing direct substitutes for a plaintiff's work, could compete with and thus dilute the plaintiff's market. Some types of works, unlike award-winning fictional works, may be more susceptible to this harm, he said, such as news articles or 'typical human-created romance or spy novels.' But since the plaintiffs before him didn't make those arguments or present a record to support them, he could not rule on the theory. That opportunity is left for another day.

AI System Roadmap For Non-Infringement

Based on these two court decisions, here are my takeaways for building a roadmap for a non-infringing generative AI system using books:

Caitlin Clark Claims WNBA Was 'Sick' Over Commissioner's Cup
Yahoo · 39 minutes ago

Caitlin Clark had a lot to say on last night's Instagram Live. Celebrating the Fever's Commissioner's Cup victory with her teammates on Tuesday, the 2025 All-Star captain not only made a plea to WNBA commissioner Cathy Engelbert but also rubbed a bit of salt in the wound for the rest of the league.

As her teammates danced with champagne goggles and bottles in hand, Clark could be heard off-camera saying: "Guys, I just know everybody in the league is sick!"

Fans got a good laugh on X. "This isn't trash talk. This is truth talk," a user pointed out. "They are UPSET," another replied. "They sure are," a fan said. "I know Kelsey Plum is one person who embarrassed herself by forcing this lady to change her shirt in the first row. Then telling her, do Better. Absolutely unreal." "Damn they're aware 💔😂😂😂😂" another person commented. "BOW BOW BOW." "My little demon 😈" another user posted. "Aliyah followed up with 'They're pissed.' LOL." "Caitlin knows what time it is!" another fan exclaimed. "It's Fever vs Everyone and they know it!"

(Photo: Caitlin Clark #22 of the Indiana Fever during a July 6, 2024 game against the New York Liberty at Gainbridge Fieldhouse in Indianapolis. Justin Casterline/NBAE via Getty Images)

When Clark did arrive on-screen, she let viewers know about the odd pay discrepancy between winning the Cup and winning a WNBA title. "Listen, you get more for this [the Commissioner's Cup] than you do if you're a [WNBA] champion," the reigning Rookie of the Year said. "Makes no sense. Someone tell Cathy to help us out."

Don't know about you, but we kind of like this side of Caitlin Clark. This story was originally reported by The Spun on Jul 2, 2025.

Despite Protests, Elon Musk Secures Air Permit for xAI
WIRED · 39 minutes ago

Jul 2, 2025, 7:41 PM
xAI's gas turbines get official approval from Memphis, Tennessee, even as civil rights groups prepare to sue over alleged Clean Air Act violations.

A local health department in Memphis has granted Elon Musk's xAI data center an air permit to continue operating the gas turbines that power the company's Grok chatbot. The permit comes amid widespread community opposition and a looming lawsuit alleging the company violated the Clean Air Act. The Shelby County Health Department released its air permit for the xAI project Wednesday, after receiving hundreds of public comments. The news was first reported by the Daily Memphian.

In June, the Memphis Chamber of Commerce announced that xAI had chosen a site in Memphis to build its new supercomputer. The company's website boasts that it was able to build the supercomputer, Colossus, in just 122 days. That speed was due in part to the mobile gas turbines the company quickly began installing at the campus, the site of a former manufacturing facility. Colossus allowed xAI to quickly catch up to rivals OpenAI, Google, and Anthropic in building cutting-edge artificial intelligence. It was built using 100,000 Nvidia H100 GPUs, making it likely the world's largest supercomputer.

xAI's Memphis campus is located in a predominantly Black community known as Boxtown, which has historically been burdened with polluting industrial projects. Gas turbines like the ones xAI is using in Memphis can be a significant source of harmful emissions, such as nitrogen oxides, which create smog. Memphis already has some of the highest child asthma rates in Tennessee. Since xAI began running its turbines, residents have repeatedly met and rallied against the project. 'My neighbors and I are forced to breathe the pollution this company pumps into our air every day. We smell it. We inhale it. This isn't just an environmental issue — it's a public health emergency,' wrote State Rep. Justin Pearson, who grew up near Boxtown, in an MSNBC op-ed last week.

Under the Clean Air Act, 'major' sources of emissions — like a cluster of gas turbines — need a permit known as a Prevention of Significant Deterioration (PSD) permit. However, Shelby County Health Department officials told local reporters in August that this wasn't necessary for xAI, since its turbines weren't designed to be permanent. Amid mounting local opposition, xAI finally applied for a permit with the Shelby County Health Department in January, months after it first began running the turbines.

Last month, the NAACP and the Southern Environmental Law Center (SELC) announced that they intended to sue xAI for violating the Clean Air Act. 'xAI's decision to install and operate dozens of polluting gas turbines without any permits or public oversight is a clear violation of the Clean Air Act,' said senior SELC attorney Patrick Anderson in a press release. 'Over the last year, these turbines have pumped out pollution that threatens the health of Memphis families. This notice paves the way for a lawsuit that can hold xAI accountable for its unlawful refusal to get permits for its gas turbines.'

The new permit from the health department allows the company to operate 15 turbines on the site until 2027. In June, Memphis mayor Paul Young wrote an op-ed in the Tennessee Commercial Appeal noting that xAI was then operating 21 turbines. SELC says that aerial footage it took in April, however, showed as many as 35 turbines operating at the site.
xAI did not immediately respond to WIRED's request for comment, including questions about how many turbines it is currently operating at the facility. Shelby County also did not immediately respond to a request for comment.

In May, Sharon Wilson, a certified optical gas imaging thermographer, traveled to Memphis to film emissions from the site with a special optical gas imaging camera that records usually invisible emissions. Wilson tracks leaks from facilities in Texas's Permian Basin, one of the world's most prolific oil- and gas-producing regions. She alleged to WIRED that what she saw in Memphis was one of the densest clouds of emissions she'd ever seen. 'I expected to see the typical power plant type of pollution that I see,' she says. 'What I saw was way worse than what I expected.'

This is a developing story. Please check back for updates.
