Was 2024 the tipping point for AI and Hollywood?

Yahoo | 07-02-2025
2024 saw cinematic controversies as multiple films were revealed to have used AI in various stages of their productions.
2023 saw Hollywood writers go on strike to fight for fair pay and job protection against the growing use of AI. The settlement with the Writers Guild of America (WGA) produced new contracts featuring guardrails on how the technology can be used in production processes.
Despite this, multiple films released in 2024 have been revealed to have experimented with AI at various stages of their production processes.
Early in 2024, the independent horror film Late Night with the Devil was released to critical and commercial success. During its release it was hit with controversy when it emerged that images punctuating the film's 1970s late-night TV setting were generated using AI software.
Public response to this use of AI was one of concern, with many highlighting the slippery slope of using AI as a replacement for graphic designers. The directors of the film, brothers Colin and Cameron Cairnes, responded that AI was used 'in conjunction' with graphic and production designers.
Late Night with the Devil was made in 2022, before the WGA strike and when AI image generation was first entering the public sphere. Despite the controversy, the film still became a critical and independent box-office hit, but it also marked one of the first notable AI-related controversies of the year.
Respeecher is a Ukraine-based AI voice modulation tool. The software was used in the dialogue editing process of multiple high-profile films across the industry in 2024.
The Robbie Williams biopic Better Man used Respeecher for the dialogue and singing throughout the film. Respeecher has also made its way to the Oscars. Two of the big players in this year's Oscar race, the staggering The Brutalist and the continually controversial Emilia Perez, both used the Respeecher software in their dialogue editing.
Emilia Perez has been hit with controversy over its subject matter, and an onslaught of resurfaced racist tweets from its lead, Karla Sofía Gascón, has completely overshadowed any controversy surrounding the use of Respeecher.
The Brutalist has arguably faced more controversy over its use of the tool than Emilia Perez. Hungarian dialogue performed by Adrien Brody and Felicity Jones was modulated so that the vowel sounds would be perfect Hungarian. Rumours also hit The Brutalist over AI image generation being used for architectural drawings, but director Brady Corbet has denied these claims.
Regarding The Brutalist, many film fans hitting back at the use of AI have misunderstood how the software was used in the post-production editing process.

Outrage over lead actor Adrien Brody's accent being doctored and entirely AI-generated has been circulating on social media. In fact, only one scene of Hungarian dialogue was fine-tuned and edited, and many films already alter their dialogue in post-production, mostly through additional dialogue recording.
The use of AI here could be taken two ways. On the one hand, you could see it as simply an extension of editing software: the same sound engineers are performing their job in a fraction of the time.
On the other hand, having AI be so present in changing vocals could lead to completely generated performances. Respeecher has already been used to create Darth Vader performances using the voice of James Earl Jones in recent Star Wars projects like the Obi-Wan Kenobi series. Editing Hungarian vowels is one step; creating a full performance is another.
Whilst some in the industry are ready to adopt AI into their filmmaking, others are taking a stand and vocalising their distaste towards the tool.
The directors behind horror hit Heretic put a disclaimer in the credits of their film stating that no generative AI was used in the production.
Recently, Nicolas Cage spoke out against AI in an acceptance speech, stating that the tool will never replicate human emotion and dreamlike sensibility and that 'AI is interfering with your authentic and honest expression'.
"Was 2024 the tipping point for AI and Hollywood?" was originally created and published by Verdict, a GlobalData owned brand.

Related Articles

Amazon invests in Fable — the 'Netflix of AI' — where users can create TV shows with prompts

Geek Wire

44 minutes ago



The AI-generated Fable TV show 'Exit Valley' is an animated satire set in Sim Francisco, a simulated version of Silicon Valley. (Image via Fable/Showrunner)

Amazon is backing a San Francisco startup behind a platform that allows users to create AI-generated scenes and episodes for TV shows by simply typing in a prompt. The amount of Amazon's investment in Fable was not revealed Wednesday in reports by Variety, The Wrap and others.

Fable has called its Showrunner service the 'Netflix of AI,' where creators can use their own ideas and words to shape a story from scratch or inside a world someone else has already created. Visitors to the Showrunner website are directed to join Discord where they can watch and make episodes. The public launch features one original show called 'Exit Valley,' which is described as a 'Family Guy'-style satirical comedy set in Sim Francisco and taking on tech personalities such as Elon Musk and OpenAI's Sam Altman.

Fable previously released nine AI-generated episodes based on 'South Park,' created using its proprietary AI model. The episodes have been viewed more than 80 million times, according to the company. Fable was co-founded by CEO Edward Saatchi, who previously co-founded Oculus Story Studios, a division of Oculus VR, which was acquired by Meta.

'Hollywood streaming services are about to become two-way entertainment: audiences watching a season of a show [and] loving it will now be able to make new episodes with a few words and become characters with a photo,' Saatchi told Variety. 'Our relationship to entertainment will be totally different in the next five years.'

Showrunner is focused on animated content at the start because it requires much less processing power than realistic-looking video scenes, according to Variety. Saatchi told the magazine Fable wants to stay out of the 'knife fight' among big AI companies like OpenAI, Google and Meta that are racing to create photorealistic content.

How Google is working with Hollywood to bring AI to filmmaking

Fast Company

an hour ago



In filmmaking circles, AI is an ever-present topic of conversation. While AI will change filmmaking economics and could greenlight more experimental projects by reducing production costs, it also threatens jobs, intellectual property, and creative integrity—potentially cheapening the art form. Google, having developed cutting-edge AI tools spanning script development to text-to-video generation, is positioned as a key player in AI-assisted filmmaking. At the center of Google's cinema ambitions is Mira Lane, the company's vice president of tech and society and its point person on Hollywood studio partnerships. I spoke with Lane about Google's role as a creative partner to the film industry, current Hollywood collaborations, and how artists are embracing tools like Google's generative video editing suite Flow for preproduction, previsualization, and prototyping. This interview has been edited for length and clarity.

Can you tell me about the team you're running and your approach to AI in film?

I run a team called the Envisioning Studio. It sits within this group called Technology and Society. The whole ambition around the team is to showcase possibilities. . . . We take the latest technologies, latest models, latest products and we co-create with society because there's an ethos here that if you're going to disrupt society, you need to co-create with them, collaborate with them, and have them have a real say in the shape of the way that technology unfolds. I think too often a lot of technology companies will make something in isolation and then toss it over the fence, and then various parts of society are the recipients of it and they're reacting to it. I think we saw that with language models that came out three years ago or so where things just kind of went into the industry and into society and people struggled with engaging with them in a meaningful way. My team is very multidisciplinary.
There are philosophers on the team, researchers, developers, product thinkers, designers, and strategists. What we've been doing with the creative industry, mostly film this year—last year we worked on music as well—is fairly large collaborations. We bring filmmakers in, we show them what's possible, we make things with them, we embed with them sometimes, we hear their feedback. Then they get to shape things like Flow and Veo that have been launched. I think that we're learning a tremendous amount in that space because anything in the creative and art space right now has a lot of tension, and we want to be active collaborators there.

Have you been able to engage directly with the writers' and actors' unions?

We kind of work through the filmmakers on some of those. Darren Aronofsky, when we brought him in, actually engaged with the writers' unions and the actors' unions to talk about how he was going to approach filmmaking with Google—the number of staff and actors and the way they were going to have those folks embedded in the teams, the types of projects that the AI tools would be focused on. We do that through the filmmakers, and we think it's important to do it actually in partnership with the filmmakers because it's in context of what we're doing versus in some abstract way. That's a very important relationship to nurture.

Tell me about one of the films you've helped create.

Four weeks ago at Tribeca we launched a short film called Ancestra, created in partnership with Darren's production company, Primordial Soup. It's a hybrid type of model where there were live-action shots and AI shots. It's a story about a mother and a baby who's about to be born and the baby has a hole in its heart. It's a short about the universe coming together to help birth that baby and to make sure that it survives. It was based on a true story of the director being born with a hole in her heart.
There are some scenes that are just really hard to shoot, and babies—you can't have infants younger than 6 months on set. So how do you show an accurate depiction of a baby? We took photos from when she was born and constructed an AI version of that baby, and then generated it being held within the arms of a live actress as well. When you watch that film, you'll see these things where it's an AI-generated baby. You can't tell that it's AI-generated, but the scene is actually composed of half of it being live action, the other half being AI-generated. We had 150 people, maybe close to 200, working on that short film—the same number of people you would typically have working on a [feature-length] film. We saw some shifts in roles and new types of roles being created. There may even be an AI unit that's part of these films. There's usually a CGI unit, and we think there's probably going to be an AI unit that's created as well.

It sounds like you're trying to play a responsible role in how this impacts creators. What are the fruits of that approach?

We want to listen and learn. It's very rare for a technology company to develop the right thing from the very beginning. We want to co-create these tools, because if they're co-created they're useful and they're additive and they're an extension and augmentation, especially in the creative space. We don't want people to have to contort around the technology. We want the technology to be situated relative to what they need and what people are trying to do. There's a huge aspect of advancing the science, advancing the latest and greatest model development, advancing tooling. We learn a lot from engaging with . . . filmmakers. For example, we launched Flow [a generative video editing suite] and as we were launching it and developing it, a lot of the feedback from our filmmakers was, 'Hey, this tool is really helpful, but we work in teams.'
So how can you extend this to be a team-based tool instead of a tool that's for a single individual? We get a lot of really great feedback in terms of just core research and development, and then it becomes something that's actually useful. That's what we want to do. We want something that is helpful and useful and additive. We're having the conversations around roles and jobs at the same time.

How is this technology empowering filmmakers to tell stories they couldn't before?

In the film industry, they're struggling right now to get really innovative films out because a lot of the production studios want things that are guaranteed hits, and so you're starting to see certain patterns of movies coming out. But filmmakers want to tell richer stories. With the one that we launched at Tribeca, the director was like, 'I would never have been able to tell this story. No one would have funded it and it would have been incredibly hard to do. But now with these tools I can get that story out there.' We're seeing a lot of that—people generating and developing things that they would not have been funded for in the past, but now that gets great storytelling out the door as well. It's incredibly empowering. These tools are incredibly powerful because they reduce the costs of some of the things that are really hard to do. Certain scenes are very expensive. You want to do a car chase, for example—that's a really expensive scene. We've seen some people take these tools and create pitches that they can then take to a studio and say, 'Hey, would you fund this? Here's my concept.' They're really good at the previsualization stage, and they can kind of get you in the door. Whereas in the past, maybe you brought storyboards in or it was more expensive to create that pitch, now you can do that pretty quickly.

Are we at the point where you can write a prompt and generate an entire film?
I don't think the technology is there where you can write a prompt and generate an entire film and have it land in the right way. There is so much involved in filmmaking that is beyond writing a prompt. There's character development and the right cinematography. . . . There's a lot of nuance in filmmaking. We're pretty far from that. If somebody's selling that, I think I would be really skeptical. What I would say is you can generate segments of that film that are really helpful and [AI] is great for certain things. For short films it's really good. For feature films, there's still a lot of work in the process. I don't think we're in the stage where you're going to automate out the artist in any way. Nobody wants that necessarily. Filmmaking and storytelling is actually pretty complex. You need good taste as well; there's an art to storytelling that you can't really automate.

Is there a disconnect between what Silicon Valley thinks is possible and what Hollywood actually wants?

I think everybody thinks the technology is further along than it is. There's a perception that the technology is much more capable. I think that's where some of the fear is actually, because they're imagining what this can do because of the stories that have been told about these technologies. We just put it in the hands of people and they see the contours of it and the edges and what it's good and bad at, and then they're a little less worried. They're like, 'Oh, I understand this now.' That said, I look at where the technology was two years ago for film and where it is now. The improvements have been remarkable. Two years ago every [generated] film had six fingers and everything was morphed and really not there—there was no photorealism. You couldn't do live-action shots. And in two years we've made incredible progress. I think in another two years, we're going to have another big step change.
We have to recognize we're not as advanced as we think we are, but also that the technology is moving really fast. These partnerships are important because if we're going to have this sort of accelerated technology development, we need these parts of our society that are affected to be deeply involved and actively shaping it so that the thing we have in two years is what is actually useful and valuable in that industry.

What kinds of scenes or elements are becoming easier to create with AI?

Anything that is complex that you tend to see a lot of starts to get easier, because we have a lot of training data around it—you've seen lots of movies with car chases in them. There are scenes of the universe—we've got amazing photography from the Hubble telescope. We've got great microscopic photography. All of those types of things that are complicated and hard to do in real life, those you can generate a lot easier because we have lots of examples of those and it's been done in the past. The ones that are hard are ones where you want really strong eye contact between characters, and where the characters are showing a more complex range of emotions.

How would you describe where we're at with the uptake of these tools in the industry?

I think that we're in a state where there's a lot of experimentation. It's kind of that stage where there's something new that's been developed and what you tend to do when there's something new is you tend to try to re-create the past—what you used to do with [older] tools. We're in that stage where I think people are trying to use these new tools to re-create the same kinds of stories that they used to tell, but the real gem is when you jump past that and you do new types of things and new types of stories. I'll give you one example. Brian Eno did a set of generative films; every time you went to the theater you saw a different version of that film. It was generated, it was different, it was unique.
It still had the same backbone but it was a different story every time you saw it. That's a new type of storytelling. I think we're going to see more types of things like that. But first we have to get through this phase of experimentation and understanding the tools, and then we'll get to all the new things we can do with it.

SOUNDRAW Launches Global 'Beat the Future' Contest with $2,500 Prize Pool and Free AI Music Access

Associated Press

2 hours ago



AI music platform SOUNDRAW invites global artists to enter its 'Beat the Future' contest by customizing AI-generated beats. Winners will share $2,500 in prizes, plus spotlight features and SOUNDRAW subscriptions.

Tokyo, Japan, July 31, 2025 -- SOUNDRAW, an ethical AI-powered music generator built by a team of professional producers in Japan, has announced the launch of 'Beat the Future,' a global contest designed to spotlight emerging artists across the music industry. The competition is open to producers, singers, and rappers worldwide, encouraging participants to create original music using SOUNDRAW's platform. The contest aims to demonstrate that with the right tools, anyone—regardless of background or experience—can create high-quality music.

SOUNDRAW's platform enables fast, genre-specific beat generation using features such as stem downloads, instrument swapping, track length customization, and genre mixing. All music is fully copyright-safe, as it is generated from in-house content rather than scraped data.

'Beat the Future' builds on the success of SOUNDRAW's recent RAW RAP CHALLENGE series, which featured 10 emerging rappers performing on the popular show On The Radar.

Contest Details:

Prizes:

The contest is free to enter and open to creators of all skill levels. SOUNDRAW is widely used by independent musicians, content creators, and YouTubers for generating royalty-free music tailored to their needs.

'With Beat the Future, we want to empower bedroom producers, TikTok stars, and undiscovered talents to show off what they can do with a little creativity and a powerful tool,' said Anita Baumgärtner, Head of Marketing at SOUNDRAW.

To learn more or submit your entry, visit the official contest page at

About SOUNDRAW

SOUNDRAW is an AI-powered music generation platform designed to support artists, creators, and musicians in producing original tracks quickly and ethically.
Unlike many AI tools, SOUNDRAW generates music from proprietary, in-house content, ensuring full copyright safety for commercial use.

Contact Info:
Name: SOUNDRAW
Email: Send Email
Organization: SOUNDRAW
Address: Yoyogi 5-63-4 3F, Shibuya ward, Tokyo, Japan 151-0053
Website:
Release ID: 89166128
