Voice actors push back as AI threatens dubbing industry

Japan Times | 20 hours ago
Boris Rehlinger may not turn heads on the streets of Paris, but his voice is instantly recognizable to millions of French filmgoers.
As the French voice of Ben Affleck, Joaquin Phoenix, and even Puss in Boots, Rehlinger is a star behind the scenes — and now he is fighting to keep his craft alive in the age of AI.
"I feel threatened even though my voice hasn't been replaced by AI yet," said the actor, who is part of a French initiative, TouchePasMaVF, to protect human-created dubbing from artificial intelligence.
He said a team of professionals, including actors, translators, production directors, dialogue adapters and sound engineers, works to ensure audiences barely notice that the actor on screen is speaking a different language than the one they hear.
The rise of global streaming platforms such as Netflix, which relies heavily on dubbing to turn shows such as "Squid Game" and "Lupin" into global hits, has amplified demand.
Consumer research firm GWI says 43% of viewers in Germany, France, Italy and Britain prefer dubbed content over subtitles.
The market is expected to grow to $4.3 billion in 2025, reaching $7.6 billion by 2033, according to Business Research Insights.
That growth could also amplify demand for still-nascent technology-based solutions, as platforms compete for subscribers and revenue and court advertisers by emphasizing their expanding reach.
But as AI-generated voices become more sophisticated and cost-effective, voice actor industry associations across Europe are calling on the EU to tighten regulations to protect quality, jobs and artists' back catalogs from being used to create future dubbed work.
"We need legislation," Rehlinger said. "Just as after the car, which replaced the horse-drawn carriage, we need a highway code."
A sound engineer in a dubbing studio in Munich | REUTERS
Worries over technology in the movie industry and whether it will replace the work of humans are not new. AI has been a flashpoint in Hollywood since the labor unrest of 2023, which resulted in new guidelines for the use of the technology.
Netflix co-CEO Ted Sarandos said this month that the company used generative AI to produce visual effects for the first time on screen in the original series "The Eternaut."
It has also tested generative AI to synchronize actors' lip movements with dubbed dialogue to improve the viewing experience, according to three sources familiar with the work.
These experiments rely on local voice actors to deliver the lines, rather than use AI to synthetically translate the on-screen performer's voice into another language.
Such use of AI for dubbing is permitted under the new SAG-AFTRA actors' union contract, which covers voiceover dubbing from foreign languages into English. It also requires that the actor rendering the dubbing service be paid.
Netflix declined to comment on its use of AI in dubbing.
Intellectual property
Such test runs by an industry giant will do little to allay the fears of dubbing actors.
In Germany, 12 well-known voice actors went viral on TikTok in March with a campaign declaring "Let's protect artistic, not artificial, intelligence," garnering 8.7 million views.
A petition from the VDS voice actors' association gained more than 75,500 signatures. It calls on German and EU lawmakers to require AI companies to obtain explicit consent before training the technology on artists' voices, to compensate artists fairly and to transparently label AI-generated content.
When intellectual property is no longer protected, no one will produce anything anymore, "because they think 'tomorrow it will be stolen from me anyway,'" said Cedric Cavatore, a VDS member who has dubbed films and video games including the PlayStation game "Final Fantasy VII Remake."
VDS collaborates with United Voice Artists, a global network of over 20,000 voice actors advocating for ethical AI use and fair contracts.
In the United States, Hollywood video game voice and motion capture actors this month signed a new AI-focused contract with video game studios, which SAG-AFTRA said represented important progress on protections against the technology.
Studios experiment
Some studios are already cautiously exploring AI.
Eberhard Weckerle, managing director of the Neue Tonfilm Muenchen studio, hopes AI and human dubbing can one day coexist.
"The fear is that AI will be used to make something as cheap as possible and then people will say, 'Okay, I'll accept that I'll have poorer quality.' And that would actually be the worst thing that could happen to us," said the sound engineer whose studio worked on the German version of "Conclave" and is currently dubbing Guy Ritchie's new film.
A dubbing actor stands in a dubbing studio in Munich. | REUTERS
Earlier this year, the German-dubbed version of streaming service Viaplay's Polish crime series "Murderesses" was removed after criticism from viewers about the monotony of its AI-generated dialogue.
The streamer had looked into alternative dubbing options because traditional dubbing in Germany can be prohibitively expensive.
The hybrid dubbing, created with Israeli startup DeepDub, used a mix of human and AI voices. DeepDub did not respond to an emailed request for comment.
"We'll continue offering subtitles and reserve dubbing for select content," said Vanda Rapti, the executive vice president of Viaplay Group, Viaplay Select & Content Distribution.
Despite the disquiet over that series, other potential viewers seem more sanguine. According to GWI, nearly half of viewers said their opinion would not change if they learned that the content they liked was generated by AI.
Some 25% said they would like it slightly less, and only 3% said they would like it much more.
'Interest is huge'
Stefan Sporn, CEO of Audio Innovation Lab, which used AI to dub the Cannes Film Festival entry "Black Dog" from Chinese to German, believes AI will reshape, but not replace, voice work.
Humans will always be needed for emotion, scripting, and language nuance, he said, "just not to the same extent."
Audio Innovation Lab's technology alters the original actor's voice to match the target language, aiming for authenticity and efficiency.
"Interest is huge," said Sporn, adding that producers, studios and advertisers all want to know how well it works.
Another startup, Flawless AI, bills itself as an ethical AI company that works with local voice actors and uses its technology to match the on-screen actor's lip movements to the different languages.
"When AI technologies are used in the right way, they are a silver bullet to change how we can film-make in a new way," co-CEO Scott Mann said.

Related Articles

Can AI think – and should it? What it means to think, from Plato to ChatGPT

Japan Today | an hour ago

By Ryan Leack

In my writing and rhetoric courses, students have plenty of opinions on whether AI is intelligent: how well it can assess, analyze, evaluate and communicate information. When I ask whether artificial intelligence can 'think,' however, I often look upon a sea of blank faces. What is 'thinking,' and how is it the same or different from 'intelligence'? We might treat the two as more or less synonymous, but philosophers have marked nuances for millennia. Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what's at stake with AI today.

The divided line

Although the English words 'intellect' and 'thinking' do not have direct counterparts in the ancient Greek, looking at ancient texts offers useful comparisons. In 'Republic,' for example, Plato uses the analogy of a 'divided line' separating higher and lower forms of understanding.

Plato, who taught in the fourth century BCE, argued that each person has an intuitive capacity to recognize the truth. He called this the highest form of understanding: 'noesis.' Noesis enables apprehension beyond reason, belief or sensory perception. It's one form of 'knowing' something – but in Plato's view, it's also a property of the soul.

Lower down, but still above his 'dividing line,' is 'dianoia,' or reason, which relies on argumentation. Below the line, his lower forms of understanding are 'pistis,' or belief, and 'eikasia,' imagination. Pistis is belief influenced by experience and sensory perception: input that someone can critically examine and reason about. Plato defines eikasia, meanwhile, as baseless opinion rooted in false perception.

In Plato's hierarchy of mental capacities, direct, intuitive understanding is at the top, and moment-to-moment physical input toward the bottom. The top of the hierarchy leads to true and absolute knowledge, while the bottom lends itself to false impressions and beliefs.

But intuition, according to Plato, is part of the soul, and embodied in human form. Perceiving reality transcends the body – but still needs one. So, while Plato does not differentiate 'intelligence' and 'thinking,' I would argue that his distinctions can help us think about AI. Without being embodied, AI may not 'think' or 'understand' the way humans do. Eikasia – the lowest form of comprehension, based on false perceptions – may be similar to AI's frequent 'hallucinations,' when it makes up information that seems plausible but is actually inaccurate.

Embodied thinking

Aristotle, Plato's student, sheds more light on intelligence and thinking. In 'On the Soul,' Aristotle distinguishes 'active' from 'passive' intellect. Active intellect, which he called 'nous,' is immaterial. It makes meaning from experience, but transcends bodily perception. Passive intellect is bodily, receiving sensory impressions without reasoning. We could say that these active and passive processes, put together, constitute 'thinking.'

Today, the word 'intelligence' holds a logical quality that AI's calculations may conceivably replicate. Aristotle, however, like Plato, suggests that to 'think' requires an embodied form and goes beyond reason alone.

Aristotle's views on rhetoric also show that deliberation and judgment require a body, feeling and experience. We might think of rhetoric as persuasion, but it is actually more about observation: observing and evaluating how evidence, emotion and character shape people's thinking and decisions. Facts matter, but emotions and people move us – and it seems questionable whether AI utilizes rhetoric in this way.

Finally, Aristotle's concept of 'phronesis' sheds further light on AI's capacity to think. In 'Nicomachean Ethics,' he defines phronesis as 'practical wisdom' or 'prudence.' 'Phronesis' involves lived experience that determines not only right thought, but also how to apply those thoughts to 'good ends,' or virtuous actions. AI may analyze large datasets to reach its conclusions, but 'phronesis' goes beyond information to consult wisdom and moral insight.

'Thinking' robots?

Today, AI takes on many physical forms, from autonomous vehicles and humanoid robots to health care robots and drones. If intelligence requires embodiment, as Plato and Aristotle believed, it might seem as if AI is coming closer than ever to human thinking. AI does take on many forms, but they're still not bodies; they run on code, algorithms and datasets. Intuitive understanding, emotion, integrity and practical wisdom seem to require an embodied, perishable form that is moved by experience.

Perhaps we should ask AI itself – so I did. When I prompted ChatGPT with the simple question, 'Can you think?' it responded: 'I can process information, analyze patterns, and generate responses based on data. So in some ways, it might 'seem' like I'm thinking. But it's not thinking like a human does. I don't have consciousness, emotions, intentions, or awareness. Everything I 'do' is based on patterns learned from huge amounts of text and a set of rules and probabilities. I can 'simulate' certain aspects of thinking, but I don't truly think or understand in the human sense.'

That response aligns with Plato and Aristotle's views. For now.

Ryan Leack is Assistant Professor of Writing, USC Dornsife College of Letters, Arts and Sciences. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

© The Conversation

Netflix is now using generative AI – but it risks leaving viewers and creatives behind

Japan Today | 3 hours ago

By Edward White

Netflix's recent use of generative AI to create a building collapse scene in the sci-fi show "El Eternauta" (The Eternaut) marks more than a technological milestone. It reveals a fundamental psychological tension about what makes entertainment authentic. The sequence represents the streaming giant's first official deployment of text-to-video AI in final footage. According to Netflix, it was completed ten times faster than traditional methods would have allowed. Yet this efficiency gain illuminates a deeper question rooted in human psychology. When viewers discover their entertainment contains AI, does this revelation of algorithmic authorship trigger the same cognitive dissonance we experience when discovering we've been seduced by misinformation?

The shift from traditional CGI (computer-generated imagery) to generative AI is the most significant change in visual effects (VFX) since computer graphics displaced physical effects. Traditional physical VFX requires legions of artists meticulously crafting mesh-based models, spending weeks perfecting each element's geometry, lighting and animation. Even the use of CGI with green screens demands human artists to construct every digital element from 3D models and programme the simulations. They have to manually key-frame each moment, setting points to show how things move or change.

Netflix's generative AI approach marks a fundamental shift. Instead of building digital scenes piece by piece, artists simply describe what they want and algorithms generate full sequences instantly. This turns a slow, laborious craft into something more like a creative conversation. But it also raises tough questions. Are we seeing a new stage of technology – or the replacement of human creativity with algorithmic guesswork?

"El Eternauta's" building collapse scene demonstrates this transformation starkly. What would once have demanded months of modeling, rigging and simulation work has been accomplished through text-to-video generation in a fraction of the time.

The economics driving this transformation extend far beyond Netflix's creative ambitions. The text-to-video AI market is projected to be worth $1.77 billion by 2029. This reflects an industry looking to cut corners after the streaming budget cuts of 2022. In that year, Netflix's content spending declined 4.6%, while Disney and other major studios implemented widespread cost-cutting measures.

AI's cost disruption is bewildering. Traditional VFX sequences can cost thousands per minute. As a result, the average CGI and VFX budget for U.S. films reached $33.7 million per movie in 2018. Generative AI could lead to cost reductions of 10% across the media industry, and as much as 30% in TV and film. This will enable previously impossible creative visions to be realized by independent filmmakers – but this increased accessibility comes with losses too.

The OECD reports that 27% of jobs worldwide are at 'high risk of automation' due to AI. Meanwhile, surveys by the International Alliance of Theatrical Stage Employees have revealed that 70% of VFX workers do unpaid overtime, and only 12% have health insurance. Clearly, the industry is already under pressure.

Power versus precision

While AI grants filmmakers unprecedented access to complex imagery, it simultaneously strips away the granular control that defines directorial vision. As an experiment, film director Ascanio Malgarini spent a year creating an AI-generated short film called "Kraken" (2025). He used AI tools like MidJourney, Kling, Runway and Sora, but found that 'full control over every detail' was 'simply out of the question'. Malgarini described working more like a documentary editor. He assembled 'vast amounts of footage from different sources' rather than directing precise shots.

And it's not just filmmakers who prefer the human touch. In the art world, studies have shown that viewers strongly prefer original artworks to pixel-perfect AI copies. Participants cited sensitivity to the creative process as fundamental to appreciation. When applied to AI-generated content, this bias creates fascinating contradictions. Recent research in Frontiers in Psychology found that when participants didn't know the origin, they significantly preferred AI-generated artwork to human-made ones. However, once AI authorship was revealed, the same content suffered reduced perceptions of authenticity and creativity.

Hollywood's AI reckoning

Developments in AI are happening amid a regulatory vacuum. While the U.S. Congress held multiple AI hearings in 2023, no comprehensive federal AI legislation exists to govern Hollywood's use. The stalled U.S. Generative AI Copyright Disclosure Act leaves creators without legal protections, as companies deploy AI systems trained on potentially copyrighted materials. The UK faces similar challenges, with the government launching a consultation in December 2024 on copyright and AI reform. This included a proposal for an 'opt-out' system, meaning creators could actively prevent their work from being used in AI training.

The 2023 Hollywood strikes crystallised industry fears about AI displacement. Screenwriters secured protections ensuring AI cannot write or rewrite material, while actors negotiated consent requirements for digital replicas. Yet these agreements primarily cover the directors, producers and lead actors who have the most negotiating power, while VFX workers remain vulnerable.

Copyright litigation is now beginning to dominate the AI landscape – over 30 infringement lawsuits have been filed against AI companies since 2020. Disney and Universal's landmark June 2025 lawsuit against Midjourney represents the first major studio copyright challenge, alleging the AI firm created a 'bottomless pit of plagiarism' by training on copyrighted characters without permission. Meanwhile, federal courts in the U.S. have delivered mixed rulings. A Delaware judge found against AI company Ross Intelligence for training on copyrighted legal content, while others have partially sided with fair use defenses.

The industry faces an acceleration problem – AI advancement outpaces contract negotiations and psychological adaptation. AI is reshaping industry demands, yet 96% of VFX artists report receiving no AI training, with 31% citing this as a barrier to incorporating AI in their work.

Netflix's AI integration shows that Hollywood is grappling with fundamental questions about creativity, authenticity and human value in entertainment. Without comprehensive AI regulation and retraining programs, the industry risks a future where technological capability advances faster than legal frameworks, worker adaptation and public acceptance can accommodate. As audiences begin recognizing AI's invisible hand in their entertainment, the industry must navigate not just economic disruption, but the cognitive biases that shape how we perceive and value creative work.

Edward White is a PhD Candidate in Psychology, Kingston University, London. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

© The Conversation

Will Trump's tech policies propel U.S. success against China?

Japan Times | 16 hours ago

Technology is the key to the confrontation between the United States and China, and the ability to innovate lies at the heart of this competition — especially in the ever-expanding and crucial field of artificial intelligence. However, an increasing emphasis on AI development at the expense of regulation raises concerns, given that rules were being strengthened to mitigate national security, human-rights and safety risks. A rollback that overlooks these issues could have consequences for the U.S. and the world. It may also put America at odds with Europe, which has prioritized regulation, thereby disrupting international cooperation on AI governance.
