Spoilers! Why 'M3GAN 2.0' is actually a 'redemption story'

USA Today | 21 hours ago

Spoiler alert! We're discussing major details about the ending of 'M3GAN 2.0' (in theaters now), so beware if you haven't seen it yet.
'You wouldn't give your child cocaine. Why would you give them a smartphone?'
That's the sardonic hypothetical posed by roboticist Gemma (Allison Williams) at the start of 'M3GAN 2.0,' a high-octane sequel to the 2023 hit horror comedy. When the new movie picks up, Gemma is tirelessly advocating for government oversight of artificial intelligence, after creating a bratty, pussy-bowed animatronic named M3GAN that killed four people and a dog in the original film.
'Honestly, Gemma has a point,' jokes Williams, the mother of a 3-year-old, Arlo, with actor Alexander Dreymon. 'Any time my son looks at my screen, I'm like, 'This does feel like the way people react to cocaine. This is not going to be easy to remove from his presence.' '
The first movie was an allegory about parenting and how technology compromises the emotional bonds we share with one another. But in the action-packed follow-up, writer/director Gerard Johnstone wanted to explore the real-life ramifications of having M3GAN-like technology unleashed on the world.
'With the way AI was changing, and the conversation around AI was evolving, it opened up a door narratively to where we could go in the sequel,' Johnstone says.
How does 'M3GAN 2.0' end?
'M3GAN 2.0' introduces a new villain in Amelia (Ivanna Sakhno), a weapons-grade automaton built by the U.S. military using M3GAN's stolen programming. But when Amelia goes rogue on a lethal quest to put AI in charge of the world, Gemma comes to realize that M3GAN is the only one who can stop her.
Gemma reluctantly agrees to rebuild her impudent robot in a new body, and the sequel ends with an explosive showdown between Amelia and M3GAN, who nearly dies in a noble attempt to save Gemma and her niece, Cady (Violet McGraw).
'If Amelia walked out of that intact, that's a very different world we're all living in. M3GAN literally saves the world,' Williams says. 'When the first movie ends, you're like, 'Oh, she's a bad seed and I'm glad she's gone.' But by the end of this movie, you have completely different feelings about her. There's a feeling of relief when you realize she's still here, which is indicative of how much ground gets covered in this movie.'
M3GAN's willingness to sacrifice herself shows real growth from the deadpanning android that audiences fell in love with two years ago. But Johnstone has always felt 'a strong empathy' towards M3GAN and never wanted to make her an outright villain.
Even in the first film, 'everything she does is a result of her programming,' Johnstone says. 'As soon as she does something that Gemma disagrees with, Gemma tries to turn her off, erase her, reprogram her, and effectively kill her. So from that point of view, M3GAN does feel rightly short-changed.'
M3GAN's desire to prove herself, and take the moral high ground, is 'what this movie was really about,' Johnstone adds. 'I love redemption stories.'
Does 'M3GAN 2.0' set up a third movie?
For Williams, part of the appeal of a sequel was getting to play with how M3GAN exists in the world, after her doll exterior was destroyed in the first movie. M3GAN is offscreen for much of this film, with only her voice inhabiting everything from a sports car to a cutesy smart home assistant.
'She's just iterating constantly, which tore through a persona that we've come to know and love,' Williams says. 'It's an extremely cool exercise in a movie like this, where we get to end the movie with a much deeper understanding of who this character is. We've now interacted with her in so many different forms, and yet we still feel the consistency of who she 'is.' That's really the fun of it.'
In a way, 'she's like this digital poltergeist that's haunting them from another dimension,' Johnstone adds. 'It was a way to remind people she's more than a doll in a dress – she's an entity.'
In the final scene of 'M3GAN 2.0,' we see the character living inside Gemma's computer, in a nostalgic nod to the Microsoft Word paper clip helper. (As millennials, 'our relationship with Clippy was very codependent and very complicated,' Williams quips.)
But if there is a third 'M3GAN' movie, it's unlikely that you'll see her trapped in that virtual realm forever.
'M3GAN always needs to maintain a physical form,' Johnstone says. 'One aspect of AI philosophy that we address in this film is this idea of embodiment: If AI is ever going to achieve true consciousness, it has to have a physical form so it can feel anchored. So that's certainly M3GAN's point of view at the beginning of the movie: She feels that if she stays in this formless form for too long, she's going to fragment.
'M3GAN always has to be in a physical body that she recognizes – it's another reason why she won't change her face, even if it draws attention to herself. It's like, 'This is who I am and I'm not changing.' '


Related Articles

The Future of AI-Powered Treatment Discovery

Time Business News | 43 minutes ago

The future of treatment discovery is changing fast with the help of artificial intelligence (AI). As technology improves, AI is becoming a powerful tool in the healthcare world, especially for finding new and better ways to treat diseases. With rising health challenges and complex conditions, AI has the potential to completely change how new medicines are developed. This article explains how AI is shaping the future of treatment discovery, the role of data science, and how people can prepare for these changes through a data science course in Hyderabad.

AI is now playing a very important role in many industries, including healthcare. In the past, finding new treatments was a long, expensive, and often uncertain process. But with AI, this can become much faster and more accurate. Machine learning and deep learning tools can process huge amounts of information quickly, spotting patterns and connections that humans might miss. This ability is especially useful in discovering new therapies, where a lot of biological and chemical data needs to be analyzed.

AI is already making a big difference in the early stages of finding new treatments. Earlier, researchers often depended on trial and error to find chemical compounds that could help treat diseases. Now, AI allows this process to become more targeted and based on data. Machine learning models can predict how effective a compound might be against a specific disease. This helps save time and money compared to traditional methods. AI tools can also suggest possible side effects and point out which natural or lab-based compounds are most likely to work. This helps scientists focus only on the most promising options, improving the chances of success.
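
To make the compound-screening idea above concrete, here is a minimal sketch in Python using scikit-learn. It is illustrative only: the descriptor values, activity labels, and candidate compounds are randomly generated stand-ins, whereas a real virtual-screening pipeline would use curated assay data and chemistry-aware features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Pretend each compound is summarized by 16 numeric descriptors
# (molecular weight, solubility, and so on); label 1 = active against the target.
# These values are synthetic stand-ins, not real chemistry data.
X = rng.normal(size=(500, 16))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a classifier on compounds with known outcomes.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Check how well predicted activity ranks the held-out compounds.
scores = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out compounds: {roc_auc_score(y_test, scores):.2f}")

# Rank new, untested candidates so lab work focuses on the most promising.
candidates = rng.normal(size=(5, 16))
priority = np.argsort(-model.predict_proba(candidates)[:, 1])
print("Screening priority (best candidate first):", priority.tolist())

The workflow, not the particular model, is the point: learn from compounds with known outcomes, then rank unknowns so the most promising ones reach the lab first.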

Data science plays a key role in helping AI deliver useful results in treatment discovery. There's a massive amount of data involved, from clinical trials to genetic details, and managing it requires special skills. A data science course can teach individuals how to work with this type of information. These programs cover tools like machine learning and statistical analysis, which are critical for turning raw data into meaningful insights.

One of the most exciting uses of AI is in personalized or precision medicine. This means creating treatments based on each person's unique genetic background, lifestyle, and health conditions. AI can study genetic data and predict which therapies are likely to work best for specific patients. This helps move away from the old one-size-fits-all method and brings in more customized care that works better and has fewer side effects. For AI to succeed in this area, skilled data scientists must be able to manage and understand large sets of health data, including medical history, clinical reports, and genetic information.

One of the biggest advantages of AI is speed. Normally, it takes many years, sometimes decades, to bring a new treatment to market. It's a long and costly journey, and success is never guaranteed. AI can cut this timeline down dramatically. With its ability to quickly analyze large datasets, AI can find promising compounds in weeks or months. This is especially useful for finding cures for diseases that spread fast or don't yet have effective treatment options.

Even though AI has great potential, there are challenges that need attention. One major issue is the availability and quality of data. AI systems need reliable, organized data to give correct predictions. Unfortunately, healthcare data is often scattered, incomplete, or unstructured, which makes things difficult for AI tools. Another challenge is the lack of skilled professionals. Working with AI in medicine needs people who understand machine learning, biology, and data science. That's why specialized training programs, like data science courses in Hyderabad, are becoming more important.

As AI continues to change how treatments are discovered, the role of data scientists will become even more important. These professionals will design and improve the AI systems that lead to better medical solutions. They will also make sure that the data being used is accurate and helpful. To do this job well, data scientists need a strong understanding of both computer science and biology. They'll need to work closely with doctors, researchers, and scientists to turn medical questions into data-based answers. With this teamwork, they can help develop new medicines that could change lives.

AI in treatment discovery is not limited to any one country. Around the world, AI is being used to solve health problems, even in places where access to traditional healthcare is limited. By making the development process faster and more efficient, AI can bring new treatments to markets that were often ignored. It's also helping researchers work on cures for major global diseases like cancer, Alzheimer's, and various infections. By studying worldwide health data, AI can uncover new solutions that might otherwise go unnoticed. As AI keeps improving, its effect on healthcare will be huge, helping millions by speeding up the creation of life-saving therapies.

The future of AI in discovering and developing treatments looks very bright. AI can completely change how we create medicines, making the process faster, more affordable, and more precise. With the help of data science, researchers can find better solutions for serious health issues, giving hope to patients around the world. As technology continues to grow, we'll see even more progress in treatment discovery, leading to better care and healthier lives. The future of healthcare and AI is closely linked, and those ready to embrace it will help lead the way in medical innovation.

ExcelR – Data Science, Data Analytics, and Business Analyst Course Training in Hyderabad
Address: Cyber Towers, PHASE-2, 5th Floor, Quadrant-2, HITEC City, Hyderabad, Telangana 500081
Phone: 096321 56744

AI is learning to lie, scheme, and threaten its creators

Yahoo | 7 hours ago

The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models: AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

'No rules'

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.
"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around.". Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability. tu/arp/md

