
Mountainhead Review: A relevant, promising satire undone by heavy talk and blunted ideas
'Mountainhead' is a strange, slightly maddening film that wants to show us just how deluded tech billionaires can be. It's a drama, with flashes of black comedy, trying to get into the minds of the ultra-rich who genuinely believe they're here to shape the world — maybe even save it. The film centres on four characters and sets up an intriguing premise, but it never quite takes off. It often feels stuck in its own head, and the characters speak in such lofty, philosophical riddles that you begin to wonder who, exactly, this is for. Since it comes from Jesse Armstrong, the creator of the brilliant 'Succession,' it's hard not to feel let down by a film that could've had so much bite.
The plot revolves around four super-wealthy tech friends who call themselves the 'Brewsters' and have gathered at a luxury mountain retreat in Utah called Mountainhead — a not-so-subtle nod to Ayn Rand's 'The Fountainhead.' There's Venis (Cory Michael Smith), who runs a social media platform called Traam that's accidentally spreading AI-generated deepfakes across the globe. Then there's Jeff (Ramy Youssef), whose AI tech is spiraling into misuse, and Randall (Steve Carell), a powerful investor now grappling with terminal cancer. Their reunion, hosted by Souper (Jason Schwartzman), starts off with banter and passive-aggression but soon shifts into something darker. They turn their moral compass toward Jeff, eventually concluding that his invention is a threat to humanity. All of this unfolds while the world burns outside and they keep sipping rare whisky, as if the apocalypse were just another business issue to debate.
Armstrong treads familiar ground — the obscenely rich, cocooned from consequence — but where 'Succession' was sharp, messy, and emotionally alive, 'Mountainhead' is colder and more abstract. It also draws directly from real life: Venis' denial of responsibility for Traam's impact echoes Mark Zuckerberg's detachment during the 2016 US presidential election, while Randall's fixation on cheating death recalls Peter Thiel. Watch closely and you'll catch glimpses of Elon Musk, Sam Altman, and Sam Bankman-Fried in the characters too. There are clever moments, especially when the film leans into satire — like when Souper writes everyone's net worth in lipstick on their bare chests, or when Jeff's wealth overtakes Randall's by some obscure metric. But those flashes of absurdity don't carry through the whole film. Much of the dialogue is dense and philosophical, peppered with Kant and Plato, and after a while it stops feeling smart and starts feeling like noise.
The performances, though, are solid across the board. Jason Schwartzman, Steve Carell, Cory Michael Smith, and Ramy Youssef do what they can with characters who are often more like ideas than real people. The film does find a bit of momentum toward the end, when the outside world's chaos finally seeps into Mountainhead and shakes the group out of their bubble. It's the only point where the story feels like it has real stakes. Until then, it mostly meanders, unsure whether it wants to be a satire, a character study, or a tech-world fable.
In the end, 'Mountainhead' is more of a warning sign than a fully formed film. It has some compelling ideas — and certainly no shortage of ambition — but it's weighed down by its own cleverness. It wants to say something urgent about power, tech, and the people shaping our future, but it often gets lost in its own intellectual fog. There are moments that stick, but not enough to make the whole thing land. What remains is a sermon disguised as a satire.

Related Articles


Scroll.in · 2 hours ago
Can ChatGPT 'rot' your brain, as an MIT study claims?
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty? Most importantly, there has been concern that using AI will lead to a widespread 'dumbing down', or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.

Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to 'cognitive debt' and a 'likely decrease in learning skills'. So what did the study find?

Brain vs AI

Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains (the 'brain-only' group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays. The cognitive engagement of those who used AI was significantly lower than that of the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.

Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse, with engagement only slightly better than the brain-only group's in its first session and far below that group's engagement in its third session. The authors claim this demonstrates how prolonged use of AI led participants to accumulate 'cognitive debt'. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.

Cautiously, the authors note that only 18 participants (six per condition) completed the fourth, final session. The findings are therefore preliminary and require further testing.

Does AI really make us stupider?

These results do not necessarily mean that students who used AI accumulated 'cognitive debt'. In our view, the findings are due to the particular design of the study. The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.

When the AI group finally got to 'use their brains', they were doing the task only once. As a result, they could not match the familiarity the other group had built up, achieving only slightly better engagement than the brain-only group did in its first session. To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.

Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics. Because writing without AI had demanded more substantial engagement, they had far better recall of what they had written before, so they primarily used AI to search for new information and refine their earlier work.

What are the implications?
To understand the current situation with AI, we can look back at what happened when calculators first became available. Back in the 1970s, their impact was managed by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive effort on more complex tasks. Effectively, the bar was raised significantly, which made students work equally hard (if not harder) than they did before calculators were available.

The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. They still require students to complete the same tasks, and expect the same standard of work, as they did five years ago. In such situations, AI can indeed be detrimental: students can largely offload critical engagement with learning to AI, which results in 'metacognitive laziness'.

However, just like calculators, AI can and should help us accomplish tasks that were previously impossible, and that still require significant engagement. For example, we might ask teaching students to use AI to produce a detailed lesson plan, which will then be evaluated for quality and pedagogical soundness in an oral examination.

In the MIT study, participants who used AI were producing the 'same old' essays. They adjusted their engagement to deliver the standard of work expected of them. The same would happen if students were asked to perform complex calculations with or without a calculator: the group doing calculations by hand would sweat, while those with calculators would barely blink an eye.

Learning how to use AI

Current and future generations need to be able to think critically and creatively and to solve problems. However, AI is changing what these things mean. Producing essays with pen and paper is no longer a demonstration of critical-thinking ability, just as doing long division is no longer a demonstration of numeracy.

Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.

Vitomir Kovanovic is Associate Professor and Associate Director of the Centre for Change and Complexity in Learning (C3L), Education Futures, University of South Australia. Rebecca Marrone.


Time of India · 2 hours ago
Samsung TV Plus adds four B4U channels for free
Samsung TV Plus, the free ad-supported streaming television (FAST) platform, has expanded its content lineup with the addition of four popular channels from the B4U Network: B4U Movies, B4U Music, B4U Kadak, and B4U Bhojpuri. With this partnership, Samsung TV Plus now offers over 125 FAST channels.

'Our mission is to deliver unmatched access and exceptional value,' said Kunal Mehta, Head of Partnerships at Samsung TV Plus India. 'By introducing new FAST channels from B4U, we're enhancing access to the latest in entertainment while supporting advertisers with a premium, scalable platform.'

The B4U Network, which reaches audiences in over 100 countries, is known for its extensive library of Hindi cinema, regional content, and music programming. The collaboration taps into India's growing Connected TV (CTV) market, where viewers are increasingly turning to smart TVs and streaming platforms for curated content.

'CTV is transforming how India consumes entertainment,' said Johnson Jain, Chief Revenue Officer at B4U. 'Our partnership with Samsung TV Plus allows us to reach broader audiences with top-tier movies and music, delivered seamlessly on a premium platform.'

The new channels are available immediately on Samsung Smart TVs and compatible Galaxy devices, offering viewers a richer, more localized streaming experience, completely free of charge.


Mint · 7 hours ago
Going nuclear will be the only way to keep the lights on as AI guzzles ever more electricity
Nishant Sahdev

Artificial intelligence consumes energy in such bulk that its rise has thrown the world into an infrastructure emergency. Thankfully, nuclear power is not just viable; its risks have been on the decline. It's the only way out now.

Recently, I was in a conversation with MIT researchers on artificial intelligence (AI) and nuclear energy. While discussing the subject, we saw a video clip of a data centre that looked like a giant fridge but buzzed like a jet engine. Inside, thousands of AI chips were training a new language model—one that could write poems, analyse genomes or simulate the weather on Mars.

What struck me wasn't the intelligence of this machine. It was the sheer energy it was devouring. The engineer said, 'This one building consumes as much power as a small town.' That's when the magnitude of the challenge hit me: if AI is our future, how on earth will we power it?

All that intelligence takes energy. A lot of it. More than most people realize. And as someone who's spent years studying the physics of energy systems, I believe we are about to hit a hard wall. To be blunt: AI is growing faster than our ability to power it. And unless we confront this, the very tools meant to build our future could destabilize our energy systems—or drag us backward on climate. The AI industry has pinpointed one solution: nuclear energy.

Most people don't associate AI with power plants. But every chatbot and image generator is backed by vast data centres full of servers, fans and GPUs running day and night. These machines don't sip power; they guzzle it. In 2022, data centres worldwide consumed around 460 terawatt-hours. And that's just the baseline: Goldman Sachs projects that by 2030, AI data centres will use 165% more electricity than they did in 2023.

And it's not just about scale. It's about reliability. AI workloads can't wait for the sun to shine or the wind to blow. They need round-the-clock electricity, without fluctuations or outages. That rules out intermittent renewables for a large share of the load—at least for now.

Can power grids handle it?

The short answer: not without big changes. In the US, energy planners are already bracing for strain. States like Virginia and Georgia are seeing huge surges in electricity demand from tech campuses. One recent report estimated that by 2028, America will need 56 gigawatts of new power generation capacity just for data centres. That's equivalent to building 40 new large power plants in less than four years.

The irony? AI is often promoted as a solution to climate change. But without clean and scalable energy, its growth could have the opposite effect. By Google's own assessment, for example, its carbon emissions rose 51% from 2019 to 2024, largely on account of AI's appetite for power. This is an infrastructure emergency.
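The grid arithmetic above is easy to sanity-check. Here is a minimal Python sketch using only the figures quoted in the article (56 GW of new capacity, 40 plants, and the Goldman Sachs 165% growth projection); treating a 'large power plant' as a roughly 1.4 GW unit is an inference from those numbers, not a figure the article supplies.

```python
# Back-of-envelope check on the data-centre power figures quoted above.

NEW_CAPACITY_GW = 56   # new US generation capacity needed for data centres by 2028
PLANTS_CLAIMED = 40    # the article's "40 new large power plants"

# Implied size of each "large" plant: 56 GW / 40 plants = 1.4 GW.
gw_per_plant = NEW_CAPACITY_GW / PLANTS_CLAIMED
print(f"Implied capacity per plant: {gw_per_plant:.1f} GW")

# Goldman Sachs projection: AI data centres will use 165% more
# electricity in 2030 than in 2023, i.e. 2.65x the 2023 level.
growth_multiple = 1 + 165 / 100
print(f"2030 AI data-centre demand vs 2023: {growth_multiple:.2f}x")
```

At roughly 1.4 GW apiece, the '40 new large power plants' framing is consistent with the output of a typical large nuclear reactor or big coal-fired unit, which is part of why the nuclear option discussed below is a natural fit for loads of this size.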
Enter nuclear energy—long seen as a relic of the Cold War or a post-Chernobyl nightmare. In a world hungry for carbon-free baseload power, it is making a quiet comeback. Let's be clear: nuclear energy is the only scalable source of clean electricity in existence that runs 24/7. A single large reactor can power multiple data centres without emitting carbon or depending on weather conditions.

Tech companies are already acting. Microsoft signed a deal to reopen part of the Three Mile Island nuclear plant to power its AI operations. Google is investing in small modular reactors (SMRs), compact next-generation nuclear units designed to be safer and faster to build, and considered ideal for campuses. These are early signs of a strategic shift: AI companies are realizing that if they want to build the future, they'll have to power it themselves.

As a physicist, I've always been fascinated by nuclear energy's elegance. A single uranium pellet—smaller than a fingertip—holds the same energy as a tonne of coal. The energy density is unmatched. But it's not just about big reactors anymore. The excitement stems from advanced reactors: SMRs can be built in factories, shipped by truck and installed near tech campuses or even remote towns, while molten salt reactors and micro-reactors promise even greater safety and efficiency, with lower waste. New materials and AI-assisted monitoring make this technology far safer than past generations. For the first time in decades, nuclear power is both viable and vital.

But let's talk about the risks. I'm not naive: nuclear still carries a stigma—and poses real challenges. Take cost and time: building or reviving reactors takes years and billions of dollars, and even Microsoft's project will face regulatory hurdles. Or waste: we still need better systems for storing radioactive materials over the long term. Or consider control: if tech giants start building private nuclear plants, will public utilities fall behind? Who gets priority during shortages? And of course, we must be vigilant about safety and non-proliferation. The last thing we want is a tech-driven nuclear revival that ignores the hard lessons of history.

But here's the bigger risk: doing nothing. Letting power demand explode while we rely on fossil fuels to catch up would be a disaster.

We live in strange times. Our brightest engineers are teaching machines to think, but they still haven't solved how to power those machines sustainably. As a physicist, I believe we must act quickly—not just to make AI smarter, but to make its foundation stronger. Nuclear energy may not be perfect. But in the race to power our most powerful technology yet, it may be the smartest bet we've got. The AI revolution can't run on good intentions. It will run on electricity. But where will that electricity come from?

The author is a theoretical physicist at the University of North Carolina at Chapel Hill, United States. He posts on X @NishantSahdev.