Google's Sanjay Gupta Talks AI's Top Two Challenges And Importance Of 'On-The-Go' Content In Asia — APOS

Yahoo · 4 days ago

Google APAC president Sanjay Gupta called on the media and entertainment industries to embrace AI's opportunities, while acknowledging concerns around the protection of talent and creativity.
Speaking at APOS, currently underway in Bali, Gupta referred to this year's Google AI immersive production of 'The Wizard of Oz' at the Las Vegas Sphere as an example of AI's expansive opportunities for production and media.
Gupta described AI as both a 'profound pivot' and a 'magic wand,' urging audiences to imagine films that 'everybody can watch in real time in different languages.'
However, he also acknowledged two major industry concerns about AI: the protection of talent and the protection of creativity.
'The first concern is talent and what happens to talent,' said Gupta. 'We must think of AI as a tool that is augmenting us, that is a multiplier.'
Gupta added that the second concern is about the protection of creativity, which he said requires extended engagement with stakeholders.
He also acknowledged that the pace of these technological developments feels unprecedented, even to him. 'This pace of change, I've never experienced before even in my decades of experience,' said Gupta.
Beyond AI, Gupta discussed the growth in screens and screen time across Asia, as well as the need to provide more 'on-the-go' content and formats.
Gupta shared that the Asian region has 'four billion people each watching over seven hours of stories today across 5 billion screens.'
He also noted a major shift towards watching 'on-the-go' content across multiple screens, with Asia growing from roughly 2 billion screens a decade ago to 5 billion screens today.
'We are watching multiple genres and in a way that feels more and more personal. We are seeking stories for on-the-go consumption,' said Gupta. 'Throughout the day, we are switching screens.'
He added that there is still a lot of room for growth in the region, with APAC contributing around 15% of global revenues.
In creation and production, Gupta also acknowledged that creatives have increasingly used multiple formats and non-traditional media to tell their stories.
'They tell the story that they want to tell through videos long and short, through audio, images or games,' added Gupta.
He added that there will be further integration between the digital and physical worlds, through augmented reality, smart glasses and other forms of technology.
'Digital will blend even more seamlessly with the physical world,' added Gupta.


Related Articles

AI is learning to lie, scheme, and threaten its creators

Yahoo · 26 minutes ago

The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models: AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception."

Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

'No rules'

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.

"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability," an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.


Why CoreWeave Stock Plummeted This Week

Yahoo · 2 hours ago

CoreWeave saw a substantial valuation pullback this week, despite a bullish backdrop for AI stocks and the market at large. Investors sold out of the stock in response to analyst coverage and news that Nvidia is making a bigger push in cloud computing. Investors also appear to be concerned that CoreWeave could overpay to acquire Core Scientific.

Despite strong gains for the broader market, CoreWeave (NASDAQ: CRWV) stock closed out this week's trading down by double digits. The artificial intelligence (AI) specialist's share price fell 12.8% over the stretch. Meanwhile, the S&P 500 index rose 3.4%.

CoreWeave stock lost ground this week following fresh analyst coverage and news that Nvidia (NASDAQ: NVDA) is making a bigger push in the cloud computing space. The company's valuation was also pressured by reports that a big acquisition could be in the works.

Before the market opened on Wednesday, H.C. Wainwright published its first rating on CoreWeave. The investment firm set a neutral rating on the tech specialist's stock, with analyst Kevin Dede raising some valuation concerns while also acknowledging that CoreWeave had demonstrated its computing strengths.

The Wall Street Journal also published a report on Wednesday stating that Nvidia plans to ramp up its own cloud-computing business. Nvidia's advanced graphics processing units (GPUs) have been the key hardware at the center of the AI data center revolution, but the tech leader is also in the relatively early stages of building its own AI-as-a-service (AIaaS) business. The company is a financial backer of CoreWeave, but some investors are worried that the tech giant could move in on the smaller player's turf.

On Thursday, the WSJ reported that CoreWeave is negotiating a deal to acquire Core Scientific (NASDAQ: CORZ). According to the report, a buyout could be finalized within weeks and is expected to assign Core Scientific a substantial valuation premium. Based on subsequent trading in CoreWeave, the reaction from investors appears to be mixed.

Analysts are also split on what the buyout valuation might look like. Jefferies put forward a lower-end target, estimating that CoreWeave could pay between $16 and $23 per share to purchase Core Scientific. Cantor Fitzgerald put the potential buyout price at above $30 per share, and Roth Capital expects the company could pay as much as $38 per share in an all-stock deal. Given that CoreWeave tried to acquire Core Scientific at $5.75 per share last year, some investors may be worried that the company is at risk of overpaying in the potential buyout.

Before you buy stock in CoreWeave, consider this: the Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now, and CoreWeave wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $713,547!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $966,931!* Now, it's worth noting Stock Advisor's total average return is 1,062%, a market-crushing outperformance compared to 177% for the S&P 500. *Stock Advisor returns as of June 23, 2025.

Keith Noonan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy. Why CoreWeave Stock Plummeted This Week was originally published by The Motley Fool.
