Google's new AI tool creates the most realistic videos yet

9 News · 03-06-2025
Google's new AI video generator Veo 3, announced just last month, is causing shockwaves online with its photorealistic content.

The AI video generator can create eight-second videos within moments of receiving a prompt. Short films, fake street interviews, sci-fi, action scenes and other creations are appearing all over social media; you may have seen them already without even knowing they were made by AI.

Google's own prompt of an "old sailor" showcases just how impressive the quality of the AI is. (Google)

Google claims its "state-of-the-art video generation model" stands out from other AI video generators because of innovations "designed for greater control". Notably, the tool now produces realistic soundscapes featuring audio and dialogue, and can be fine-tuned to keep characters consistent across different video clips. Users can also precisely adjust framing and movement. Veo 3 can even use videos of yourself as a reference point for animating facial expressions and emotions.

The tool is available through Google's paid AI plans and is accessible through its AI chatbot Gemini and its new creative tool Flow. The technology is set to cause an even greater shift within the creative industry as filmmakers experiment with it.

Some social media users noted the "uncanny" nature of the AI videos. (Google)

AI and its future use were the topic of the Australian Financial Review's AI summit held today, with Google Australia and New Zealand managing director Melanie Silva saying the technology will be critical to lifting Australia's productivity.

"Everything we know of can be faster and easier," she told the AFR summit. "If we put a productivity lens around it and think about how Australia might solve the 10-year productivity slump that we are in, this is by far one of the biggest solutions."

Google's senior vice president for research, labs, technology and society, James Manyika, said Australia would need to capitalise on the opportunities provided by AI.

"It's fundamentally important in our minds to have a very vibrant research and AI ecosystem," Manyika told the summit. "I think Australia has a great starting point."

"The CSIRO is an extraordinary entity ... but more may actually be needed."

Related Articles

Tech giants fail to tackle heinous crimes against kids

The Advertiser · 3 hours ago

Tech giants are failing to track reports of online child sexual abuse, despite figures suggesting more than 16 million photos and videos were found on their platforms.

An eSafety report has revealed that Apple, Google, Meta, Microsoft, Discord, WhatsApp, Snapchat and Skype aren't doing enough to crack down on online child sexual abuse despite repeated calls for action. It comes three years after the Australian watchdog found the platforms weren't proactively detecting stored abuse material or using measures to find live-streams of child harm.

"While there are a couple of bright spots, basically, most of the big ones are not lifting their game when it comes to the most heinous crimes against children," Commissioner Julie Inman Grant told ABC radio on Wednesday.

The latest report revealed Apple and Google's YouTube weren't tracking the number of user reports about child sexual abuse, nor could they say how long it took to respond to the allegations. The companies also didn't provide the number of trust and safety staff.

The US National Centre for Missing and Exploited Children suggests there were tip-offs about more than 18 million unique images and eight million videos of online sexual abuse in 2022.

"What worries me is when companies say, 'We can't tell you how many reports we've received' ... that's bollocks, they've got the technology," Ms Inman Grant said. "What's happening is we're seeing a winding back of content moderation and trust and safety policies and an evisceration of trust and safety teams, so they're de-investing rather than re-upping."

It comes as YouTube has been arguing against being included in a social media ban for under-16s on the basis that it is not a social media platform but rather is often used as an educational resource. The watchdog commissioner had recommended YouTube be included based on research showing children were exposed to harmful content on the platform more than on any other.

Meanwhile, other findings in the new report include that none of the giants had deployed tools to detect child sexual exploitation livestreaming on their services, three years after the watchdog first raised the alarm. A tool called hash matching, which detects copies of previously identified sexual abuse material across platforms, wasn't being used by most of the companies, and they are failing to use resources to detect grooming or sexual extortion.

There were a few positive improvements, with Discord, Microsoft and WhatsApp generally increasing their use of hash-matching tools and the number of sources informing that technology. "While we welcome these improvements, more can and should be done," Ms Inman Grant said.

The report is part of legislation passed last year that legally enforces periodic transparency notices on tech companies, meaning they must report to the watchdog every six months for two years on how they are tackling child sexual abuse material. The second report will be available in early 2026.
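For readers unfamiliar with the hash matching described above, the sketch below shows the basic idea under a deliberately simplified assumption: a database of exact cryptographic hashes of known material. Production systems (such as Microsoft's PhotoDNA) use perceptual hashes instead, and the database and function names here are illustrative only, not any platform's actual API.

```python
import hashlib

# Illustrative stand-in for a vetted industry hash database
# (real deployments query shared databases of known material,
# and store perceptual rather than cryptographic hashes).
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def file_digest(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_material(path: str) -> bool:
    """Flag a file whose digest matches the database of known hashes."""
    return file_digest(path) in KNOWN_HASHES
```

An exact hash like this only catches byte-identical copies; perceptual hashing is used in practice because it produces near-identical fingerprints for visually similar images, so matches survive resizing and re-encoding.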

Tech giants fail to tackle heinous crimes against kids

Perth Now · 4 hours ago

eSafety commissioner says YouTube turning blind eye to child abuse

ABC News · 11 hours ago

Australia's internet watchdog has accused the world's biggest social media firms of still "turning a blind eye" to online child sex abuse material on their platforms, and said YouTube in particular had been unresponsive to its enquiries.

In a report released on Wednesday, the eSafety Commissioner said YouTube, along with Apple, failed to track the number of user reports it received of child sex abuse appearing on their platforms, and also could not say how long it took them to respond to such reports.

The federal government decided last week to include YouTube in its world-first social media ban for teenagers, following the commissioner's advice to overturn a planned exemption for the video-sharing site, which is owned by Alphabet's Google.

"When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services," eSafety Commissioner Julie Inman Grant said in a statement. "No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services."

Google has said previously that abuse material has no place on its platforms and that it uses a range of industry-standard techniques to identify and remove such material. Meta, owner of Facebook, Instagram and Threads, three of the biggest platforms with more than three billion users worldwide, has said it prohibits graphic videos.

The eSafety Commissioner, an office set up to protect internet users, has mandated that Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp report on the measures they take to address child exploitation and abuse material in Australia. The report on their responses so far found a "range of safety deficiencies on their services which increases the risk that child sexual exploitation and abuse material and activity appear on the services".

Safety gaps included failures to detect and prevent live-streaming of the material or block links to known child abuse material, as well as inadequate reporting mechanisms. The report said platforms were also not using hash-matching technology on all parts of their services to identify images of child sexual abuse by checking them against a database. Google has maintained its anti-abuse measures include hash-matching technology and artificial intelligence.

The Australian regulator said some providers had not made improvements to address these safety gaps on their services despite being put on notice in previous years. "In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff," Ms Inman Grant said.

Reuters
