
Latest news with #ProjectAstra

How to try out Google's stunning Veo 3 AI movie maker for free

Yahoo

6 days ago

  • Business
  • Yahoo


If you purchase an independently reviewed product or service through a link on our website, BGR may receive an affiliate commission.

Google had several exciting AI announcements at I/O 2025, including new Project Astra features coming to Gemini Live, a new AI Mode for Google Search that will change the way you shop, and hardware demos of the first AR/AI smart glasses it plans to ship to consumers later this year. But there's no denying that Veo 3 (along with Flow) was one of the most exciting things Google unveiled this year.

Veo 3 is Google's next-gen AI video generation model, and it's a massive upgrade over its predecessor. Veo 3 can produce ambient noise, sound effects, and even dialogue for your AI-generated movies. All you need to do is enter a text prompt that includes all the details, and Veo 3 will generate the desired results. Veo 3 will sync spoken dialogue to the clip and will let you preserve characters from one scene to the next. Veo 3 went viral after I/O 2025, and we saw a few amazing videos and short movies created with it. An actual movie playing in theaters will even incorporate shots made with Veo 3.

The biggest problem with Veo 3 is that it's expensive to access. You need a $249.99/month Google AI Ultra subscription to make the most of it. You can also gain access to Veo 3 from various third-party services, including Adobe's Firefly, Freepik, and LTX Studio, but you'll still have to pay a subscription fee. A more cost-effective alternative, coming June 27, is to go to Google's all-in-one AI cloud tool, Vertex AI, and use Veo 3 in public preview. This will let you try Veo 3 before you determine whether you need this AI video generator for personal use or work.
Google said in a blog post that all Google Cloud customers and partners can access Veo 3 in public preview on Vertex AI. According to the company, Veo 3 brings three features to AI video generation in Vertex AI:

  • Fluid, natural videos that synchronize video with audio and dialogue. Veo 3 can synchronize your audio and visuals in a single pass. The model produces rich soundscapes containing everything from dialogue and ambient noise to sound effects and background music.
  • Cinematic video that captures creative nuances. Veo 3 makes it easy to capture creative nuances and detailed scene interactions in your prompt, from the shade of the sky to the precise way the sun hits water in the afternoon light, and produces high-definition video.
  • Realistic movement that simulates real-world physics. To create believable scenes, Veo 3 simulates real-world physics. This results in realistic water movement, accurate shadows cast by objects and characters, and natural human motion.

Google also posted a few Veo 3 samples, including the short clip it first showed at I/O 2025. Google shared additional Veo 3 clips made via third-party tools like Freepik and LTX Studio, alongside comments from these companies, which are working with Google Cloud to create AI videos for various purposes. One sample was made via the Freepik AI Video Generator, another via LTX Studio, and a final clip shows a brand video created with Gemini, Imagen, and Veo 3.

To try Veo 3 in Vertex AI, all you need is a Google Cloud account.

Meet Google Martha, the company's Android XR smart glasses prototype

Android Authority

23-06-2025

  • Android Authority


TL;DR
  • Google demoed its Android XR smart glasses prototype at Google I/O 2025.
  • We now learn that this Android XR prototype is called 'Google Martha.'
  • Its companion app handles connected features like notifications, settings access, video recording from the user's perspective, and more.

After over a year of teasing with Project Astra, Google showed off its Android XR glasses on stage at Google I/O 2025. My colleague C. Scott Brown even got to try them on, and he was impressed with the demo. Since these are prototype glasses and not meant for retail sale, there's not a lot of information on them, but we've now spotted their codename. Meet Google Martha, Google's name for its smart glasses prototype.

App developer Sayed Ali Alkamel shared a photo of the companion app of the Android XR prototype glasses (h/t SERoundtable), which shows off a few settings and features of the connected smart glasses. I've rotated the image and edited the perspective to give us a better look at what's on the phone. As we can see, the connected Android XR smart glasses prototype is called 'Google Martha.' The companion app has entries for Notifications and Settings, but unfortunately, we don't get to see the entries within. The app also has a Record your view entry, letting the wearer capture a video of their view and the glasses' UI. There are also entries for feedback and reporting a problem.

From Google I/O 2025, we know these prototype smart glasses run on the Android XR platform, opening up several Gemini-oriented use cases, such as real-world identification and querying, live translation, and more. Google Martha has a screen in only the right lens by design, though other smart glasses can have a dual-lens screen, or even none at all and rely only on audio. If you want to get your hands on Google Martha, you will likely be disappointed.
A report from earlier in the year noted that Google and Samsung were jointly developing Android XR glasses seemingly scheduled for consumer release in 2026, but Google did not confirm these plans at Google I/O when it showed off Google Martha. This pair of smart glasses is unlikely to reach consumers since it's just a prototype, but the door is open for future smart glasses based on Martha to eventually become available. Until then, you can look forward to XReal's Project Aura or even Samsung's Project Moohan.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.

Google debuts interactive charts in AI Mode to help make you a finance whiz

Android Authority

06-06-2025

  • Business
  • Android Authority


TL;DR
  • Google has started testing a new AI Mode feature that generates interactive graphs for finance queries.
  • The feature makes it easy to compare and analyze information over time.
  • Interactive graphs in AI Mode are currently available as a Search Labs experiment.

Google debuted Search's AI Mode as a Labs experiment for select users earlier this year. At I/O, the company expanded availability to all US users, added Deep Search and Project Astra capabilities to the feature, and previewed some upcoming features, including AI Mode's ability to generate interactive graphics for complex datasets. Google has now started testing this feature for finance-related queries.

Google announced the new feature in a recent blog post, highlighting how AI Mode can help users compare and analyze financial information over a specific period with a custom-made interactive graph based on their query. For example, when asked to 'compare the stock performance of blue chip CPG companies in 2024,' AI Mode generates a stock price comparison graph and a table that dynamically updates the stock prices when you interact with the graph. Users can also ask follow-up questions to get additional information or refine their queries.

Google says the feature uses 'advanced models to understand the intent of the question, tap into real-time and historical information and intelligently determine how to present information to help you make sense of it.' At the moment, the feature only works for finance queries related to stocks and mutual funds, but Google will expand support to other topics in the future. If you wish to try it out, you can enable the experiment in Search Labs, but note that it's only available in the US.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.

Google begins testing 'Search Live' in AI Mode: Here is what it can do

Business Standard

06-06-2025

  • Business Standard


Google has started testing a new voice-powered feature called 'Search Live' within its AI Mode for Search. Initially previewed at Google I/O 2025, the feature is now rolling out to select users in the US through the Google app on Android and iOS. Unlike Gemini's assistant-specific interface, Search Live is integrated directly into Google Search, offering users a new way to interact with information through natural, real-time conversation.

What is Search Live?

Search Live allows users to speak their queries and receive spoken responses from Google. Instead of typing a search and sifting through links, users can ask questions out loud and get collated answers read back to them. The feature also allows follow-up questions in a conversational flow, enhancing the experience of natural language interaction. If users prefer, they can mute the audio and read the response as a transcript. The tool is powered by Project Astra, which processes spoken input in real time, acting like a smart assistant built into Search. Live in the Gemini app is another feature powered by Project Astra.

When available, users will see a sparkle-badged waveform icon under the Search bar. Tapping it activates the voice interaction feature, offering four voice settings: Cosmo, Neso, Terra, and Cassini. The addition replaces the previous Google Lens shortcut that used to open the gallery.

What is Google AI Mode?

AI Mode is Google's reimagined Search experience. It transforms the traditional query-and-results model into a dynamic conversation. Instead of scanning through blue links, users get direct answers, summaries, visual aids, and follow-up suggestions. AI Mode listens to entire questions, understands context, and provides relevant, concise information. Over time, Google plans to expand AI Mode to support live camera feeds, making Search even more interactive, though that feature is not yet live.

Gemini Live camera and screen sharing rolling out to all users: Here's how it works

Hindustan Times

02-06-2025

  • Hindustan Times


Google's popular Gemini Live camera and screen sharing feature is officially rolling out to all Android and iOS users, bringing advanced AI capabilities at the touch of a button. The feature was first teased as part of Project Astra back at Google I/O 2024. It was later rolled out to Gemini Advanced users on the Pixel 9 and Samsung Galaxy S25 series. While it has been available to several Android users since mid-April, Google confirmed the Gemini Live camera and screen sharing capabilities for iOS at its recent developer event. Now, the feature has started to roll out to all users, including free-tier Gemini users. If you want to use these AI capabilities, here's how the new Gemini Live feature works on smartphones.

Gemini Live was introduced last year as a conversational model with an audio-based interface. Now, Google has expanded its capabilities to share the camera or screen within a Gemini Live conversation. With these features, users can simply point the camera at an object, place, or monument and ask questions, and Gemini will analyse the object to provide relevant responses. Google also says that 'Gemini will provide real-time feedback based on the new skill you're learning or task you're completing.' In April 2025, the feature was rolled out to several Android devices, and it's now also reaching iOS users. Additionally, Google has expanded access from subscription users to free-tier users, though we are yet to learn about any usage limitations.

Step 1: Open the Gemini app on your Android or iOS device.
Step 2: Tap the Live waveform icon in the right corner to activate Gemini Live.
Step 3: Once activated, you'll find the camera and screen share options at the bottom left.

This way, users can interact with Gemini Live via the camera or by sharing their screen to resolve queries and ask related questions. As of now, the rollout may not have reached all iOS devices, but we expect it to be completed soon.
Google is also expanding the Gemini Live feature to other apps such as Calendar, Keep, Maps, and Tasks. This integration is expected to roll out in the upcoming weeks.
