Everything you need to know from Google I/O 2025
Yahoo | 02-06-2025

From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours.
During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago.
But, we know what you're really here for: Product updates and new product announcements.
Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year.
We're going to break down the product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the length of a major motion picture to get them.
Before we dive in, though, here's the most shocking news out of Google I/O: the pricing of Google's new AI subscription tiers. While Google offers a base subscription at $19.99 per month, the Google AI Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.
Google tucked away what will easily be its most visible feature way too far back into the event, but we'll surface it to the top.
At Google I/O, Google announced that its new AI Mode feature for Google Search is launching today for everyone in the United States. In short, it lets users run Google searches with longer, more complex queries. Using a "query fan-out technique," AI Mode breaks a search into multiple parts, processes each part of the query, then pulls all the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.
Google announces AI Mode in Google Search Credit: Google
AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail.
In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can take the search results and present them in a visual graph when applicable.
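Google hasn't published details of the "query fan-out technique," but the general pattern (split a complex query into sub-queries, run them in parallel, then merge the results) can be sketched in a few lines. Everything below, including the naive `fan_out` heuristic and the stubbed `search` call, is a hypothetical illustration, not Google's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(query):
    """Split a complex query into simpler sub-queries.
    A real system would use a language model for this step;
    here we just treat "and"-joined clauses as separate sub-queries."""
    return [part.strip() for part in query.split(" and ") if part.strip()]

def search(sub_query):
    """Stand-in for a call to a search backend."""
    return f"results for: {sub_query}"

def ai_mode_answer(query):
    """Run each sub-query in parallel, then combine the results."""
    sub_queries = fan_out(query)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, sub_queries))
    # A real system would synthesize these with a model; we just join them.
    return " | ".join(results)

print(ai_mode_answer("best hiking trails near Denver and gear for spring weather"))
```

A production system would use a model both to decompose the query and to synthesize the gathered results into one answer; the parallel fan-out/gather shape is the part sketched here.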
According to Google, its AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base of all of Google's announcements today.
Out of all the announcements at the event, these AI shopping features seemed to spark the biggest reaction from Google I/O live attendees.
Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can simply describe the type of product they're looking for (say, a specific type of couch), and Google will present options that match that description.
Google AI Shopping Credit: Google
Google also ran a notable demo in which a presenter uploaded a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet.
The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, the user received a notification of the pricing change.
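The price-tracking behavior described above boils down to a simple watch loop: poll a price source, compare against a target, and fire a notification when the price drops. Here's a minimal sketch with stubbed-in price and notification functions (all names are illustrative; Google hasn't described the shopping agent's internals):

```python
def check_price(fetch_price, target_price, notify):
    """Poll a product's price once; notify if it dropped to or below target."""
    price = fetch_price()
    if price <= target_price:
        notify(f"Price dropped to ${price:.2f}")
        return True
    return False

# Example run with a stubbed price source and an in-memory notification channel.
prices = iter([129.99, 119.99, 89.99])  # prices seen on successive polls
alerts = []
watching = True
while watching:
    dropped = check_price(lambda: next(prices), target_price=100.00,
                          notify=alerts.append)
    watching = not dropped  # stop watching once the alert fires

print(alerts)
```

In practice the poll would be scheduled (daily, hourly) and the notification would go to the user's phone, but the compare-and-alert core is the same.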
Google said users will be able to try on different looks via AI in Google Labs starting today.
Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O. The company also unveiled a number of wearable products utilizing its AR/VR operating system, Android XR.
One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset versus an on-the-go pair of smartglasses, and has built Android XR to accommodate both.
While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time that Google revealed the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.
Project Moohan Credit: Google
In addition to the XR headset, Google announced Glasses with Android XR, smartglasses that incorporate a camera, speakers, and an in-lens display and connect with a user's smartphone. Unlike Google Glass, these smartglasses will come in more fashionable designs thanks to partnerships with Gentle Monster and Warby Parker.
Google shared that developers will be able to start building for Glasses next year, so a release date for the smartglasses will likely follow.
Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet, and showed a demo of it turning sketches into full applications. Along with that, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. Flash will be released in early June, with the updated Pro coming out soon after. Google also revealed Gemini 2.5 Pro Deep Think for complex math and coding, which will only be available to "trusted testers" at first.
Speaking of coding, Google shared its asynchronous coding agent Jules, which is currently in public beta. Developers will be able to utilize Jules in order to tackle codebase tasks and modify files.
Jules coding agent Credit: Google
Developers will also have access to a new Native Audio Output text-to-speech model which can replicate the same voice in different languages.
The Gemini app will soon see a new Agent Mode, bringing users an AI agent that can research and complete tasks based on their prompts.
Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will draw personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies. Workspace users will find the feature available in Gmail this summer.
Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI Assistant that answers queries using the context on the web page that a user is on. The latter feature is rolling out this week for Gemini subscribers in the U.S.
Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.
Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make too.
Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics. This is an area which AI models are notoriously bad at, so Imagen 4 appears to be a big step forward.
Flow AI video tool Credit: Google
A new video generation model, Veo 3, was also unveiled with a video generation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue.
Both Veo 3 and Flow are available today alongside a new generative music model called Lyria 2.
Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.
Another big announcement out of Google I/O: Project Starline is no more.
Google's immersive communication project will now be known as Google Beam, an AI-first communication platform.
As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. AI will be able to match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.
Google Meet translations Credit: Google
Google also had another work-in-progress project to tease under Google Beam: A 3-D conferencing platform that uses multiple cameras to capture a user from different angles in order to render the individual on a 3-D light-field display.
While Project Starline may have undergone a name change, it appears Project Astra is still alive and kicking at Google, at least for now.
Project Astra is Google's real-world universal AI assistant and Google had plenty to announce as part of it.
Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera and the AI assistant will be able to answer queries based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.
Gemini Live Credit: Google
It appears Google has plans to implement Project Astra's live AI capabilities into Google Search's AI Mode as a Google Lens visual search enhancement.
Google also highlighted some of its hopes for Gemini Live, such as being able to help as an accessibility tool for those with disabilities.
Another of Google's AI projects, Project Mariner, is an AI agent that can interact with the web to complete tasks for the user.
While Project Mariner was first announced late last year, Google had some updates, such as a multitasking feature that allows an AI agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which gives the agent the ability to learn from previously completed tasks and handle similar ones without needing the same detailed direction in the future.
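Running up to 10 agent tasks at once is essentially a bounded-concurrency problem. Here's a minimal sketch using Python's `asyncio` with a semaphore cap; the task bodies and the cap's enforcement mechanism are stand-ins, not Project Mariner's actual architecture:

```python
import asyncio

MAX_PARALLEL_TASKS = 10  # Mariner reportedly works on up to 10 tasks at once

async def run_task(name, limiter):
    """Stand-in for one agent task (research, form-filling, booking, etc.)."""
    async with limiter:  # at most MAX_PARALLEL_TASKS run concurrently
        await asyncio.sleep(0)  # placeholder for real work
        return f"done: {name}"

async def run_all(task_names):
    limiter = asyncio.Semaphore(MAX_PARALLEL_TASKS)
    # gather preserves input order, so results line up with task_names
    return await asyncio.gather(*(run_task(n, limiter) for n in task_names))

results = asyncio.run(run_all([f"task-{i}" for i in range(12)]))
print(results[:2])
```

With 12 tasks queued against a cap of 10, the last two simply wait for a slot; the semaphore is what keeps the agent from exceeding its concurrency budget.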
Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.
