Google I/O 2025 recap: AI updates, Android XR, Google Beam and everything else announced at the annual keynote
The bulk of the Android news was revealed last week, during a special edition of The Android Show. However, Tuesday's keynote still included a ton of stuff, including, of course, a pile of AI-related news. We covered the event in real time in our live blog, which includes expert commentary (and even some jokes!) from our team.
If you're on the hunt for a breakdown of everything Google announced at the I/O keynote, though, look no further. Here are all the juicy details worth knowing about:
Quelle surprise: Google is continuing to shove more generative AI features into its core products. AI Mode, the company's new chatbot, will soon be live in Search for all US users.
AI Mode is in a separate tab and it's designed to handle more complex queries than people have historically used Search for. You might use it to compare different fitness trackers or find the most affordable tickets for an upcoming event. AI Mode will soon be able to whip up custom charts and graphics related to your specific queries too. It can also handle follow-up questions.
The chatbot now runs on Gemini 2.5. Google plans to bring some of its features into the core Search experience by injecting them into AI Overviews. Labs users will be the first to get access to the new features before Google rolls them out more broadly.
Meanwhile, AI Mode is powering some new shopping features. You'll soon be able to upload a single picture of yourself to see what a piece of clothing might look like on a virtual version of you.
Also, much like Google Flights keeps an eye out for price drops, Google will be able to let you know when an item you want (in a specific size and color) goes on sale for a price you're willing to pay. It can even complete the purchase on your behalf if you want.
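Google hasn't said how this price-tracking agent works under the hood, but the behavior it describes reduces to a simple watch loop. Here's a minimal sketch in Python; every helper, name and value in it is a hypothetical stand-in, not anything Google has published.

```python
# Minimal sketch of the price-watch flow described above. Purely
# illustrative: the helpers and values here are hypothetical stand-ins.
import random
import time

PRICE_THRESHOLD = 79.99  # the price the shopper said they're willing to pay
WATCHED_ITEM = {"name": "running shoes", "size": "8", "color": "blue"}

def fetch_current_price(item: dict) -> float:
    """Hypothetical lookup of the live price for a specific size/color.
    Simulated with a random number so the sketch actually runs."""
    return round(random.uniform(70.0, 110.0), 2)

def notify_and_offer_checkout(item: dict, price: float) -> None:
    """Hypothetical: alert the user, who can approve an agentic purchase."""
    print(f"{item['name']} ({item['size']}, {item['color']}) "
          f"dropped to ${price:.2f}. Buy now?")

while True:
    price = fetch_current_price(WATCHED_ITEM)
    if price <= PRICE_THRESHOLD:
        notify_and_offer_checkout(WATCHED_ITEM, price)
        break
    time.sleep(1)  # a real watcher would poll far less often, e.g. hourly
```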
AI Overviews, the Gemini-powered summaries that appear at the top of search results and have been buggy to say the least, are seen by more than 1.5 billion folks every month, according to Google. The "overwhelming majority" of people interact with these in a meaningful way, the company said — this could mean clicking on something in an overview or keeping it on their screen for a while (presumably to read through it).
Still, not everyone likes AI Overviews; some would rather just have a list of links to the information they're looking for. You know, like Search used to be. As it happens, there are some easy ways to declutter the results.
We got our first peek at Project Astra, Google's vision for a universal AI assistant, at I/O last year and the company provided more details this time around. A demo showed Astra carrying out a number of actions to help fix a mountain bike, including diving into your emails to find out the bike's specs, researching information on the web and calling a local shop to ask about a replacement part.
It already feels like a culmination of Google's work in the AI assistant and agent space, though elements of Astra (such as granting it access to Gmail) might feel too intrusive for some. In any case, Google aims to transform Gemini into a universal AI assistant that can handle everyday tasks. The Astra demo is our clearest look yet at what that might look like in action.
Gemini 2.5 is here with (according to Google) improved functionality, upgraded security and transparency, extra control and better cost efficiency. Gemini 2.5 Pro is bolstered by a new enhanced reasoning mode called Deep Think. The model can do things like turn a grid of photos into a 3D sphere of pictures, then add narration for each image. Gemini 2.5's text-to-speech feature can also change up languages on the fly. There's much more to it than that, of course, and we've got more details in our Gemini 2.5 story.
You know those smart replies in Gmail that let you quickly respond to an email with an acknowledgement? Google is now going to offer personalized versions of those so that they better match your writing style. For this to work, Gemini looks at your emails and Drive documents. Gemini will need your permission before it plunders your personal information. Subscribers will be able to use this feature in Gmail starting this summer.
Google Meet is getting a real-time translation option, which should come in very useful for some folks. A demo showed Meet being able to match the speaker's tone and cadence while translating from Spanish to English.
Subscribers on the Google AI Pro and Ultra (more on that momentarily) plans will be able to try out real-time translations between Spanish and English in beta starting this week. This feature will soon be available for other languages.
Gemini Live, a tool Google brought to Pixel phones last month, is coming to all compatible Android and iOS devices via the Gemini app (which already has more than 400 million monthly active users), with the rollout starting today. It allows you to ask Gemini questions about screenshots, as well as about live video that your phone's camera is capturing.
Google Search Live is a similar-sounding feature. You'll be able to have a "conversation" with Search about what your phone's camera can see. This will be accessible through Google Lens and AI Mode.
A new filmmaking app called Flow, which builds on VideoFX, includes features such as camera movement and perspective controls; options to edit and extend existing shots; and a way to fold AI video content generated with Google's Veo model into projects. Flow is available to Google AI Pro and Ultra subscribers in the US starting today. Google will expand availability to other markets soon.
Speaking of Veo, that's getting an update. The latest version, Veo 3, is the first iteration that can generate videos with sound (it probably can't add any soul or actual meaning to the footage, though). The company also suggests that its Imagen 4 model is better at generating photorealistic images and handling fine details like fabrics and fur than earlier versions.
Handily, Google has designed a tool to help you determine whether a piece of content was generated using its AI tools. It's called SynthID Detector — naturally, it's named after SynthID, the tool that applies digital watermarks to AI-generated material.
According to Google, SynthID Detector can scan an image, piece of audio, video or text for the SynthID watermark and let you know which parts are likely to have a watermark. Early testers will be able to try this out starting today. Google has opened up a waitlist for researchers and media professionals. (Gen AI companies should offer educators a version of this tech ASAP.)
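SynthID Detector is waitlist-only and has no public API, so any code here is necessarily invented. Still, the workflow Google describes (submit media, get back which parts likely carry a watermark) can be sketched; every name below is a hypothetical placeholder.

```python
# Hypothetical sketch only: SynthID Detector has no public API, so these
# names are invented placeholders that mirror the described workflow.
from dataclasses import dataclass

@dataclass
class SegmentResult:
    start: float            # segment start (seconds, or a char offset for text)
    end: float              # segment end
    watermark_score: float  # 0-1 likelihood a SynthID watermark is present

def detect_synthid(media: bytes, media_type: str) -> list[SegmentResult]:
    """Hypothetical detector call; returns canned results for the demo."""
    return [SegmentResult(0.0, 4.5, 0.93), SegmentResult(4.5, 9.0, 0.12)]

for seg in detect_synthid(b"...", "audio"):
    verdict = "likely watermarked" if seg.watermark_score > 0.5 else "likely clean"
    print(f"{seg.start:.1f}-{seg.end:.1f}: {verdict} ({seg.watermark_score:.2f})")
```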
To get access to all of its AI features, Google wants you to pay $250 every month for its new AI Ultra plan. There's really no way to react to this other than "LOL. LMAO." I rarely use either of those acronyms, which highlights just how absurd this is. What are we even doing here? That's obscenely expensive.
Anyway, this plan includes early access to the company's latest tools and unlimited use of features that are costly for Google to run, such as Deep Research. It comes with 30TB of storage across Google Photos, Drive and Gmail. You'll get YouTube Premium as well — arguably the Google product that's most worth paying for.
Google is offering new subscribers 50 percent off an AI Ultra subscription for the first three months. Woohoo. In addition, the AI Premium plan is now known as Google AI Pro.
As promised during last week's edition of The Android Show, Google offered another look at Android XR. This is the platform the company is building in the hope of doing for augmented reality, mixed reality and virtual reality what Android did for smartphones. After its previous efforts in those spaces, Google is now playing catch-up to the likes of Meta and Apple.
The initial Android XR demo at I/O didn't offer much to get excited about just yet. It showed off features like a mini Google Map you can access on a built-in display and a way to view 360-degree immersive videos. We're still waiting for actual hardware that can run this stuff.
As it happens, Google revealed the second Android XR device: Project Aura, a pair of tethered smart glasses that Xreal is working on. We'll have to wait a bit longer for more details on Google's own Android XR headset, which it's collaborating on with Samsung. That's slated to arrive later this year.
A second demo of Android XR was much more interesting. Google showed off a live translation feature for Android XR with a smart glasses prototype that the company built with Samsung. That seems genuinely useful, as do many of the accessibility-minded applications of AI. Gentle Monster and Warby Parker are making smart glasses with Android XR too. Just don't call it Google Glass (or do, I'm not your dad).
Google is giving the Chrome password manager a very useful weapon against hackers. It will be able to automatically change passwords on accounts that have been compromised in data breaches. So if a website, app or company is infiltrated, user data is leaked and Google detects the breach, the password manager will let you generate a new password and update a compatible account with a single click.
The main sticking point here is that it only works with websites that are participating in the program. Google's working with developers to add support for this feature. Still, making it easier for people to lock down their accounts is a definite plus. (And you should absolutely be using a password manager if you aren't already.)
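Google hasn't spelled out exactly how sites "participate," but a likely building block already exists on the open web: the W3C's well-known URL for changing passwords, which lets a password manager jump a user (or an automated flow) straight to a site's change-password page. A minimal sketch of a site supporting it, using Flask as an arbitrary choice:

```python
# Minimal sketch: supporting the W3C "well-known URL for changing
# passwords" so password managers can deep-link users straight to the
# change-password page. Flask is an arbitrary choice here, not anything
# Google has specified for its program.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/.well-known/change-password")
def well_known_change_password():
    # Password managers request this path; redirect to the real form.
    return redirect("/settings/password", code=302)

@app.route("/settings/password")
def change_password_form():
    return "Change your password here."

if __name__ == "__main__":
    app.run()
```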
On the subject of Chrome, Google is stuffing Gemini into the browser as well. The AI assistant will be able to answer questions about the tabs you have open. You'll be able to access it from the taskbar and a new menu at the top of the browser window.
It's been a few years since we first heard about Project Starline, a 3D video conferencing project. We tried this tech out at I/O 2023 and found it to be an enjoyable experience.
Now, Google is starting to sell this tech, but only to enterprise customers (i.e. big companies) for now. It's got a new name for all of this too: Google Beam. And it's probably not going to be cheap. HP will reveal more details in a few weeks.

Related Articles


Business of Fashion
AI Shopping Is Here. Will Retailers Get Left Behind?
AI doesn't care about your beautiful website. Visit any fashion brand's homepage and you'll see all sorts of dynamic or interactive elements, from image carousels to dropdown menus, designed to catch shoppers' eyes and ease navigation. To the large language models that underlie ChatGPT and other generative AI, many of these features might as well not exist. They're often written in the programming language JavaScript, which, for the moment at least, most AI struggles to read.

This giant blind spot didn't matter when generative AI was mostly used to write emails and cheat on homework. But a growing number of startups and tech giants are deploying this technology to help users shop — or even make the purchase themselves.

"A lot of your site might actually be invisible to an LLM from the jump," said A.J. Ghergich, global vice president of Botify, an AI optimisation company that helps brands from Christian Louboutin to Levi's make sure their products are visible to and shoppable by AI.

The vast majority of visitors to brands' websites are still human, but that's changing fast. US retailers saw a 1,200 percent jump in visits from generative AI sources between July 2024 and February 2025, according to Adobe Analytics. Salesforce predicts AI platforms and AI agents will drive $260 billion in global online sales this holiday season.

Those agents, launched by AI players such as OpenAI and Perplexity, are capable of performing tasks on their own, including navigating to a retailer's site, adding an item to cart and completing the checkout process on behalf of a shopper. Google's recently introduced agent will automatically buy a product when it drops to a price the user sets.

This form of shopping is very much in its infancy; the AI shopping agents available still tend to be clumsy. Long term, however, many technologists envision a future where much of the activity online is driven by AI, whether that's consumers discovering products or agents completing transactions. To prepare, businesses from retail behemoth Walmart to luxury fashion labels are reconsidering everything from how they design their websites to how they handle payments and advertise online as they try to catch the eye of AI and not just humans.

"It's in every single conversation I'm having right now," said Caila Schwartz, director of consumer insights and strategy at Salesforce, which powers the e-commerce of a number of retailers, during a roundtable for press in June. "It is what everyone wants to talk about, and everyone's trying to figure out and ask [about] and understand and build for."

From SEO to GEO and AEO

As AI joins humans in shopping online, businesses are pivoting from SEO — search engine optimisation, or ensuring products show up at the top of a Google query — to generative engine optimisation (GEO) or answer engine optimisation (AEO), where catching the attention of an AI responding to a user's request is the goal.

That's easier said than done, particularly since it's not always clear even to the AI companies themselves how their tools rank products, as Perplexity's chief executive, Aravind Srinivas, admitted to Fortune last year. AI platforms ingest vast amounts of data from across the internet to produce their results.

Still, there are indications of what attracts their notice. Products with rich, well-structured content attached tend to have an advantage, as do those that are the frequent subject of conversation and reviews online.
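One concrete form that "rich, well-structured content" takes is schema.org product markup, which machines can parse without executing any JavaScript. A minimal sketch, with invented product details:

```python
# Minimal sketch of schema.org Product markup, the kind of structured
# data that machines (search engines and, increasingly, LLM crawlers)
# can parse reliably. Product details are invented for illustration.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Stride & Conquer Power Shoes",
    "description": "Men's blue and orange running shoes, size 8.",
    "sku": "SC-PWR-8-BLU",
    "offers": {
        "@type": "Offer",
        "price": "129.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed in a page as <script type="application/ld+json">...</script>
print(json.dumps(product_jsonld, indent=2))
```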
"Brands might want to invest more in developing robust customer-review programmes and using influencer marketing — even at the micro-influencer level — to generate more content and discussion that will then be picked up by the LLMs," said Sky Canaves, a principal analyst at Emarketer focusing on fashion, beauty and luxury.

Ghergich pointed out that brands should be diligent with their product feeds into programmes such as Google's Merchant Center, where retailers upload product data to ensure their items appear in Google's search and shopping results. These types of feeds are full of structured data, including product names and descriptions, meant to be picked up by machines so they can direct shoppers to the right items. One example from Google reads: Stride & Conquer: Original Google Men's Blue & Orange Power Shoes (Size 8). Ghergich said AI will often read this data before other sources such as the HTML on a brand's website. These feeds can also be vital for making sure the AI is pulling pricing data that's up to date, or as close as possible.

As more consumers turn to AI and agents, however, it could change the very nature of online marketing, a scenario that would shake even Google's advertising empire. Tactics that work on humans, like promoted posts with flashy visuals, could be ineffective for catching AI's notice. It would force a redistribution of how retailers spend their ad budgets. Emarketer forecasts that spending on traditional search ads in the US will see slower growth in the years ahead, while a larger share of ad budgets will go towards AI search. OpenAI, whose CEO, Sam Altman, has voiced his distaste for ads in the past, has also acknowledged exploring ads on its platform as it looks for new revenue streams.

"The big challenge for brands with advertising is then how to show up in front of consumers when traditional ad formats are being circumvented by AI agents, when consumers are not looking at advertisements because agents are playing a bigger role," said Canaves.

Bots Are Good Now

Retailers face another set of issues if consumers start turning to agents to handle purchases. On the one hand, agents could be great for reducing the friction that often causes consumers to abandon their carts. Rather than going through the checkout process themselves and stumbling over any annoyances, shoppers just tell the agent to do it and off it goes.

But most websites aren't designed for bots to make purchases — exactly the opposite, in fact. Bad actors have historically used bots to snatch up products from sneakers to concert tickets before other shoppers can buy them, frequently to flip them for a profit. For many retailers, they're a nuisance.

"A lot of time and effort has been spent to keep machines out," said Rubail Birwadker, senior vice president and global head of growth at Visa.

If a site has reason to believe a bot is behind a transaction — say it completes forms too fast — it could block it. The retailer doesn't make the sale, and the customer is left with a frustrating experience.

Payment players are working to create methods that will allow verified agents to check out on behalf of a consumer without compromising security. In April, Visa launched a programme focused on enabling AI-driven shopping called Intelligent Commerce. It uses a mix of credential verification (similar to setting up Apple Pay) and biometrics to ensure shoppers are able to check out while preventing opportunities for fraud.
"We are going out and working with these providers to say, 'Hey, we would like to … make it easy for you to know what's a good, white-list bot versus a non-whitelist bot,'" Birwadker said.

Of course, the bot has to make it to checkout. AI agents can stumble over other common elements in webpages, like login fields. It may be some time before all those issues are resolved and they can seamlessly complete any purchase.

Consumers have to get on board as well. So far, few appear to be rushing to use agents for their shopping, though that could change. In March, Salesforce published the results of a global survey that polled different age groups on their interest in various use cases for AI agents. Interest in using agents to buy products rose with each subsequent generation, with 63 percent of Gen-Z respondents saying they were interested.

Canaves of Emarketer pointed out that younger generations are already using AI regularly for school and work. Shopping with AI may not be their first impulse, but because the behaviour is already ingrained in their daily lives in other ways, it's spilling over into how they find and buy products.

More consumers are starting their shopping journeys on AI platforms, too, and Schwartz of Salesforce noted that over time this could shape their expectations of the internet more broadly, the way Google and Amazon did.

"It just feels inevitable that we are going to see a much more consistent amount of commerce transactions originate and, ultimately, natively happen on these AI agentic platforms," said Birwadker.


Tom's Guide
I tested the AI transcription tools for iPhone vs Samsung Galaxy vs Google Pixel — here's the winner
This article is part of our AI Phone Face-Off. If you're interested in our other comparisons, check out the links below.

Long before AI was a buzzword included in every handset's marketing material, a few lucky phones already offered automatic transcripts of voice recordings. But the arrival of on-device AI has extended that feature to more phones and more apps, including the Phone app itself, while also adding auto-generated summary features to the mix.

All three of the major smartphone makers — Apple, Google and Samsung — offer some type of voice recording app on their flagship phones with real-time transcription as part of the feature set. Those phones now record and transcribe phone calls, too. And summary tools that tap into AI to produce recaps of conversations, articles, recordings and more have become commonly available on iPhones, Pixels and Galaxy S devices alike.

But which phone offers the most complete set of transcription and summarization tools? To find out, I took an iPhone 15 Pro, Pixel 9 and Galaxy S25 Plus loaded with the latest available version of their respective operating systems, and put each device through a series of tests. If you need a phone that can turn your speech into text or cut through a lengthy recording to bring you the highlights, here's which phone is most up to the job.

I wrote out a scripted phone call, handed one copy to my wife and then scurried outside to call her three separate times from the iPhone, Pixel and Galaxy S device. By scripting out our conversation, we could see which on-board AI provided a more accurate transcript. And after each call, I took a look at the AI-generated summary to see if it accurately followed our discussion of rental properties in the San Francisco Bay Area.

The iPhone's transcript was the most muddled of the three, with more instances of incorrect words and a lack of proper punctuation. The biggest misstep, though, was mixed-up words that my wife and I had said, as if we had been talking over each other. (We had not.) Because I was calling someone in my Contacts, though, the iPhone did helpfully add names to each speaker — a nice touch.

The transcripts from the Pixel 9 and Galaxy S25 Plus were equally accurate when compared to each other. Samsung displays its transcripts as if you're looking at a chat, with different text bubbles representing each speaker. Google's approach is to label the conversation with "you" and "the speaker." I prefer the look of Google's transcript, though I appreciate that when my wife and I talked expenses, Galaxy AI successfully put that in dollar amounts. Google's Gemini just used numbers without dollar designations.

As for the summaries, the one provided by the iPhone accurately summed up the information I requested from my wife. The Galaxy AI summary was accurate, too, but left out the budget amount, which was one of the key points of our discussion. Google's summary hit the key points — the budget, the dates and who was going on the trip — and also put the summary in second person ("You called to ask about a rental property…"). I found that to be a personal touch that put Google's summary over the top.

I will point out that the iPhone and Galaxy S25 Plus summaries appeared nearly instantly after the call. It took a bit for the Pixel 9 to generate its summary — not a deal-breaker, but something to be aware of.

Winner: Google — The Pixel 9 gave me one of the more accurate transcripts in a pleasing format, and it personalized a summary while highlighting the key points of the conversation.
I launched the built-in recording apps on each phone all at the same time so that they could simultaneously record me reading the Gettysburg Address. By using a single recording, I figured I could better judge which phone had the more accurate transcript before testing the AI-generated summary.

The transcript from Samsung's Voice Recorder app suffered from some haphazard capitalization and oddly inserted commas that would require a lot of clean-up time if you need to share the transcript. Google Recorder had the same issue and, based on the transcript, seemed to think that two people were talking. The iPhone's Voice Memos app had the cleanest transcript of the three, though it did have a handful of incorrectly transcribed words.

All three recording apps had issues with me saying "nobly advanced," with the Galaxy S25 Plus thinking I had said "nobleek, advanced" and the iPhone printing that passage as "no league advanced." Still, the iPhone transcript had the fewest instances of misheard words.

As for summaries, the Galaxy AI-generated version was fairly terse, with just three bullet points. Both the Pixel and the iPhone recognized my speech as the Gettysburg Address and delivered accurate summaries of the key points. While getting a summary from the iPhone takes some doing — you have to share your recording with the iOS Notes app and use the summary tool there — I preferred how concise its version was compared to what the Gemini AI produced for the Pixel.

Winner: Apple — Not only did the iPhone have the best-looking transcript of the three phones, its summary was also accurate and concise. That said, the Pixel was a close second with its summarization feature, and would have won this category had it not heard those phantom speakers when transcribing the audio.

Why keep testing the transcription feature when we've already put the recording apps through their paces? Because there could come a time when you need to record a meeting where multiple people are talking, and you'll want a transcript that recognizes that.

You may be in for a disappointing experience, if the transcripts of me and my wife recreating the Black Knight scene from "Monty Python and the Holy Grail" are anything to go by. Both the Galaxy and Pixel phones had problems recognizing who was speaking, with one speaker's words bleeding into the next. The Pixel 9 had more than its share of problems here, sometimes attributing an entire line to the wrong speaker.

The Galaxy had more incorrectly transcribed words, with phrases like "worthy adversary" and "I've had worse" becoming "where the adversary is" and "5 had worse," respectively. The Pixel had a few shockers of its own, but its biggest issue remained the overlapping dialogue. At least those phones recognized that two people were talking. Apple Intelligence's transcript ran everything together, so if you're working off that recording, you've got a lot of editing in your future.

With this test, I was less interested in the summarization features, though the Pixel did provide the most accurate one, recognizing that the dialogue was "reminiscent" of "Monty Python and the Holy Grail." The Galaxy AI-generated summary correctly deduced that the Black Knight is a stubborn person who ignores his injuries, but wrongly concluded that both speakers had agreed the fight was a draw. The iPhone issued a warning that the summarization tool wasn't designed for an exchange like this and then went on to prove it with a discombobulated summary in which the Black Knight apparently fought himself.
Winner: Samsung — Galaxy AI had easier-to-correct errors, with speakers' lines bleeding into each other. The Gemini transcript was more of a mess, but the summary nearly salvaged this test for Google.

Of all the promised benefits of AI on phones, few excite me more than the prospect of a tool that can read through email chains and surface the relevant details so that I don't have to pick through each individual message. And much to my delight, two of the three phones I've tested stand out in this area.

I'm sad to say it isn't the Galaxy S25 Plus. I found the feature a bit clunky to access, as I had to use the built-in Internet app to go to the web version of Gmail to summarize an exchange between me and two friends where we settled on when and where to meet for lunch. Galaxy AI's subsequent summary included the participants and what we were talking about, but it failed to mention the date and location we agreed upon.

Both the Pixel and the iPhone fared much better. Gemini AI correctly listed the date, time and location of where we were going to meet for lunch. It even spotted a follow-up email I had sent en route warning the others that I was running late. Apple Intelligence also got this feature right in the iPhone's built-in Mail app.

I think the Pixel has the better implementation, as getting a summary simply requires you to tap the Gemini button for all the key points to appear in a window. iOS Mail's summary feature lives at the top of the email conversation, so you've got to scroll all the way up to access your summary.

Winner: Google — The Pixel and the iPhone summarized the message chain equally well, but Google's implementation is a lot easier to access.

In theory, a summary tool for web pages would help you get the key points of an article quickly. The concern, though, is that the summary proves to be superficial or, even worse, not thorough enough to recognize all the key points. So how do you know how accurate the summary is? To find out, I figured I'd run one of my own articles through the summary features of each phone — this article about the push to move iPhone manufacturing to the U.S., specifically. I mean, I know what I wrote, so I should be in a good position to judge if the respective summary features truly got the gist of it.

Galaxy AI did, sort of, with its summary consisting of two broadly correct points: that the Trump administration wants to move phone manufacturing to the U.S. and that high labor costs and global supply chain automation are the big roadblocks. That's not horribly inaccurate, but it is incomplete, as the article talked more about the lack of dedicated assembly plants and equipment in the U.S.

The iPhone's summary — appearing as a tappable option in the menu bar of Safari — was a little bit more detailed on the key roadblock, while also noting the potential for rising prices of U.S.-built phones. However, the summary provided via Gemini AI is far and away the most substantive. It specifically calls out a push for reshoring, notes what Apple already produces in the U.S., and highlights multiple bullet points on the difficulties of U.S. phone manufacturing.

Winner: Google — Summaries don't always benefit from being brief, and the Gemini AI-generated summation of my article hits key points without sacrificing critical details and explanations. You can read that summary and skip my article — please don't, it would hurt my feelings — and still get a good grip on what I had written.
Sometimes, notes can be so hastily jotted down that you might have a hard time making sense of them. An ideal AI summary tool would be able to sort through those thoughts and produce a good overview of the ideas you were hoping to capture.

If you remember from our AI Writing Tools test, I had some notes on the new features in iOS 26 that I used to try out auto-formatting features provided by each phone's on-device AI. This time around, I tried out the summary features and found them to be generally OK, with one real standout.

Both Galaxy AI and Apple Intelligence turned out decent summaries. When I selected the Key Points option in Writing Tools for iOS Notes, the iPhone featured a good general summation of changes in iOS 26, with particular attention paid to the Safari and FaceTime enhancements. Other descriptions in the Apple Intelligence-produced summary were a bit too general for my tastes. I did like the concise descriptions in the Galaxy AI summary, where my lengthy notes were boiled down to two bullet points summing up the biggest additions. It's not the most detailed explanation, but it would work as an at-a-glance synopsis before you dive into the meat of the notes themselves.

Gemini AI on board the Pixel 9 struck the best overall mix between brevity and detail. Google's AI took the bullet points of my original notes and turned them into brief descriptions of each feature — a helpful overview that gets to the heart of what I'd be looking for in a summary.

Winner: Google — While Galaxy AI scores points for getting right to the point in its summary, the more useful recap comes from Gemini AI's more detailed write-up.

If we had restricted these tests to transcripts, it might have been a closer fight, as both Apple and Samsung held their own against Google in converting recordings to text. But throw summaries into the mix, and Google is the clear winner, taking the top spot in four of our six tests. Even in the tests where the Pixel was bested by either the iPhone or the Galaxy S25 Plus, it didn't lag that far behind.

Some of this comes down to what you prefer in a summarization tool. If it's concise summaries, you may be more favorably inclined toward Galaxy AI than I was. Apple Intelligence also shows some promise and would benefit from fine-tuning to make its tools easier to access. But right now, Google is clearly the best at transcription and summarization.


Android Authority
Spotify Jam comes to Android Auto so your whole car can DJ
TL;DR
- Spotify's latest Android Auto update brings the Jam feature to car displays, letting friends join in and contribute music.
- Passengers can join a shared music queue by scanning a QR code on the in-car screen.
- The update also improves offline listening and adds a floating search button for easier access.

Spotify's Android Auto app just got an important upgrade, adding the popular Jam feature to the in-car experience. Jam allows multiple Spotify users to listen together in real time and is now available directly from Spotify's Now Playing screen on Android Auto.

When music is playing, the Android Auto display shows a QR code that passengers can scan to join the Jam session and add tracks to the shared queue. The driver acts as the host and retains control of the Jam session, with the ability to remove any contributors at any time.

This marks the first time Spotify Jam is available on a car interface. Spotify previously made Jam available to desktop users. It's also important to note that you need to be a Spotify Premium subscriber to start or host a Jam. However, free users can join and add songs to the Jam.

Google announced the Android Auto redesign for Spotify as part of its new in-car experiences at I/O 2025, and confirmed the feature will also arrive on vehicles with Google built-in at a later stage.

Apart from Jam, the redesigned Spotify app on Android Auto also brings some other enhancements, including a more prominent "Downloads" section to make offline playback easier, especially useful when driving through areas with poor connectivity. There's also a new floating Search shortcut that gives users quicker access to Spotify's search interface.

All of these features are reportedly (via 9to5Google) part of Spotify's latest update — version 9.0.58.596 — which rolled out to Android users just a few days ago.
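Spotify hasn't published the join-link format, so the URL below is a placeholder, but the QR-join flow itself is simple to sketch with Python's real qrcode package:

```python
# Sketch of the QR-join flow: render a QR code that deep-links a
# passenger's phone into a shared session. The join-URL format is a
# placeholder; Spotify hasn't published the real one.
import qrcode  # pip install qrcode[pil]

join_url = "https://open.spotify.com/jam/JOIN_CODE_PLACEHOLDER"
img = qrcode.make(join_url)   # build the QR code image
img.save("jam_join_qr.png")   # this is what the in-car display would show
print(f"Scan to join: {join_url}")
```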