Tech Wrap May 20: Google I/O 2025, Nothing Phone 3 launch, NotebookLM app
BS Tech New Delhi
Google I/O starts today: Will it be all AI, or will hardware share the stage?
Google is kicking off its annual developer conference, Google I/O, on May 20 with a keynote address. Although 'The Android Show: I/O Edition' has already detailed changes in Android 16, the main event is likely to emphasize enhancements to Gemini and its integration throughout Google's product range. Still, the possibility of an unexpected hardware announcement remains.
Nothing's 'true flagship' Phone 3 is set to launch in July
UK-based tech brand Nothing has officially announced that its upcoming smartphone, the Nothing Phone 3, will make its global debut in July 2025. Described by the company as its first genuine flagship device, the Phone 3 is expected to feature high-end materials, improved performance, and a software experience that 'genuinely levels things up.'
Google has brought its NotebookLM app to Android and iOS, broadening access to its AI-enhanced note-taking and research tool. The tool had been desktop-only since its 2023 release; the mobile version now enables users to generate summaries, analyze documents, and ask questions directly on their smartphones.
At its Build 2025 developer conference held on May 19, Microsoft introduced a wave of AI-based enhancements across its ecosystem. Though many of the new features target developers, several reveal how AI will soon influence daily user interactions with Microsoft services and web browsing.
At Computex 2025, MSI introduced a new version of its Claw A8 handheld gaming device, now powered by AMD's Ryzen Z2 Extreme chip. This model supports up to 24GB of DDR5 RAM, less than the 32GB included in the Intel-powered version.
HP expands OmniBook 5 series with Snapdragon chip-powered AI PCs
HP has added new models to its OmniBook 5 series, this time featuring AI-enabled PCs powered by Qualcomm's Snapdragon X-series processors. This follows a previous release in the same line-up that used AMD chips, and the latest additions include Snapdragon X and X Plus-based Copilot Plus PCs.
Google is introducing an update to its Translate app that allows iPhone users to set it as their default translation service in place of Apple's built-in Translate. The update follows iOS 18.4, which added support for choosing default apps for tasks such as navigation, music playback, and translation.
Huawei has unveiled its latest device, the MateBook Fold, in China—a foldable laptop featuring an all-screen design. The laptop is just 7.3mm thick when open and 14.9mm when closed. It includes an 18-inch flexible OLED display in the footprint of a 13-inch laptop, runs on HarmonyOS 5, and offers 32GB RAM with 2TB SSD storage. The base model is priced at CNY 23,999.
Google is said to be introducing a new feature in its Gemini app, allowing users to search through previous chats by keyword or topic. As per a report from 9to5Google, the 'Search for chats' feature will make it easier for users to revisit earlier questions and interactions.
Apple is reportedly developing its own AI chatbot to keep pace with advancements in artificial intelligence. According to a Bloomberg report cited by 9to5Mac, internal testing has shown major progress in the chatbot's capabilities over the past six months. Some Apple insiders now consider the tool to be 'on par with recent versions of ChatGPT.'
MediaTek plans to begin manufacturing its smallest chip yet—based on 2-nanometer technology—starting this September, a company executive said. At present, the smallest chips in production use a 3-nanometer process. 'We are now moving into 2 nanometers. We will be taking out our first 2 nanometers device in September this year. Of course, this is a high-volume chip,' said MediaTek Vice Chairman and CEO Rick Tsai at Computex.
Qualcomm, the American chipmaker, has officially announced that it will unveil its next-generation Snapdragon 8 Elite processor along with other new products by the end of September 2025. This marks an earlier reveal compared to last year, when the company's annual event took place in October. The updated timeline was shared during Qualcomm's presentation at Computex 2025.

Related Articles


Hans India
12 minutes ago
Google's AI Now Makes Calls and Checks Prices for You
Google is taking another big step in putting artificial intelligence to work for everyday tasks. The tech giant has now made its AI-powered calling feature available to everyone in the US through Search. With this new option, you can ask Google's AI to call local businesses — like pet groomers, auto shops, or dry cleaners — to check prices or availability, so you don't have to pick up the phone yourself.

The feature first began testing in January and is aimed at simplifying small but often time-consuming tasks. Now, when you search for certain services on Google, you'll see a 'have AI check pricing' option under the business listing. Once selected, Google's AI will ask you a few details, like your pet's breed or the service you need, your preferred time, and whether you'd like updates by email or text.

According to Robby Stein, vice president of product for Google Search, the AI uses Google's Duplex technology combined with Gemini to handle the calls. 'Gemini, with Duplex tech, will be able to make calls on your behalf,' Stein told The Verge. 'The calling tool will announce itself as an AI from Google trying to get information on behalf of a customer and get your info and details conveyed so that you don't have to spend all of this time doing this.' Once the AI gets what it needs, it will send you a text update about prices or available slots.

This tool could be especially useful for younger people who, as studies show, often dislike making phone calls. Business owners who prefer not to receive AI calls can easily opt out in their Google profile settings. To manage usage, Google's AI Pro and AI Ultra subscribers get 'higher limits' for how often they can use the feature.

Alongside this wider rollout, Google is also experimenting with its more advanced Gemini 2.5 Pro model in AI Mode — an AI-powered search tool that launched across the US in May. Users with AI Pro and AI Ultra subscriptions who've joined the AI Mode test in Google Labs can switch to Gemini 2.5 Pro, which Stein says excels at 'advanced reasoning, math, and code.'

Google is also testing Deep Search within AI Mode, which can build detailed reports about a query by reasoning, asking additional questions, and double-checking its results through multi-step searches. This deeper layer of AI search is another sign of how Google is expanding its AI ecosystem to save users time and effort in daily tasks.


Time of India
42 minutes ago
OpenAI lists Google as cloud partner amid growing demand for computing capacity
OpenAI has included Alphabet's Google Cloud among its suppliers to meet escalating demands for computing capacity, according to an updated list published on the ChatGPT maker's website. The artificial-intelligence giant also relies on services from Microsoft, Oracle, and CoreWeave.

The deal with Google, finalized in May after months of discussions, was first reported by Reuters citing a source in June. The arrangement underscores how massive computing demands to train and deploy AI models are reshaping the competitive dynamics in AI, and marks OpenAI's latest move to diversify its compute sources beyond its major backer Microsoft, including through its high-profile Stargate data centre project.

Earlier this year, OpenAI partnered with SoftBank and Oracle on the $500 billion Stargate infrastructure program and signed multi-billion-dollar agreements with CoreWeave to bolster computing capacity.

The partnership with Google is the latest of several maneuvers made by OpenAI to reduce its dependency on Microsoft, whose Azure cloud service had served as the ChatGPT maker's exclusive data centre infrastructure provider until January. Google and OpenAI had discussed an arrangement for months but were previously blocked from signing a deal due to OpenAI's lock-in with Microsoft, a source had told Reuters.


Hans India
42 minutes ago
Smarter Assistance Unlocked: Microsoft Copilot Now Scans Your Entire Desktop
Microsoft will soon expand Copilot Vision for Windows Insiders by allowing the AI assistant to view your full screen, not just two apps at a time. The Windows 11 AI feature can take in whatever you have open, whether a browser or a whole app window, and offer real-time coaching, suggestions, and tips on whatever you are viewing.

Copilot Vision, the AI tool available on Windows computers, currently lets the assistant view and analyse two apps at a time. The latest beta update for Windows Insiders lets it see any open app or browser, and even your whole desktop. With this screen-scanning update, Microsoft is positioning Copilot Vision as an intelligent, intuitive, and responsive AI assistant that can work from your full screen instead of just two apps or windows.

How does it work?
It's like screen sharing, but smarter. Unlike Microsoft's Recall feature, which automatically takes snapshots of your screen, Copilot Vision is not automatic. Users have to click the glasses icon in the Copilot app and select what they want the AI to view, much as they would share their screen on a video call. That gives users more control, since they choose exactly what to share with the AI.

Microsoft Copilot Vision: Smarter assistance in real time
Microsoft says Copilot Vision will also be smarter with its real-time assistance. The AI will offer suggestions by analysing visual content and will be able to coach you aloud. Whether you are making changes to your resume or learning a new game, the AI assistant will now be able to guide you with improvements, offer insights, and give live feedback. Copilot Vision first appeared in testing as an Edge browser assistant that could see and respond to anything on the web.