
Google's Gemini AI Now Powers Robots Without Internet Access
New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.
Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. 'It's small and efficient enough to run directly on a robot,' she told The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.'
Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. 'We're actually quite surprised at how strong this on-device model is,' Parada added, pointing to its effectiveness even with minimal training.
The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones. Initially developed using Google's ALOHA robot, it has since been adapted to other robotic systems including Apptronik's Apollo humanoid and the dual-armed Franka FR3.
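Google has not published the details of that adaptation pipeline, but learning a task from a handful of recorded demonstrations broadly resembles supervised imitation learning (behaviour cloning): the model is shown what a human operator did and is trained to reproduce those actions. The sketch below is a generic, hypothetical illustration of that idea in PyTorch; the network, dimensions and data are placeholders and are not Google's actual model or tooling.

```python
# A minimal, generic behaviour-cloning sketch in PyTorch. Purely illustrative:
# the network, dimensions and random "demonstrations" are placeholders, not
# Gemini Robotics On-Device or Google's actual training pipeline.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 7   # e.g. flattened sensor features -> 7-DoF arm command

policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in for 50-100 teleoperated demonstrations, each a trajectory of
# (observation, action) pairs recorded while a human guides the robot.
demos = [(torch.randn(200, OBS_DIM), torch.randn(200, ACT_DIM)) for _ in range(60)]

for epoch in range(10):
    for obs, actions in demos:
        optimizer.zero_grad()
        loss = loss_fn(policy(obs), actions)  # penalise deviation from the demonstrated actions
        loss.backward()
        optimizer.step()
```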
Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without latency caused by cloud interaction. This is a key differentiator compared to other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing.
The local processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' Parada noted, emphasizing the model's flexibility and adaptability.
However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools. Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.
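Google has not said exactly how such a layer should be wired up, so the following is only a rough, hypothetical sketch of the pattern Parada describes: the on-device policy proposes an action, an optional external reasoning check (for example, a cloud model reached through an API such as Gemini Live, when connectivity allows) can veto it, and a local low-level safeguard always enforces hard limits. Every name in the sketch is an illustrative placeholder, not Google's API.

```python
# Hypothetical sketch of a developer-supplied safety layer around an on-device
# policy. Nothing here is Google's API: the policy, the semantic checker and
# the joint limits are all illustrative placeholders.
from typing import Callable, Optional, Sequence

JOINT_LIMITS = [(-2.9, 2.9)] * 7   # assumed per-joint limits (radians) for a 7-DoF arm

def clamp_to_limits(action: Sequence[float]) -> list:
    """Low-level safeguard: never command a joint outside its allowed range."""
    return [max(lo, min(hi, a)) for a, (lo, hi) in zip(action, JOINT_LIMITS)]

def execute_safely(policy: Callable, observation, semantic_check: Optional[Callable] = None):
    action = policy(observation)               # on-device model proposes an action
    if semantic_check and not semantic_check(observation, action):
        return None                            # veto anything a reasoning check flags as unsafe
    return clamp_to_limits(action)             # always enforce hard limits locally
```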
This announcement follows Google's recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, this app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face.
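The Gallery itself runs models through on-device runtimes such as TensorFlow Lite (now LiteRT). As a rough desktop analogue of the same idea, offline text generation with the compact Gemma weights can be sketched with the Hugging Face transformers library, assuming the google/gemma-3-1b-it weights have already been downloaded so that generation itself needs no network connection:

```python
# A rough desktop analogue of offline generation with the compact Gemma 3 1B
# model, using the Hugging Face transformers library. Assumption: the
# google/gemma-3-1b-it weights were fetched beforehand (and the Gemma licence
# accepted), so the generation step runs with no network access.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-1b-it"                      # compact, instruction-tuned Gemma 3
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In two sentences, explain why on-device AI helps with privacy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)  # inference happens entirely locally
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```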
Together, these launches signal Google's broader move to decentralize AI, bringing high-performance intelligence directly to user devices—be it phones or robots.
Related Articles


Mint
Google's AI charge: How Sergey Brin is taking on the might of OpenAI
New Delhi/Mountain View, California: In Mountain View, California, right next to Google's three-million-square-foot Googleplex headquarters, is a satellite office. From the outside there is nothing seemingly special about it, but the building currently houses an elite team of specialist engineers who have been tasked with only one thing: build the best foundational artificial intelligence (AI) model in the world. At the centre of its biggest room sits a man whom many in Silicon Valley refer to as a living legend—Sergey Brin, Google's co-founder. Brin retired in December 2019 but returned to the company last year to lead a light brigade of over 300 engineers, all of whom are charging at OpenAI's GPT models, Google's primary rival in a high-stakes battle. OpenAI's GPT models are disrupting the way people search, posing an existential threat to Alphabet Inc., Google's parent company.

Brin is spearheading the development of Gemini, Google's suite of foundational AI models. Gemini's success, or failure, would impact two major areas within Alphabet—Search, and the nascent space of video generation. For one, Search currently accounts for 56% of Alphabet's annual revenue of $350 billion. Search is also a matter of personal pride for Brin and Larry Page, Google's other co-founder. Giving up its market dominance in Search means letting go of the duo's legacy—their entire life's work.

Alongside Search, Brin was also concerned about Sora, OpenAI's video generation model. Last year, Google briefly showcased Veo, its video-generating foundational model. However, the market saw Veo as an effort by Google to catch up with OpenAI. "This prompted Brin's efforts to create Google Flow this year and launch the AI subscription plans—all a part of his efforts to show that Google, in fact, is still the behemoth as far as Big Tech is concerned," said a senior executive working on the integration of AI in Google's cloud offerings. He didn't want to be identified.

At I/O 2025, an annual developer conference held in May this year, Google launched Flow, a video generation and editing platform that lets users create films with dialogue and background music, without needing any camera, audio or editing setup at all. A second executive, who also didn't want to be identified, said that much of Google's AI showcase at the conference was driven by what Brin's team has been up to. "The core task that Brin is leading right now is to prove that Google is not following OpenAI's lead in AI—it is ready to lead innovation for others to follow. Last year, announcements that Google made were all either work in progress, or an iteration of what OpenAI had already showcased. This year, we've largely undone that," said the executive, who works with Google's worldwide developer relations teams.

A legacy at risk
Much of Google's success, thus far, lies in the 'PageRank' algorithm that made Search the global behemoth it is today. While the algorithm's patent is owned by Stanford University—Brin's alma mater—he and Page were the ones who invented it. After failing to sell its algorithm to then-market leader Yahoo twice between 1998 and 2002, Google went on to lead the market globally. In 2021, Yahoo was sold to investment fund Apollo Global Management for $4.88 billion. Alphabet, in 2024, generated $350 billion in annual revenue. Page, to be sure, is no longer involved with Google's everyday operations, even though he retains a board seat.
Instead, Page is focusing on a new AI venture, Dynatomics, which seeks to use generative AI to automate design-led manufacturing of products.

In June 2017, a research paper by Google researchers titled 'Attention is all you need' gave birth to the transformer model, the fundamental architecture that underpins 'foundational' models. These models, trained on massive troves of data now running into the trillions of tokens, aim to understand, think, calculate and feel like humans. The paper, and the study behind it, came out of Google's own research. But Google essentially squandered a technology that it believes it should rightfully lead. In November 2022, OpenAI—still not well-known back then—introduced ChatGPT, taking the world by storm and causing futurists to predict the doom of human jobs as we know them today. Others said the nascent technology had set in motion an 'AI revolution', a seismic shift in the socio-economic balance akin to the industrial revolution of the 18th century.

Alongside OpenAI's shortcut to global stardom, other Big Tech firms started cashing in on the AI boom. Microsoft was the first to pounce on the opportunity, investing nearly $14 billion in OpenAI and striking various forms of exclusive partnerships. Meta went the open-source way, appearing as a surprise early mover with its Llama family of foundational AI models. By December 2024, Amazon had announced its own family of 'Nova' foundational AI models, even though, among Big Tech firms, its direct exposure to AI's algorithmic excellence was the least (Amazon earns its core revenue from e-commerce and cloud services). Apart from Google, only Apple has so far come off worse: its implementation of AI has yet to draw an enthusiastic response from customers, and analysts remain sceptical about its ability to keep up with its Big Tech peers.

Too big, too slow
Analysts say much of Google's sluggish start in generative AI is attributable to the company's way of functioning. Jayanth N. Kolla, cofounder and partner at consultancy firm Convergence Catalyst, said that at one point there were concerns among senior Google staff that the company was becoming like IBM. "Too big for its own good, too complacent, and too slow to move on anything," he said.

In 2023, following the hype and surge of ChatGPT and OpenAI, Google shared an internal note asking all its employees to use its internal generative AI platform as much as possible. "The idea was to maximize the usage hours and mine as much data as possible to bring it up to a certain scale," said a third executive, who is with Google's software engineering teams. "Bard and PaLM (the precursors to Gemini), however, underperformed, which spurred Brin to start taking increasing interest in Google's AI progress," the executive added.

Brin, who turns 52 this August, isn't exactly shy about his role. At I/O 2025, he made a surprise appearance at a fireside chat with DeepMind chief and Nobel laureate Demis Hassabis. DeepMind, an AI research laboratory, is a subsidiary of Alphabet. Speaking about why he came out of retirement, Brin said, "As a computer scientist, it's a very unique time in history. Honestly, anyone who's a computer scientist should not be retired right now, and be working on AI." He added that he intends to make Gemini "the world's first AGI, before 2030".
AGI stands for artificial general intelligence, loosely defined as an algorithm that mimics the functioning of the human brain, capable of structuring randomized thought, emotion and empathy—qualities that machines lack.

Google showcased more than 16 new products and launches at I/O 2025. The list includes its foundational model's new reasoning capabilities; a 3D video-conferencing platform called Google Beam; an always-on version of Gemini Live; a production variant of Project Astra, a multi-modal, all-purpose AI assistant; and Android XR, a new platform for wearable devices. The headlines, however, were made by Search introducing a new 'AI Mode', showcasing for the first time a chat-based interface that changes the way Google's search engine has worked since the company was incorporated in 1998.

Beating OpenAI
Insiders Mint spoke to said that over the past 12 months, Brin has had a single-minded focus—beating OpenAI. A fourth executive, working on product management at Google, said that the transformer model "should be rightfully our area of expertise and leadership." Since 2024, Brin has also been showing up personally at I/O—entering product demos without prior warning to check on audience feedback.

Executives and analysts believe that Brin's urgency is rooted in Google's own history. In turn, his return has played a major role in shifting and channelling the company's focus. "Sergey has been back since 2023. He's been at work every day focused on AI and Gemini. Another key player is Peter Danenberg who is the godfather of Gemini. In general, the existential threat from Microsoft and Open AI galvanized the entirety of Google to focus on AI," said Ray 'R' Wang, chief executive of US-based tech consulting firm Constellation Research.

Busy Pichai
Brin is bringing unwavering focus to Gemini, Search and Veo, as Sundar Pichai, the CEO of Google and Alphabet, has multiple areas to focus on—lawsuits, global businesses, government relations, cloud, Android and more, the first executive cited above said. "In the long run, Google foresees its ability to use video generation as a platform to rope in advertisers worldwide, and eventually, establish market dominance in this field," he added.

Pichai, for the longest time, has been viewed as a conservative leader, steering Google's ship with "one eye on the rear-view mirror," said an analyst who didn't want to be identified. "For Brin, that's too safe a stance at a time when Silicon Valley is going to war with each other over AI dominance. Plus, Pichai has too much to deal with. Brin's view is that AI today needs undivided attention and he's clearly right, as Google's spate of product launches and share price movement shows," the analyst added. In the past year, the company's shares are down over 6%, compared to Microsoft's rise of nearly 10%.

While there is no indication that Pichai, who will complete 10 years as the CEO of Google this August (he took over as Alphabet's chief in December 2019), is on his way out, the leadership directives seem to be clearly divided. Google did not respond to Mint's request for comment on Brin's recent involvement.

Narrowing gap?
Brin's work may be showing early results. At a pre-keynote session with journalists during the developer conference, Pichai said that the Gemini developer platform currently had over seven million developers using its code to create AI applications.
This is significant because, as of this year, OpenAI's official statistics peg its developer base at around three million. Earlier this year, during an antitrust trial in a US court, Google conceded that while its developer count is higher than OpenAI's, the latter is still outpacing Google in monthly active users. As per filings, OpenAI's ChatGPT platform had over 600 million monthly active users, to Gemini's 350 million. Gemini's numbers, though, are a huge improvement—a year ago, ChatGPT had 400 million monthly active users, in comparison to Gemini's 9 million.

Some analysts do believe that the tide is turning. "Google is clearly in the lead for AI right now. However, search and ads and mass personalization is about to become more targeted, more actionable, and more intelligent. AI native companies will disrupt existing companies, because intelligence (in business systems) is doubling every seven months—and these AI native companies deliver on exponential efficiency," Constellation's Wang said.

Phil Fersht, chief executive of New York-based tech analysis firm HFS Research, said that Google is "sitting in an unbelievable position to win the enterprise AI war—if it can get its business model right." "Net-net, the firm needs to be prepared to cannibalize half of its legacy search business and insert Gemini onto as many enterprises and individual users as possible. It has the resources, talent, and user base to take on OpenAI, Microsoft and Anthropic," he said.

Speed wins
GenAI startups such as OpenAI, Anthropic and Perplexity are known to move fast. They deploy features quickly, reach out to developers and serve a broad variety of AI use cases. Google, in contrast, is viewed as slower, as Kolla of Convergence Catalyst hinted. Pichai, speaking with journalists a day ahead of I/O 2025, underlined a new way of working—with speed. "Typically, we don't make announcements leading up to our big day at I/O each year, but this time it's different. Right now, we're launching products in very frequent intervals, and making technological progress at a rapid pace like never before," he said. Then, at a post-event chat, Pichai reiterated that Google is now making AI announcements to the world "within an hour or two" of the DeepMind team showcasing the latest advancements in Gemini.

"In the end, agility and appeal to developers will play the biggest role," said Kashyap Kompella, founder of tech consultancy and research firm RPA2AI Research. "There's no denying that its rivals are moving fast, and there are clear indications within the industry that Google's AI products are not the first choice for developers and end-users," he added.

The hope is that Brin's startup-style approach, coupled with Google's inherent strength garnered over almost three decades, could be the company's trump card, says Thomas Reuner, principal analyst at UK-based tech consultancy firm PAC. "Brin might help shore up Google's advertising business in the short term, but its biggest strategic assets are threefold: the vast data assets from the search business, data integration at scale and the unique IP of DeepMind," he said. "Given the market noise around generative and agentic AI, these assets don't always make the headlines but provide the moat that so many startups are lacking," he added.

Sitting in that satellite office in Mountain View, Brin may be hoping that this moat could firmly establish Gemini, akin to his PageRank moment 29 years ago.


Time of India
Samsung Galaxy M36 5G sale date revealed: Launched with 6.7-inch AMOLED screen, 120Hz refresh rate, 5,000mAh battery, and more
Samsung Galaxy M36 5G sale date: The Samsung Galaxy M36 5G launch in India is generating significant buzz, arriving just in time for the summer season. Poised to shake up the sub-₹20,000 mid-range market, this latest M-series offering brings a 50 MP triple rear camera setup, a vibrant 6.7-inch Super AMOLED display with a 120 Hz refresh rate, and a robust Exynos 1380 processor. Enhanced with intelligent tools like Circle to Search, Gemini AI features, and an AI photo-editing suite, Samsung is embracing the mobile AI revolution. With a 5,000 mAh battery and fast charging, it is designed to deliver all-day performance. If you're hunting for a feature-packed smartphone that blends power, AI smarts, and affordability, the Galaxy M36 5G could be your next big catch. Read on to uncover why it's poised to redefine value in 2025.

Samsung Galaxy M36 5G specifications
Six generations of Android upgrades and six years of security updates are confirmed for the Galaxy M36 5G, which runs One UI 7 based on Android 15. The phone has a 6.7-inch full-HD+ (1,080x2,340 pixels) Super AMOLED display with a 120 Hz refresh rate, protected by Corning Gorilla Glass Victus Plus. It has an Exynos 1380 processor, up to 256GB of internal storage, and 8GB of RAM. The Galaxy M36 5G has a triple camera array on the rear, led by a 50-megapixel primary sensor with OIS support, alongside a 12-megapixel ultrawide camera and a 5-megapixel macro camera. On the front, the phone includes a 12-megapixel selfie camera, and both the front and rear cameras support 4K video recording. AI image-editing features on the Galaxy M36 5G include Object Eraser, Image Clipper, and Edit Suggestions; AI Select and Google's Circle to Search function are also included. For security, it provides the Knox Vault feature. The Galaxy M36 5G packs a 5,000mAh battery that supports 45W fast charging, and the phone is 7.7 mm thick.

Samsung Galaxy M36 5G price
The Samsung Galaxy M36 5G with 6GB of RAM and 128GB of storage costs Rs. 22,999, and is available at an effective price of Rs. 16,999 once bank discounts are applied. The 8GB + 128GB and 8GB + 256GB variants, including bank discounts, are priced at Rs. 17,999 and Rs. 20,999, respectively. It is available in Velvet Black, Orange Haze, and Serene Green.

Samsung Galaxy M36 5G sale date
Beginning July 12, the phone will go on sale in India on Amazon, on Samsung India's online store, and at select physical retailers. For the latest and more interesting tech news, keep reading Indiatimes Tech.


Indian Express
YouTube rolls out AI search results for Premium users: Will it impact views, engagement?
Google is bringing AI-generated search results to YouTube as part of its broader effort to reinvent the traditional search experience by integrating generative AI across its entire ecosystem. The AI-generated search results on the video-sharing platform will appear at the top of the results page, featuring multiple YouTube videos along with an AI-generated summary of each video. Users can tap on a video's thumbnail to begin playing it directly from the search results, and the summary accompanying each video will include the information most relevant to the user's search query.

However, the AI-powered search experience is currently limited to YouTube Premium subscribers. It is an opt-in feature, which means Premium subscribers have to manually enable it by visiting YouTube's experimental page.

The move signals Google's shift towards generative AI-based search and discovery, with AI-summarised answers replacing traditional links. Similar to AI Overviews in Google Search, the feature is designed to appear above organic search results as part of the company's strategy to have more of its users engage with its AI systems. 'In the coming days, our conversational AI tool will be expanding to some non-Premium users in the US. Premium members already love it for getting more info, recommendations, and even quizzing themselves on key concepts in academic videos,' YouTube said in a blog post published on June 26.

While only YouTube Premium subscribers can currently choose to see AI-generated search results on the platform, it is likely that Google will expand access to all users in the future. By showing AI-generated summaries of videos, YouTube may leave users less inclined to open videos and watch them on the platform. The feature could also affect engagement, as fewer users might comment, subscribe, and generally interact with content creators.

Something similar is already happening in web search. Multiple studies have shown that people are increasingly looking for information by asking chatbots like ChatGPT or Gemini instead of running searches through web browsers like Safari. This defection away from traditional search engines towards generative AI has negative consequences, especially for publishers and websites that have relied on search traffic to generate revenue. A recent study by content licensing platform TollBit found that news sites and blogs receive 96 per cent less referral traffic from generative AI-driven search engines than from traditional Google Search.

When asked about publishers seeing a dip in traffic coming from Search, Elizabeth Reid, the head of Google Search, previously said: 'We see that the clicks to web pages when AI Overviews exist are of higher quality. People spend more time on these pages and engage more. They are expressing higher satisfaction with the responses when we show the AI Overviews.'

Even though the video is just one tap away, the AI-generated summary in YouTube search results will probably give users an idea of all the relevant parts of the video. This could potentially make it harder for YouTube channels to grow and earn revenue.

In addition, YouTube is bringing its Veo 3 AI video generation model to YouTube Shorts in the coming months, according to CEO Neal Mohan. The AI model, capable of generating cinematic-level visuals complete with sound and dialogue, was reportedly trained on subsets of the 20-billion-video library uploaded to YouTube.