Latest news with #GeminiRobotics


Entrepreneur
20 hours ago
- Business
- Entrepreneur
AI This Week: Offline Models, Nuclear Systems, and a Surge in Industrial Automation
You're reading Entrepreneur India, an international franchise of Entrepreneur Media. This week in artificial intelligence (AI), developments highlighted a shift towards edge computing, offline capability, and strategic investments across infrastructure, industry, and enterprise.

Google Pushes Offline AI with Gemma 3n and Gemini Robotics

Google formally launched Gemma 3n, a multimodal, open-source AI model capable of running entirely offline on devices with as little as 2GB of RAM. Built on a new architecture called MatFormer, it scales performance across hardware constraints and supports tasks involving text, image, video, and audio. Key innovations such as KV Cache Sharing and Per-Layer Embeddings reduce memory load and speed up real-time applications like voice assistants and video analysis. Gemma 3n supports over 140 languages and processes content in 35, without requiring any cloud connection, offering a significant advantage for privacy-sensitive and remote environments.

Meanwhile, Google DeepMind unveiled Gemini Robotics On-Device, a lightweight AI model designed to run locally on robots such as the Franka FR3 and Apollo. Trained initially on ALOHA systems, the model can understand natural language, execute complex multi-step commands, and perform dexterous tasks like folding clothes and unzipping bags, all without internet access. It is built for latency-sensitive, low-compute environments, allowing robotic systems to operate independently in the field.

Palantir to Develop AI Operating System for Nuclear Construction

Palantir Technologies entered into a USD 100 million, five-year agreement with a Kentucky-based nuclear energy company to co-develop a Nuclear Operating System (NOS). The project follows recent executive orders in the United States aimed at accelerating domestic nuclear plant construction amid soaring demand from AI data centres and cryptocurrency mining operations.
The NOS is intended to reduce costs, simplify planning, and accelerate construction timelines for new nuclear reactors. The initiative underscores how AI is now being applied to complex, regulated infrastructure sectors.

HCLTech Strengthens Enterprise AI Play through Salesforce and Global Alliances

HCLTech expanded its partnership with Salesforce to drive adoption of agentic AI solutions using the Salesforce Agentforce platform across sectors such as healthcare, manufacturing, retail, and financial services. The Indian IT major also announced a strategic alliance with AMD to accelerate digital transformation globally and signed a long-term agreement with a European energy firm to modernise its cloud infrastructure using AI. Additionally, HCLTech secured an engineering services contract with Volvo Group to support automotive innovation from its global centres. Despite these strategic moves, analysts maintain a cautious view on the stock, with a consensus rating of 'Hold' and an average price target of INR 1,670, suggesting a potential downside of approximately 3 per cent from current levels.

Apptronik Launches Elevate Robotics to Advance Industrial Automation

In a bid to go beyond humanoid AI systems, Apptronik launched Elevate Robotics, a wholly owned subsidiary focused on developing industrial-grade mobile manipulation systems. The spin-off aims to build "superhuman" robots designed for heavy-duty industrial tasks. Elevate will operate independently, leveraging Apptronik's decade of expertise and recent funding, which brings the company's total Series A funding to USD 403 million, with backing from investors such as Google, Mercedes-Benz, and Japan Post Capital. The move reflects a broader trend of AI being integrated into large-scale, labour-intensive sectors.

Google Launches Doppl: An AI-Powered Virtual Try-On App

Google also released Doppl, an experimental AI app under Google Labs, which lets users virtually try on outfits by uploading a full-length photo.
The app uses AI to simulate how clothing would look on a person's body, offering both static visuals and animated try-ons. Currently available in the United States on iOS and Android, the app has not yet been rolled out internationally.

Meta in Talks to Raise USD 29 Billion for AI Data Centres

According to media reports, Meta is in discussions to raise USD 29 billion, comprising USD 3 billion in equity and USD 26 billion in debt, from private equity firms including Apollo Global Management, KKR, Brookfield, Carlyle, and PIMCO. The funding will support Meta's ongoing expansion of AI-focused data centres. The move is part of Meta's broader AI strategy, with CEO Mark Zuckerberg stating earlier this year that the company plans to spend up to USD 65 billion on AI infrastructure in 2025.
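As a quick sanity check on the reported mix, the USD 3 billion equity and USD 26 billion debt figures do sum to the USD 29 billion headline number, with debt making up roughly 90 per cent of the package:

```python
# Sanity check on the reported financing mix for Meta's raise.
# Figures are from the article above; the debt-share percentage is derived.
equity_bn = 3.0   # USD billions of equity
debt_bn = 26.0    # USD billions of debt
total_bn = equity_bn + debt_bn

debt_share = debt_bn / total_bn
print(total_bn)             # 29.0
print(f"{debt_share:.1%}")  # 89.7%
```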


Time of India
3 days ago
- Time of India
Google launches Gemini model for robots to run without internet connectivity
Google has launched a new Gemini model specifically designed to run on robots without requiring internet connectivity. The tech giant describes the "Gemini Robotics On-Device" model as an efficient, on-device robotics model offering 'general-purpose dexterity' and faster task adaptation. This new version builds on the Gemini Robotics VLA (vision language action) model, which was introduced in March and brought Gemini 2.0's multimodal reasoning and real-world understanding to physical applications. By operating independently of a data network, the on-device model will support latency-sensitive applications and remain robust in environments with unreliable or no connectivity.

Google is also providing a Gemini Robotics SDK to assist developers. The SDK will allow them to evaluate Gemini Robotics On-Device for their specific tasks and environments, test the model within Google's MuJoCo physics simulator, and adapt it to new domains with a limited number of demonstrations (as few as 50 to 100). Developers can gain access to the SDK by signing up for Google's trusted tester program.

Google's new Gemini model for robots: Capabilities and performance

Google claims that Gemini Robotics On-Device is a lightweight robotics foundation model designed for bi-arm robots that enables advanced dexterous manipulation with minimal computational overhead. Built on the capabilities of Gemini Robotics, it supports rapid experimentation, fine-tuning for new tasks, and local low-latency inference. The company also says the model demonstrates strong generalisation across visual, semantic, and behavioural tasks, effectively following natural language instructions and completing complex actions like unzipping bags or folding clothes, all while operating directly on the robot. In tests, Gemini Robotics On-Device outperforms other on-device models, especially in challenging, out-of-distribution and multi-step scenarios.
It can be fine-tuned with just 50–100 demonstrations, making it highly adaptable to new applications. Originally trained on ALOHA robots, the model was successfully adapted to the bi-arm Franka FR3 and the Apollo humanoid robot, completing tasks like folding dresses and belt assembly. This marks the first availability of a VLA model for on-device fine-tuning, offering powerful robotics capabilities without cloud dependency.
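The "50 to 100 demonstrations" figure describes few-shot adaptation: the model learns a new task from a small set of example trajectories rather than a large dataset. As a loose, toy analogy (pure Python, and nothing like Google's actual fine-tuning pipeline), a policy cloned from a handful of demonstrations could look like this nearest-neighbour sketch — the observations, actions, and 2-D state space are all hypothetical:

```python
# Toy sketch of learning a policy from a handful of demonstrations.
# This is NOT how Gemini Robotics On-Device is fine-tuned; it only
# illustrates that a few (observation, action) examples can already
# define useful behaviour.

def fit_policy(demos: list[tuple[tuple[float, float], str]]):
    """demos: (observation, action) pairs collected from a teacher."""
    def policy(obs: tuple[float, float]) -> str:
        # Pick the action whose demonstration observation is closest.
        def dist(demo):
            (x, y), _ = demo
            return (x - obs[0]) ** 2 + (y - obs[1]) ** 2
        return min(demos, key=dist)[1]
    return policy

# Hypothetical demonstrations: 2-D gripper positions mapped to actions.
demos = [
    ((0.0, 0.0), "reach"),
    ((0.5, 0.1), "grasp"),
    ((0.9, 0.8), "fold"),
]
policy = fit_policy(demos)
print(policy((0.45, 0.15)))  # closest demo is (0.5, 0.1) -> "grasp"
```

Real VLA fine-tuning updates model weights on demonstration trajectories; the sketch's only point is that a small example set can already determine behaviour on nearby, and even somewhat novel, inputs.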


Mint
4 days ago
- Mint
Google Gemini AI model brings real-time intelligence to bi-arm robots
Google DeepMind has announced the launch of a new artificial intelligence model tailored for robotics, capable of functioning entirely on a local device without requiring an active data connection. Named Gemini Robotics On-Device, the advanced model is designed to enable bi-arm robots to carry out complex tasks in real-world environments by combining vision, language and action (VLA) processing.

In a blog post, Carolina Parada, Senior Director and Head of Robotics at Google DeepMind, introduced the new model, highlighting its low-latency performance and flexibility. As it operates independently of the cloud, the model is especially suited to latency-sensitive environments and real-time applications where constant internet connectivity is not feasible. Currently, access to the model is restricted to participants of Google's trusted tester programme. Developers can experiment with the AI system through the Gemini Robotics software development kit (SDK) and the company's MuJoCo physics simulator.

Although Google has not disclosed specific details about the model's architecture or training methodology, it has outlined the model's robust capabilities. Designed for bi-arm robotic platforms, Gemini Robotics On-Device requires minimal computing resources. Remarkably, the system can adapt to new tasks using only 50 to 100 demonstrations, a feature that significantly accelerates deployment in diverse settings. In internal trials, the model demonstrated the ability to interpret natural language commands and perform a wide array of sophisticated tasks, from folding clothes and unzipping bags to handling unfamiliar objects. It also successfully completed precision tasks such as those found in industrial belt assembly, showcasing high levels of dexterity. Though originally trained on ALOHA robotic systems, Gemini Robotics On-Device has also been adapted to work with other bi-arm robots, including Franka Emika's FR3 and Apptronik's Apollo humanoid robot.
According to the American tech giant, the model exhibited consistent generalisation performance across different platforms, even when faced with out-of-distribution tasks or multi-step instructions.


Hans India
4 days ago
- Hans India
Google's Gemini AI Now Powers Robots Without Internet Access
New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.

Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. 'It's small and efficient enough to run directly on a robot,' she told The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.' Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. 'We're actually quite surprised at how strong this on-device model is,' Parada added, pointing to its effectiveness even with minimal training.

The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones. Initially developed using Google's ALOHA robot, it has since been adapted to other robotic systems, including Apptronik's Apollo humanoid and the dual-armed Franka FR3. Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without latency caused by cloud interaction. This is a key differentiator compared to other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing. The local processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' Parada noted, emphasizing the model's flexibility and adaptability.

However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools.
Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

This announcement follows Google's recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, this app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face. Together, these launches signal Google's broader move to decentralize AI, bringing high-performance intelligence directly to user devices, be it phones or robots.
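As noted above, the on-device model ships without the cloud suite's semantic safety layer, so integrators are expected to add their own guards. A deliberately simple, hypothetical example of such a developer-side gate (a keyword deny-list — not Google's mechanism, and far weaker than genuine semantic safety reasoning) could look like:

```python
# Toy sketch of a developer-side safety filter for robot commands.
# Hypothetical: the deny-list and the filter itself are illustrative only,
# not part of any Google API.

BLOCKED_TERMS = {"knife", "throw", "human"}  # hypothetical deny-list

def is_command_allowed(command: str) -> bool:
    """Reject commands containing any blocked term (case-insensitive)."""
    words = set(command.lower().split())
    return not (words & BLOCKED_TERMS)

print(is_command_allowed("fold the laundry"))  # True
print(is_command_allowed("throw the cup"))     # False
```

A production system would layer checks like this on top of the low-level robotic safety systems the article mentions, rather than rely on keyword matching alone.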


Indian Express
4 days ago
- Business
- Indian Express
Google's new Gemini Robotics On-Device AI model runs directly on robots: Watch it in action
Google's DeepMind division, on Tuesday, June 24, released a new AI model called Gemini Robotics On-Device that runs locally on robotic devices. In a blog post, Google says that the new model has been optimised to run efficiently on the robot and shows 'strong general-purpose dexterity and task generalisation.' The new offline AI model builds on the company's Gemini Robotics model, which the tech giant unveiled earlier this year in March.

The Gemini Robotics On-Device model can control a robot's movement and, like ChatGPT, can understand natural language prompts. Since it works without an active internet connection, Google says it is especially useful for latency-sensitive applications or in areas where there is no connectivity. Designed for robots with two arms, Google explains that Gemini Robotics On-Device is engineered to require 'minimal computational resources' and can complete highly dexterous tasks like folding clothes and unzipping bags, to name a few.

Compared to other on-device alternatives, Google claims that Gemini Robotics On-Device outperforms the competition when it comes to completing complex multi-step instructions and challenging out-of-distribution tasks. Going by the benchmarks, Google's new offline model comes close to its cloud-based offering. Initially trained to work with ALOHA robots, the company says its new model has been adapted and successfully run on a bi-arm Franka FR3 robot and an Apollo humanoid as well. The tech giant said that on the bi-arm Franka FR3, the model was able to follow general-purpose instructions and handle previously unseen objects and scenes, such as executing industrial belt assembly. As for Apollo, the model allowed the humanoid robot to manipulate different, unseen objects in a general manner. Developers can try out Gemini Robotics On-Device using the software development kit (SDK).
Google isn't the only tech giant working on AI models for robots. At GTC 2025, NVIDIA unveiled GR00T N1, an AI model for humanoid robots, while Hugging Face is working on its very own robot powered by an in-house, open-source model.