Latest news on Gemini Robotics On-Device


Time of India
4 days ago
Google launches Gemini model for robots to run without internet connectivity
Google has launched a new Gemini model specifically designed to run on robots without requiring internet connectivity. The tech giant describes the "Gemini Robotics On-Device" model as an efficient, on-device robotics model offering 'general-purpose dexterity' and faster task adaptation. This new version builds on the Gemini Robotics VLA (vision-language-action) model, introduced in March, which brought Gemini 2.0's multimodal reasoning and real-world understanding to physical applications. By operating independently of a data network, the on-device model will support latency-sensitive applications and remain robust in environments with unreliable or no connectivity.

Google is also providing a Gemini Robotics SDK to assist developers. The SDK will allow them to evaluate Gemini Robotics On-Device for their specific tasks and environments, test the model within Google's MuJoCo physics simulator, and adapt it to new domains with a limited number of demonstrations (as few as 50 to 100). Developers can gain access to the SDK by signing up for Google's trusted tester program.

Google's new Gemini model for robots: Capabilities and performance

Google claims that Gemini Robotics On-Device is a lightweight robotics foundation model designed for bi-arm robots that enables advanced dexterous manipulation with minimal computational overhead. Built on the capabilities of Gemini Robotics, it supports rapid experimentation, fine-tuning for new tasks, and local low-latency inference. The company also promises that the model demonstrates strong generalisation across visual, semantic, and behavioural tasks, effectively following natural language instructions and completing complex actions like unzipping bags or folding clothes, all while operating directly on the robot. In tests, Gemini Robotics On-Device outperforms other on-device models, especially in challenging, out-of-distribution and multi-step scenarios.
It can be fine-tuned with just 50–100 demonstrations, making it highly adaptable to new applications. Originally trained on ALOHA robots, the model was successfully adapted to the bi-arm Franka FR3 and the Apollo humanoid robot, completing tasks like folding dresses and belt assembly. This marks the first availability of a VLA model for on-device fine-tuning, offering powerful robotics capabilities without cloud dependency.
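The "adapt with 50–100 demonstrations" workflow described above is essentially few-shot imitation learning. The actual Gemini Robotics SDK is gated behind Google's trusted tester program, so as a rough, purely illustrative sketch of what fitting a policy to a few dozen demonstrations involves, here is a toy behaviour-cloning loop in plain Python (every name here is invented for illustration and bears no relation to the real SDK):

```python
import random

def make_demos(n=80, true_w=0.5):
    """Generate n synthetic (observation, action) pairs, standing in
    for the 50-100 teleoperated demonstrations the articles mention."""
    random.seed(0)
    return [(o, true_w * o) for o in (random.uniform(-1, 1) for _ in range(n))]

def behavior_clone(demos, lr=0.1, epochs=200):
    """Fit a 1-D linear policy a = w * o by minimising mean squared
    error against the demonstrated actions (plain gradient descent)."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * o - a) * o for o, a in demos) / len(demos)
        w -= lr * grad
    return w

demos = make_demos()
w = behavior_clone(demos)
print(round(w, 3))  # converges toward the demonstrated mapping (0.5)
```

A real VLA model replaces the linear map with a large vision-language-action network and the scalar observations with camera frames and language instructions, but the principle is the same: a small supervised pass over demonstration pairs specialises a pre-trained policy to a new task.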


Mint
4 days ago
Google Gemini AI model brings real-time intelligence to bi-arm robots
Google DeepMind has announced the launch of a new artificial intelligence model tailored for robotics, capable of functioning entirely on a local device without requiring an active data connection. Named Gemini Robotics On-Device, the advanced model is designed to enable bi-arm robots to carry out complex tasks in real-world environments by combining vision, language and action (VLA) processing.

In a blog post, Carolina Parada, Senior Director and Head of Robotics at Google DeepMind, introduced the new model, highlighting its low-latency performance and flexibility. As it operates independently of the cloud, the model is especially suited to latency-sensitive environments and real-time applications where constant internet connectivity is not feasible. Currently, access to the model is restricted to participants of Google's trusted tester programme. Developers can experiment with the AI system through the Gemini Robotics software development kit (SDK) and the company's MuJoCo physics simulator.

Although Google has not disclosed specific details about the model's architecture or training methodology, it has outlined the model's robust capabilities. Designed for bi-arm robotic platforms, Gemini Robotics On-Device requires minimal computing resources. Remarkably, the system can adapt to new tasks using only 50 to 100 demonstrations, a feature that significantly accelerates deployment in diverse settings.

In internal trials, the model demonstrated the ability to interpret natural language commands and perform a wide array of sophisticated tasks, from folding clothes and unzipping bags to handling unfamiliar objects. It also successfully completed precision tasks such as those found in industrial belt assembly, showcasing high levels of dexterity. Though originally trained on ALOHA robotic systems, Gemini Robotics On-Device has also been adapted to work with other bi-arm robots, including Franka Emika's FR3 and Apptronik's Apollo humanoid robot. According to the American tech giant, the model exhibited consistent generalisation performance across different platforms, even when faced with out-of-distribution tasks or multi-step instructions.


Time of India
4 days ago
Google launches Gemini Robotics model capable of running locally on robots
Synopsis: Google DeepMind has introduced Gemini Robotics On-Device. This model allows robots to function independently, even without internet. It's designed for quick responses and use in areas with limited connectivity. The model has been tested on various robots, including the Apollo humanoid. Developers can now evaluate Gemini Robotics On-Device using Google's software development kit.


Hans India
4 days ago
Google's Gemini AI Now Powers Robots Without Internet Access
New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.

Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. 'It's small and efficient enough to run directly on a robot,' she told The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.' Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. 'We're actually quite surprised at how strong this on-device model is,' Parada added, pointing to its effectiveness even with minimal training.

The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones. Initially developed using Google's ALOHA robot, it has since been adapted to other robotic systems including Apptronik's Apollo humanoid and the dual-armed Franka FR3. Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without latency caused by cloud interaction. This is a key differentiator compared to other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing.

The local processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' Parada noted, emphasizing the model's flexibility and adaptability.

However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools. Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

This announcement follows Google's recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, this app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face. Together, these launches signal Google's broader move to decentralize AI, bringing high-performance intelligence directly to user devices, be it phones or robots.


India Today
4 days ago
Google's new Gemini AI can power robots and make them work without internet
Google DeepMind has launched a new version of its Gemini Robotics AI model that allows robots to operate entirely without internet access. Called Gemini Robotics On-Device, the system is designed to power robots in real-world settings where speed, autonomy, and privacy are crucial. This update marks a significant shift from earlier models that relied on cloud connectivity. By enabling robots to process information and make decisions on the device itself, Google hopes to make robotics more practical in offline environments such as remote areas, secure facilities, and latency-sensitive applications.

'It's small and efficient enough to run directly on a robot,' said Carolina Parada, head of robotics at Google DeepMind, in a statement to The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.' Despite being a smaller variant, the on-device version holds its own. 'We're actually quite surprised at how strong this on-device model is,' Parada said.

Gemini Robotics On-Device brings several new features to the table. The model can carry out tasks straight out of the box and learn new ones from as few as 50 to 100 demonstrations. It was initially trained using Google's ALOHA robot, but it has since been successfully adapted for use with other robotic systems, such as Apptronik's Apollo humanoid and the dual-armed Franka FR3. Google says that it can perform detailed actions such as folding clothes or unzipping bags, all while running low-latency inference on the robot itself.

For perspective, Tesla's humanoid robot, Optimus, can also do all those things – folding a t-shirt, boiling an egg, dancing, etc – but it needs an internet connection to send data to cloud servers. In the case of Gemini Robotics On-Device, however, a standout feature is that all data is processed locally. That makes it particularly useful for privacy-sensitive applications, such as healthcare and industrial automation, where data security is a concern.

'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' said Parada, highlighting the model's flexibility. Because the system does not rely on the cloud, it also keeps functioning in places with weak or no connectivity, making it highly reliable. 'It's drawing from Gemini's multimodal world understanding in order to do a completely new task,' Parada explained.

However, unlike the cloud-based hybrid version, the on-device model does not include built-in semantic safety tools. Google recommends that developers implement their own safety systems, including using Gemini Live APIs and connecting to low-level safety controllers. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

The launch comes shortly after Google introduced the AI Edge Gallery, an Android app that lets users run AI models offline on their smartphones. Powered by the compact Gemma 3 1B model, the app allows users to generate images, write text, and interact with AI tools directly on their devices – no internet required. Like Gemini Robotics On-Device, AI Edge Gallery focuses on privacy and low-latency performance. It uses open-source models from platforms like Hugging Face and technologies like TensorFlow Lite to ensure smooth experiences across a range of devices.