
Latest news with #CarolinaParada

New Google AI makes robots smarter without the cloud

Yahoo, 02-07-2025

Google DeepMind has introduced a powerful on-device version of its Gemini Robotics AI. This new system allows robots to complete complex tasks without relying on a cloud connection. Known as Gemini Robotics On-Device, the model brings Gemini's advanced reasoning and control capabilities directly into physical robots. It is designed for fast, reliable performance in places with poor or no internet connectivity, making it ideal for real-world, latency-sensitive environments.

Unlike its cloud-connected predecessor, this version runs entirely on the robot itself. It can understand natural language, perform fine motor tasks, and generalize from very little data, all without requiring an internet connection. According to Carolina Parada, head of robotics at Google DeepMind, the system is "small and efficient enough" to operate directly onboard. Developers can use the model in situations where connectivity is limited, without sacrificing intelligence or flexibility.

Gemini Robotics On-Device can be customized with just 50 to 100 demonstrations. The model was first trained using Google's ALOHA robot, but it has already been adapted to other platforms such as Apptronik's Apollo humanoid and the Franka FR3. For the first time, developers can fine-tune a DeepMind robotics model. Google is offering access through its trusted tester program and has released a full SDK to support experimentation and development.

Since the artificial intelligence runs directly on the robot, all data stays local. This approach offers better privacy for sensitive applications, such as in healthcare. It also allows robots to continue operating during internet outages or in isolated environments. Google sees this version as a strong fit for remote, security-sensitive, or infrastructure-poor settings. The system delivers faster response times and fewer points of failure, opening up new possibilities for robot deployment in real-world settings.

The on-device model does not include built-in semantic safety features. Google recommends that developers build safety systems into their robots using tools such as the Gemini Live API and trusted low-level controllers. The company is limiting access to select developers to better study safety risks and real-world applications. While the hybrid model still offers more overall power, this version holds its own for most common use cases and helps push robotics closer to everyday deployment.

The release of Gemini Robotics On-Device marks a turning point. Robots no longer need a constant cloud connection to be smart, adaptive, and useful. With faster performance and stronger privacy, these systems are ready to tackle real-world tasks in places where traditional robots might fail. Would you be comfortable handing off tasks to a robot that doesn't need the internet to think?
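The adaptation process described above, learning a new task from only 50 to 100 recorded demonstrations, is in the spirit of imitation learning. A minimal sketch in plain Python, with all names and data purely illustrative (this is not the Gemini Robotics SDK):

```python
# Hypothetical sketch of few-shot adaptation by imitation: a policy that
# picks the action of the closest recorded demonstration. In practice a
# model is fine-tuned on the demos; this toy version only illustrates the
# "learn from a handful of examples" idea.

def nearest_demo_policy(demos):
    """Return a policy imitating the nearest demonstration.

    demos: list of (observation, action) pairs, observations as tuples.
    """
    def policy(observation):
        def dist(demo_obs):
            return sum((a - b) ** 2 for a, b in zip(demo_obs, observation))
        obs, action = min(demos, key=lambda pair: dist(pair[0]))
        return action
    return policy

# In a real workflow, ~50-100 (observation, action) pairs would be recorded
# by teleoperating the robot; here, three toy demonstrations.
demos = [
    ((0.0, 0.0), "open_gripper"),
    ((0.5, 0.5), "move_forward"),
    ((1.0, 1.0), "close_gripper"),
]
policy = nearest_demo_policy(demos)
print(policy((0.9, 1.1)))  # nearest demo is (1.0, 1.0) -> "close_gripper"
```

A nearest-neighbor lookup generalizes far less than a fine-tuned model, but it makes the data requirement concrete: each demonstration is simply a paired observation and action.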

Google Gemini AI model brings real-time intelligence to bi-arm robots

Mint, 25-06-2025

Google DeepMind has announced the launch of a new artificial intelligence model tailored for robotics, capable of functioning entirely on a local device without requiring an active data connection. Named Gemini Robotics On-Device, the advanced model is designed to enable bi-arm robots to carry out complex tasks in real-world environments by combining vision, language and action (VLA) processing.

In a blog post, Carolina Parada, Senior Director and Head of Robotics at Google DeepMind, introduced the new model, highlighting its low-latency performance and flexibility. As it operates independently of the cloud, the model is especially suited to latency-sensitive environments and real-time applications where constant internet connectivity is not feasible.

Currently, access to the model is restricted to participants of Google's trusted tester programme. Developers can experiment with the AI system through the Gemini Robotics software development kit (SDK) and the company's MuJoCo physics simulator. Although Google has not disclosed specific details about the model's architecture or training methodology, it has outlined the model's robust capabilities. Designed for bi-arm robotic platforms, Gemini Robotics On-Device requires minimal computing resources. Remarkably, the system can adapt to new tasks using only 50 to 100 demonstrations, a feature that significantly accelerates deployment in diverse settings.

In internal trials, the model demonstrated the ability to interpret natural language commands and perform a wide array of sophisticated tasks, from folding clothes and unzipping bags to handling unfamiliar objects. It also successfully completed precision tasks such as those found in industrial belt assembly, showcasing high levels of dexterity. Though originally trained on ALOHA robotic systems, Gemini Robotics On-Device has also been adapted to work with other bi-arm robots, including Franka Emika's FR3 and Apptronik's Apollo humanoid robot. According to the American tech giant, the model exhibited consistent generalisation performance across different platforms, even when faced with out-of-distribution tasks or multi-step instructions.
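The low-latency argument above can be made concrete with a back-of-the-envelope check. A robot control loop running at, say, 50 Hz has a 20 ms budget per step; a sketch with hypothetical latency figures (not measured data) shows why spiky network round trips break real-time control while stable local inference does not:

```python
# Illustrative sketch: count control-loop steps whose inference latency
# exceeds the per-step deadline. All latency numbers are invented for
# illustration; they are not benchmarks of any real system.

def missed_deadlines(latencies_ms, budget_ms=20.0):
    """Count steps whose inference time blows the control-loop budget."""
    return sum(1 for t in latencies_ms if t > budget_ms)

on_device = [8, 9, 10, 9, 11, 8, 10, 9]        # local inference: stable, low
cloud = [45, 60, 55, 300, 70, 50, 65, 1200]    # network round trips: spiky

print(missed_deadlines(on_device))  # 0
print(missed_deadlines(cloud))      # 8
```

The tail latencies matter most: a single 1.2-second network stall mid-grasp is exactly the failure mode that pure on-device inference avoids.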

Google's Gemini AI Now Powers Robots Without Internet Access

Hans India, 25-06-2025

New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.

Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. 'It's small and efficient enough to run directly on a robot,' she told The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.'

Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. 'We're actually quite surprised at how strong this on-device model is,' Parada added, pointing to its effectiveness even with minimal training. The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones. Initially developed using Google's ALOHA robot, it has since been adapted to other robotic systems, including Apptronik's Apollo humanoid and the dual-armed Franka FR3.

Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without latency caused by cloud interaction. This is a key differentiator compared to other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing. The local processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' Parada noted, emphasizing the model's flexibility and adaptability.

However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools. Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

This announcement follows Google's recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, this app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face. Together, these launches signal Google's broader move to decentralize AI, bringing high-performance intelligence directly to user devices, be it phones or robots.
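The "low-level safety systems" pattern the article describes, a trusted controller that vets every model-issued command before it reaches the actuators, can be sketched as follows. The class, limits, and method names here are hypothetical illustrations, not part of Google's tooling:

```python
# Hypothetical sketch of a low-level safety wrapper sitting between an AI
# policy and a robot's actuators: it clamps commanded velocities and latches
# an emergency stop when contact force exceeds a threshold. Limits are
# illustrative placeholders.

class SafetyController:
    def __init__(self, max_speed=0.25, max_force_n=15.0):
        self.max_speed = max_speed      # m/s cap on any commanded velocity
        self.max_force_n = max_force_n  # contact force that triggers a stop
        self.stopped = False            # e-stop latches until manual reset

    def vet(self, velocity, sensed_force_n):
        """Return a safe velocity: clamped, or 0.0 after an e-stop."""
        if sensed_force_n > self.max_force_n:
            self.stopped = True
        if self.stopped:
            return 0.0
        return max(-self.max_speed, min(self.max_speed, velocity))

ctrl = SafetyController()
print(ctrl.vet(1.0, sensed_force_n=2.0))   # clamped to 0.25
print(ctrl.vet(0.1, sensed_force_n=30.0))  # force spike -> e-stop, 0.0
print(ctrl.vet(0.1, sensed_force_n=2.0))   # still stopped: 0.0
```

The key design point, which carries over to real deployments, is that the check is deterministic and runs outside the learned model, so a surprising policy output can never bypass it.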

Google's new Gemini AI can power robots and make them work without internet

India Today, 25-06-2025

Google DeepMind has launched a new version of its Gemini Robotics AI model that allows robots to operate entirely without internet access. Called Gemini Robotics On-Device, the system is designed to power robots in real-world settings where speed, autonomy, and privacy are crucial. This update marks a significant shift from earlier models that relied on cloud connectivity. By enabling robots to process information and make decisions on the device itself, Google hopes to make robotics more practical in offline environments such as remote areas, secure facilities, and latency-sensitive applications.

'It's small and efficient enough to run directly on a robot,' said Carolina Parada, head of robotics at Google DeepMind, in a statement to The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.' Despite being a smaller variant, the on-device version holds its own. 'We're actually quite surprised at how strong this on-device model is,' Parada said.

Gemini Robotics On-Device brings several new features to the table. The model can carry out tasks straight out of the box and learn new ones from as few as 50 to 100 demonstrations. The model was initially trained using Google's ALOHA robot, but it has since been successfully adapted for use with other robotic systems, such as Apptronik's Apollo humanoid and the dual-armed Franka FR3. Google says that it can perform detailed actions such as folding clothes or unzipping bags, all while running low-latency inference. For perspective, Tesla's humanoid robot, Optimus, can also do all those things – folding a t-shirt, boiling an egg, dancing, etc – but it needs an internet connection to send data to cloud servers. However, in the case of Gemini Robotics On-Device, a standout feature is that all data is processed locally. That makes it particularly useful for privacy-sensitive applications, such as healthcare and industrial automation, where data security is a concern.

'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' said Parada, highlighting the model's flexibility and adaptability. Since the system does not rely on the cloud, it also keeps functioning in places with weak or no connectivity, making it highly reliable. 'It's drawing from Gemini's multimodal world understanding in order to do a completely new task,' Parada said.

However, unlike the cloud-based hybrid version, the on-device model does not include built-in semantic safety tools. Google recommends that developers implement their own safety systems, including using Gemini Live APIs and connecting to low-level safety controllers. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

The launch comes shortly after Google introduced the AI Edge Gallery, an Android app that lets users run AI models offline on their smartphones. Powered by the compact Gemma 3 1B model, the app allows users to generate images, write text, and interact with AI tools directly on their devices – no internet required. Much like Gemini Robotics On-Device, AI Edge Gallery focuses on privacy and low-latency performance. It uses open-source models from platforms like Hugging Face and technologies like TensorFlow Lite to ensure smooth experiences across a range of devices.
