Latest news with #Apptronik
Yahoo
11-07-2025
- Automotive
- Yahoo
Humanoids, AVs, and what's next in AI hardware at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025 hits Moscone West in San Francisco from October 27 to 29, bringing together 10,000+ startup and VC leaders for three days of bold ideas, groundbreaking tech, and future-shaping conversations. One of the most highly anticipated sessions, happening on one of the two AI Stages, will spotlight where AI hardware is heading next, featuring a live look at the robotics and autonomous systems pushing boundaries in real time.

In this session, two of the field's most visionary builders, Raquel Urtasun and Jeff Cardenas, will take the stage to explore the current and future state of AI hardware. They will unpack how it's enabling new applications across humanoid robotics and autonomous vehicles. Expect live demonstrations, deep technical insight, and a look at what it takes to move advanced simulation and embodied intelligence from concept to real-world deployment.

Jeff Cardenas is the co-founder and CEO of Apptronik, a human-centered robotics company developing some of the world's most advanced humanoid robots. Focused on designing machines that safely and intelligently work alongside people, Cardenas has guided Apptronik through key partnerships with companies like Google DeepMind, Nvidia, and Mercedes-Benz. His mission is clear: make robotics practical, capable, and commercially viable.

Raquel Urtasun, founder and CEO of Waabi, is one of the most respected voices in self-driving technology. As a decorated researcher and entrepreneur, she is building a new generation of autonomous vehicle systems grounded in simulation and AI. Her work has earned recognition from TIME, Business Insider, and the Royal Society of Canada, and her company is setting new benchmarks in scalable, intelligent AV platforms.

AI hardware is no longer just a technical foundation: it is the interface between intelligence and action. Whether it's robots that can operate in human environments or autonomous systems navigating real-world complexity, the next phase of AI will depend on hardware that can think, sense, and perform. This session will examine the progress, the challenges, and the breakthroughs that are shaping this frontier.

Catch Raquel Urtasun and Jeff Cardenas on the AI Stage at TechCrunch Disrupt 2025, happening October 27 to 29 at Moscone West in San Francisco. Register now to join more than 10,000 startup and VC leaders and save up to $675 before prices increase.


TechCrunch
11-07-2025
- Automotive
- TechCrunch
Humanoids, AVs, and what's next in AI hardware at Disrupt 2025
TechCrunch Disrupt 2025 hits Moscone West in San Francisco from October 27 to 29, bringing together 10,000+ startup and VC leaders for three days of bold ideas, groundbreaking tech, and future-shaping conversations. One of the most highly anticipated sessions, happening on one of the two AI Stages, will spotlight where AI hardware is heading next, featuring a live look at the robotics and autonomous systems pushing boundaries in real time.

In this session, two of the field's most visionary builders, Raquel Urtasun and Jeff Cardenas, will take the stage to explore the current and future state of AI hardware. They will unpack how it's enabling new applications across humanoid robotics and autonomous vehicles. Expect live demonstrations, deep technical insight, and a look at what it takes to move advanced simulation and embodied intelligence from concept to real-world deployment.

Building the future, piece by piece

Jeff Cardenas is the co-founder and CEO of Apptronik, a human-centered robotics company developing some of the world's most advanced humanoid robots. Focused on designing machines that safely and intelligently work alongside people, Cardenas has guided Apptronik through key partnerships with companies like Google DeepMind, NVIDIA, and Mercedes-Benz. His mission is clear: make robotics practical, capable, and commercially viable.

Raquel Urtasun, founder and CEO of Waabi, is one of the most respected voices in self-driving technology. As a decorated researcher and entrepreneur, she is building a new generation of autonomous vehicle systems grounded in simulation and AI. Her work has earned recognition from TIME, Business Insider, and the Royal Society of Canada, and her company is setting new benchmarks in scalable, intelligent AV platforms.

Why this session matters

AI hardware is no longer just a technical foundation: it is the interface between intelligence and action. Whether it's robots that can operate in human environments or autonomous systems navigating real-world complexity, the next phase of AI will depend on hardware that can think, sense, and perform. This session will examine the progress, the challenges, and the breakthroughs that are shaping this frontier.

Catch Raquel Urtasun and Jeff Cardenas on the AI Stage at TechCrunch Disrupt 2025, happening October 27 to 29 at Moscone West in San Francisco. Register now to join more than 10,000 startup and VC leaders and save up to $675 before prices increase.
Yahoo
02-07-2025
- Yahoo
New Google AI makes robots smarter without the cloud
Google DeepMind has introduced a powerful on-device version of its Gemini Robotics AI. This new system allows robots to complete complex tasks without relying on a cloud connection. Known as Gemini Robotics On-Device, the model brings Gemini's advanced reasoning and control capabilities directly into physical robots. It is designed for fast, reliable performance in places with poor or no internet connectivity, making it ideal for real-world, latency-sensitive environments.

Unlike its cloud-connected predecessor, this version runs entirely on the robot itself. It can understand natural language, perform fine motor tasks, and generalize from very little data, all without requiring an internet connection. According to Carolina Parada, head of robotics at Google DeepMind, the system is "small and efficient enough" to operate directly onboard. Developers can use the model in situations where connectivity is limited, without sacrificing intelligence or flexibility.

Gemini Robotics On-Device can be customized with just 50 to 100 demonstrations. The model was first trained using Google's ALOHA robot, but it has already been adapted to other platforms like Apptronik's Apollo humanoid and the Franka FR3. For the first time, developers can fine-tune a DeepMind robotics model. Google is offering access through its trusted tester program and has released a full SDK to support experimentation and development.

Since the artificial intelligence runs directly on the robot, all data stays local. This approach offers better privacy for sensitive applications, such as in healthcare. It also allows robots to continue operating during internet outages or in isolated environments. Google sees this version as a strong fit for remote, security-sensitive, or infrastructure-poor settings. The system delivers faster response times and fewer points of failure, opening up new possibilities for robot deployment in real-world settings.

The on-device model does not include built-in semantic safety features. Google recommends that developers build safety systems into their robots using tools like the Gemini Live API and trusted low-level controllers. The company is limiting access to select developers to better study safety risks and real-world applications. While the hybrid model still offers more overall power, this version holds its own for most common use cases and helps push robotics closer to everyday deployment.

The release of Gemini Robotics On-Device marks a turning point. Robots no longer need a constant cloud connection to be smart, adaptive, and useful. With faster performance and stronger privacy, these systems are ready to tackle real-world tasks in places where traditional robots might fail. Would you be comfortable handing off tasks to a robot that doesn't need the internet to think?
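The article notes that the on-device model can be specialized with only 50 to 100 demonstrations. Google's actual fine-tuning interfaces are not described here, so the snippet below is only a generic behavior-cloning sketch in PyTorch of what adapting a policy from a few dozen teleoperated demonstrations can look like; the dataset, dimensions, and network are illustrative assumptions and are not part of the Gemini Robotics SDK.

```python
# Hypothetical sketch: fitting a small policy head to a handful of logged
# demonstrations (the article cites roughly 50-100). No names here come from
# Google's SDK; everything is illustrative stand-in code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM, NUM_DEMOS, STEPS_PER_DEMO = 64, 7, 80, 200

# Stand-in for recorded teleoperation data: (observation, action) pairs.
observations = torch.randn(NUM_DEMOS * STEPS_PER_DEMO, OBS_DIM)
actions = torch.randn(NUM_DEMOS * STEPS_PER_DEMO, ACT_DIM)
loader = DataLoader(TensorDataset(observations, actions),
                    batch_size=256, shuffle=True)

# Small policy network trained from scratch here; in practice this would sit
# on top of a pretrained model's features.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(10):
    for obs, act in loader:
        pred = policy(obs)                        # predicted joint-space action
        loss = nn.functional.mse_loss(pred, act)  # behavior-cloning objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```

The point of the sketch is the data budget, not the architecture: with so few demonstrations, the specialization step is closer to a short supervised fit than to large-scale training.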


Fox News
02-07-2025
- Fox News
New Google AI makes robots smarter without the cloud
Google DeepMind has introduced a powerful on-device version of its Gemini Robotics AI. This new system allows robots to complete complex tasks without relying on a cloud connection. Known as Gemini Robotics On-Device, the model brings Gemini's advanced reasoning and control capabilities directly into physical robots. It is designed for fast, reliable performance in places with poor or no internet connectivity, making it ideal for real-world, latency-sensitive environments.

Unlike its cloud-connected predecessor, this version runs entirely on the robot itself. It can understand natural language, perform fine motor tasks, and generalize from very little data, all without requiring an internet connection. According to Carolina Parada, head of robotics at Google DeepMind, the system is "small and efficient enough" to operate directly onboard. Developers can use the model in situations where connectivity is limited, without sacrificing intelligence or flexibility.

Gemini Robotics On-Device can be customized with just 50 to 100 demonstrations. The model was first trained using Google's ALOHA robot, but it has already been adapted to other platforms like Apptronik's Apollo humanoid and the Franka FR3. For the first time, developers can fine-tune a DeepMind robotics model. Google is offering access through its trusted tester program and has released a full SDK to support experimentation and development.

Since the artificial intelligence runs directly on the robot, all data stays local. This approach offers better privacy for sensitive applications, such as in healthcare. It also allows robots to continue operating during internet outages or in isolated environments. Google sees this version as a strong fit for remote, security-sensitive, or infrastructure-poor settings. The system delivers faster response times and fewer points of failure, opening up new possibilities for robot deployment in real-world settings.

The on-device model does not include built-in semantic safety features. Google recommends that developers build safety systems into their robots using tools like the Gemini Live API and trusted low-level controllers. The company is limiting access to select developers to better study safety risks and real-world applications. While the hybrid model still offers more overall power, this version holds its own for most common use cases and helps push robotics closer to everyday deployment.

The release of Gemini Robotics On-Device marks a turning point. Robots no longer need a constant cloud connection to be smart, adaptive, and useful. With faster performance and stronger privacy, these systems are ready to tackle real-world tasks in places where traditional robots might fail. Would you be comfortable handing off tasks to a robot that doesn't need the internet to think?
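Because the on-device model ships without built-in semantic safety features, the article points out that developers are expected to layer their own safeguards between the policy and the hardware, for example via trusted low-level controllers. The sketch below is a hypothetical illustration of one such guardrail, clamping commanded joint velocities and gripper force to conservative limits before they reach the controller; the limits, names, and interfaces are assumptions and do not come from any Google API.

```python
# Hypothetical safety shim between a high-level policy and a low-level
# controller. All names and limit values are illustrative assumptions.
from dataclasses import dataclass
from typing import Sequence

@dataclass
class SafetyLimits:
    max_joint_velocity: float = 0.5   # rad/s, conservative example value
    max_gripper_force: float = 20.0   # N, conservative example value

def gate_action(joint_velocities: Sequence[float],
                gripper_force: float,
                limits: SafetyLimits) -> tuple[list[float], float]:
    """Clamp a commanded action to hard limits before it reaches the robot."""
    safe_velocities = [
        max(-limits.max_joint_velocity, min(v, limits.max_joint_velocity))
        for v in joint_velocities
    ]
    safe_force = max(0.0, min(gripper_force, limits.max_gripper_force))
    return safe_velocities, safe_force

# Example: the policy proposes an aggressive motion; the gate tones it down
# before the command is forwarded to the trusted controller.
proposed = [0.2, -1.4, 0.9, 0.1, -0.3, 2.0, 0.0]
velocities, force = gate_action(proposed, gripper_force=35.0,
                                limits=SafetyLimits())
print(velocities, force)
```

A deployed system would typically add further checks (workspace bounds, collision monitoring, emergency stop), but the layering idea is the same: the learned policy proposes, and a simple, auditable gate disposes.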


Bloomberg
30-06-2025
- Business
- Bloomberg
Humanoid Robots Need to Avoid Chinese Domination
Several US makers of general-purpose humanoid robots are testing them in real-life settings, improving them, and preparing for mass production. As this technology progresses and begins to populate factory and warehouse floors, authorities should make sure the US doesn't repeat the mistakes it made with the drone industry.

Producers of these machines, such as Agility Robotics, Apptronik, and Tesla Inc., are at roughly the stage the drone industry was at about 15 years ago. Drones were being built and tested, and people were still trying to figure out the use cases, when the industry was upended in 2013 by the Phantom 2 Vision, made by a Chinese company known as DJI. That drone came with a built-in camera, ready-to-fly ease of operation, and a low price.

While DJI's drones were sweeping the US market, it wasn't yet clear that they would become essential on the battlefield, and the alarm had not yet sounded over China's aggressive military buildup. Those concerns crystallized after Russia invaded Ukraine and China backed Russia; a pandemic originating in China swept the globe, exposing US dependence on Chinese goods; a Chinese spy balloon drifted across the US, symbolizing a nation emboldened by a massive military expansion; and a tariff war tipped China's hand that it would use the supply chain as a cudgel in areas such as rare-earth products.