
Flying 'baby-faced' robot is the first of its kind — and it's freaking people out: 'What the f—k are we doing!?'
This adolescent-looking android is the first flying humanoid robot — but the internet is creeped out by how it looks.
The Italian Institute of Technology (IIT) recently shared a video updating curious viewers on the progress of its iRonCub MK3 project, but the robot's face seemed to be of special interest to the internet.
'Why does he look so freaky though,' pondered one viewer in a Reddit thread dedicated to the humanoid robot, while another wrote: 'This is very impressive, but by God, what's with that face? He is staring into my soul.'
'The technology showcased here is great, but why in the f–k does it have to look like a monster baby?' wondered another Redditor.
Some viewers who stumbled across the mechanical creation happened to be especially out-of-the-box thinkers and proposed some wild theories about its appearance.
'When the uprising comes, you'll be much less likely to shoot a cyborg with a baby face,' theorized one user. 'You'll hesitate that fraction of a second, which is all it needs…'
However human the robot may look, conspiracy theorists don't need to fret — it's not controlled by AI. Instead, it's teleoperated, or in other words, controlled by real people remotely.
The Artificial and Mechanical Intelligence research team within the IIT works almost entirely with robotic humanoid technology — and now has five different robots, according to Live Science.
This particular model is the result of two years of research, testing and development. With the jet pack, the baby robot weighs in at 154 lbs and stands about 3 feet tall.
The widely reviled airborne automaton is called the iRonCub MK3, and is based on the institute's earlier humanoid robot model, the iCub.
According to the IIT, the iRonCub MK3 is being developed with 'specific applications such as disaster response' in mind.
Typically, these robotic research efforts focus on land-based rescue and exploration, but the institute believes that implementing aerial locomotion skills will increase the utility and efficiency of any such endeavors.
'This research is radically different from traditional humanoid robotics and forced us to make a substantial leap forward with respect to the state of the art,' explained Daniele Pucci, one of the researchers on the team.
While many internet users expressed profound confusion at the robot's uncanny childlike appearance, it turns out that some of the more humanoid features the iRonCub MK3 possesses have practical purposes.
A computer modeling mockup of the flying humanoid robot.
Istituto Italiano di Tecnologia
Functional legs allow the robot to traverse terrain once it arrives via air, and realistic hand and arm capabilities let it open doors, move objects or even interact with things like switches or valves.
Currently, the robot's arms have been replaced by two jet thrusters, but as the project's development continues, it will have its functional upper limbs restored.
The iRonCub MK3 has been tested outdoors in a variety of situations, and has also undergone flight testing in a wind tunnel — another first for a robot.
Though the majority of internet users ragged on the robot's baby face, others saw its charm. 'It's Astroboy!!' one user commented enthusiastically, while another gushed: 'Actually, it's cute.'
No matter where you land on the topic of the android's appearance, don't be too mean about it — after all, this unique-looking creation could save your life someday.
Related Articles


The Verge
7 hours ago
Reddit turns 20, and it's going big on AI
Reddit has become known as the place to go for unfiltered answers from real, human users. But as the site celebrates its 20th anniversary this week, the company is increasingly thinking about how it can augment that human work with AI. The initial rollout of AI tools, like Reddit Answers, is 'going really well,' CTO Chris Slowe tells The Verge. At a time when Google and its AI tools are going to Reddit for human answers, Reddit is going to its own human answers to power AI features, hoping they're the key to letting people unlock useful information from its huge trove of posts and communities.

Reddit Answers is the first big user-facing piece of the company's AI push. Like other AI search tools, Reddit Answers will show an AI-generated summary to a query. But Reddit Answers also very prominently links to where the content came from — and as a user, you also know that the link will point you to another place on Reddit instead of some SEO-driven garbage. It also helps that the citations feel much more prominent than on tools like Google's AI Mode — a tool that news publishers have criticized as 'theft.' 'If you just want the short summary, it's there,' Slowe says. 'If you want to delve deeper, it's an easier way to get into it.'

In order for those AI answers to be useful, they need to continue to be based on real human responses. Reddit now has to be on the lookout for AI-generated comments and posts infiltrating its site. It's an important thing for the platform to stay on top of, says Slowe: Reddit's key benefit is that you can trust that a lot of what's written on it is written by humans, and AI spam could erode that. 'Trust is an essential component of the way Reddit works,' Slowe says. The platform is using AI and LLMs to help with moderation and user safety, too.

The other half of Reddit's AI equation is selling its own data, which is extremely valuable to AI giants.
The changes that forced notable apps to shut down and spurred widespread user protests (which Slowe referred to as 'some unpleasantness that happened about two years ago') were positioned by CEO Steve Huffman as more of a way to get AI companies to pony up. And two of the biggest companies have already done so, as Reddit has cut AI deals with both Google and OpenAI. But Reddit also has to be on the lookout for improper use of its data, with the most recent crackdown being its lawsuit against Anthropic. 'At the end of the day, we aren't a charity,' Slowe says. Reddit wants to provide a service that people can use for free, 'but don't build your business on our back and expect us not to try and defend ourselves.'

Still, with new AI-powered search products from Google, OpenAI, and others on the rise, Reddit risks getting buried by AI summaries. And Reddit is experimenting with AI-powered searches on its own platform. So what's the company's goal for the future? 'Keep allowing Reddit to be Reddit,' Slowe says. 'I think that the underlying model for Reddit hasn't really drastically changed since the early days.' The platform doesn't require real names (your username is a 'coveted thing' that many people keep private, Slowe says), everything is focused on text, and reputation is more important than who you are; all of these elements marked 'a drastic difference with the rest of social media.'

Reddit is also facing competition from a slightly different angle: Digg, which is making a return with the backing of founder Kevin Rose and Reddit co-founder Alexis Ohanian. Slowe didn't have much to say about it, though. 'I always love seeing innovation and I always love seeing new bends on old business models.'


CNBC
8 hours ago
At 20 years old, Reddit is defending its data and fighting AI with AI
For 20 years, Reddit has pitched itself as "the front page of the internet." AI threatens to change that. As social media has changed over the past two decades with the shift to mobile and the more recent focus on short-form video, peers like MySpace, Digg and Flickr have faded into oblivion. Reddit, meanwhile, has refused to die, chugging along and gaining an audience of over 108 million daily users who congregate in more than 100,000 subreddit communities. There, Reddit users keep it old school and leave simple text comments to one another about their favorite hobbies, pastimes and interests. Those user-generated text comments are a treasure trove that, in the age of artificial intelligence, Reddit is fighting to defend.

The emergence of AI chatbots like OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini threatens to inhale vast swaths of data from services like Reddit. As more people turn to chatbots for information they previously went to websites for, Reddit faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up.

CEO Steve Huffman explained Reddit's situation to analysts in May, saying that challenges like the one AI poses can also create opportunities. While the "search ecosystem is under heavy construction," Huffman said he's betting that the voices of Reddit's users will help it stand out amid the "annotated sterile answers from AI." Huffman doubled down on that notion last week, saying on a podcast that the reality is AI is still in its infancy. "There will always be a need, a desire for people to talk to people about stuff," Huffman said. "That is where we are going to be focused."

Huffman may be correct about Reddit's loyal user base, but in the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit.
And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."

In a sign that the company believes so deeply in the value of its data, Reddit sued Anthropic earlier this month, alleging that the AI startup "engaged in unlawful and unfair business acts" by scraping subreddits for information to improve its large language models. While book authors have taken companies like Meta and Anthropic to court alleging that their AI models break copyright law and have suffered recent losses, Reddit is basing its lawsuit on the argument of unfair business practices. Reddit's case appears to center on Anthropic's "commercial exploitation of the data which they don't own," said Randy McCarthy, head of the IP law group at Hall Estill.

Reddit is defending its platform of user-generated content, said Jason Bloom, IP litigation chair at the law firm Haynes Boone. The social media company's repository of "detailed and informative discussions" is particularly useful for "training an AI bot or an AI platform," Bloom said. As many AI researchers have noted, Reddit's large volume of moderated conversations can help make AI chatbots produce more natural-sounding responses to questions covering countless topics than, say, a university textbook.

Although Reddit has AI-related data-licensing agreements with OpenAI and Google, the company alleged in its lawsuit that Anthropic has been covertly siphoning its data without obtaining permission. Reddit alleges that Anthropic's data-hoovering actions are "interfering with Reddit's contractual relationships with Reddit's users," the legal filing said. This lack of clarity regarding what is permitted when it comes to the use of data scraping for AI is what Reddit's case and other similar lawsuits are all about, legal and AI experts said.
"Commercial use requires commercial terms," Huffman said on The Best One Yet podcast. "When you use something — content or data or some resource — in business, you pay for it." Anthropic disagrees "with Reddit's claims and will defend ourselves vigorously," a company spokesperson told CNBC.

Reddit's decision to sue over claims of unfair business practices instead of copyright infringement underscores the differences between traditional publishers and platforms like Reddit that host user-generated content, McCarthy said. Bloom said that Reddit could have a valid case against Anthropic because social media platforms have many different revenue streams. One such revenue stream is selling access to their data, Bloom said. That "enables them to sell and license that data for legitimate uses while still protecting their consumers' privacy and whatnot," Bloom said.

Reddit isn't just fending off AI. It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.

Huffman has been pitching Reddit Answers as a best-of-both-worlds tool, gluing together the simplicity of AI chatbots with Reddit's corpus of commentary. He used the feature after seeing electronic music group Justice play recently in San Francisco. "I was like, how long is this set? And Reddit could tell me it's 90 minutes 'cause somebody had already asked that question on Reddit," Huffman said on the podcast. Though investors are concerned about AI negatively impacting Reddit's user growth, Seaport Senior Internet Analyst Aaron Kessler said he agrees with Huffman's sentiment that the site's original content gives it staying power.
People who visit Reddit often search for information about things or places they may be interested in, like tennis rackets or ski resorts, Kessler said. This user data indicates "commercial intent," which means advertisers are increasingly considering Reddit as a place to run online ads, he said. "You can tell by which page you're on within Reddit what the consumer is interested in," Kessler said. "You could probably even argue there's stronger signals on Reddit versus a Facebook or Instagram, where people may just be browsing videos."


Forbes
11 hours ago
The AI Mental Health Market Is Booming — But Can The Next Wave Deliver Results?
AI tools promise scalable mental health support, but can they actually deliver real care, or just simulate it?

In April of 2025, Amanda Caswell found herself on the edge of a panic attack one midnight. With no one to call and the walls closing in, she opened ChatGPT. As she wrote in her piece for Tom's Guide, the AI chatbot calmly responded, guiding her through a series of breathing techniques and mental grounding exercises. It worked, at least in that moment.

Caswell isn't alone. Business Insider reported earlier that an increasing number of Americans are turning to AI chatbots like ChatGPT for emotional support, not as a novelty, but as a lifeline. A recent survey of Reddit users found many people report using ChatGPT and similar tools to cope with emotional stress. These stats paint a hopeful picture: AI stepping in where traditional mental health care can't. But they also raise a deeper question about whether these tools are actually helping.

A Billion-Dollar Bet On Mental Health AI

AI-powered mental health tools are everywhere — some embedded in employee assistance programs, others packaged as standalone apps or productivity companions. In the first half of 2024 alone, investors poured nearly $700 million into AI mental health startups globally, the most for any digital healthcare segment, according to Rock Health.

The demand is real. Mental health conditions like depression and anxiety cost the global economy more than $1 trillion each year in lost productivity, according to the World Health Organization. And per data from the CDC, over one in five U.S. adults under 45 reported symptoms of anxiety or depression in 2022. Yet many couldn't afford therapy or were stuck on waitlists for weeks — leaving a care gap that AI tools increasingly aim to fill. Companies like Blissbot are trying to do just that.
Founded by Sarah Wang — a former Meta and TikTok tech leader who built AI systems for core product and global mental health initiatives — Blissbot blends neuroscience, emotional resilience training and AI to deliver what she calls 'scalable healing systems.'

'Mental health is the greatest unmet need of our generation,' Wang explained. 'AI gives us the first real shot at making healing scalable, personalized and accessible to all.' She said Blissbot was designed from scratch as an AI-native platform, a contrast to existing tools that retrofit mental health models into general-purpose assistants. Internally, the company is exploring the use of quantum-inspired algorithms to optimize mental health diagnostics, though these early claims have not yet been peer-reviewed. It also employs privacy-by-design principles, giving users control over their sensitive data.

Sarah Wang, founder of Blissbot

'We've scaled commerce and content with AI,' Wang added. 'It's time we scale healing.' Blissbot isn't alone in this shift. Other companies, like Wysa, Woebot Health and Innerworld, are also integrating evidence-based psychological frameworks into their platforms. While each takes a different approach, they share the common goal of delivering meaningful mental health outcomes.

Why Outcomes Still Lag Behind

Despite the flurry of innovation, mental health experts caution that much of the AI being deployed today still isn't as effective as claimed. 'Many AI mental health tools create the illusion of support,' said Funso Richard, an information security expert with a background in psychology. 'But if they aren't adaptive, clinically grounded and offer context-aware support, they risk leaving users worse off — especially in moments of real vulnerability.'
Even when AI platforms show promise, Richard cautioned that outcomes remain elusive, noting that AI's perceived authority could mislead vulnerable users into trusting flawed advice, especially when platforms aren't transparent about their limitations or aren't overseen by licensed professionals. Wang echoed these concerns, citing a recent Journal of Medical Internet Research study that pointed out limitations in the scope and safety features of AI-powered mental health tools.

The regulatory landscape is also catching up. In early 2025, the European Union's AI Act classified mental health-related AI as 'high risk,' requiring stringent transparency and safety measures. While the U.S. has yet to implement equivalent guardrails, legal experts warn that liability questions are inevitable if systems offer therapeutic guidance without clinical validation.

For companies rolling out AI mental health benefits as part of diversity, equity, inclusion (DEI) and retention strategies, the stakes are high. If tools don't drive outcomes, they risk becoming optics-driven solutions that fail to support real well-being. However, it's not all gloom and doom. Used thoughtfully, AI tools can help free up clinicians to focus on deeper, more complex care by handling structured, day-to-day support — a hybrid model that many in the field see as both scalable and safe.

What To Ask Before Buying Into The Hype

For business leaders, the allure of AI-powered mental health tools is clear: lower costs, instant availability and a sleek, data-friendly interface. But adopting these tools without a clear framework for evaluating their impact can backfire. So what should companies be asking? Before deploying these tools, Wang explained, companies should interrogate the evidence behind them. 'Are they built on validated frameworks like cognitive behavioral therapy (CBT) or acceptance and commitment therapy (ACT), or are they simply rebranding wellness trends with an AI veneer?' she asked.
'Do the platforms measure success based on actual outcomes — like symptom reduction or long-term behavior change — or just logins? And perhaps most critically, how do these systems protect privacy, escalate crisis scenarios and adapt across different cultures, languages, and neurodiverse communities?'

Richard agreed, adding that 'there's a fine line between offering supportive tools and creating false assurances. If the system doesn't know when to escalate — or assumes cultural universality — it's not just ineffective. It's dangerous.'

Wang also emphasized that engagement shouldn't be the metric of success. 'The goal isn't constant use,' she said. 'It's building resilience strong enough that people can eventually stand on their own.' She added that the true economics of AI in mental health don't come from engagement stats. Rather, she said, the real costs show up later — in the price we pay for shallow interactions, missed signals and tools that mimic care without ever delivering it.

The Bottom Line

Back in that quiet moment when Caswell consulted ChatGPT during a panic attack, the AI didn't falter. It guided her through that moment like a human therapist would. However, it also didn't diagnose, treat, or follow up. It helped someone get through the night — and that matters. But as these tools become part of the infrastructure of care, the bar has to be higher. As Caswell noted, 'although AI can be used by therapists to seek out diagnostic or therapeutic suggestions for their patients, providers must be mindful of not revealing protected health information due to HIPAA requirements.' That caution matters because scaling empathy isn't just a UX challenge. It's a test of whether AI can truly understand — not just mimic — the emotional complexity of being human. For companies investing in the future of well-being, the question isn't just whether AI can soothe a moment of crisis, but whether it can do so responsibly, repeatedly and at scale.
'That's where the next wave of mental health innovation will be judged,' Wang said. 'Not on simulations of empathy, but on real and measurable human outcomes.'