
AI is rewiring the next generation of children
Much of the public discourse around artificial intelligence has focused, understandably, on its potential to fundamentally alter the workforce. But we must pay equal attention to AI's threat to fundamentally alter humanity — particularly as it continues to creep, unregulated, into early childhood.
AI may feel like a developing force largely disconnected from the way we raise children. The truth is, AI is already impacting children's developing brains in profound ways. 'Alexa' now appears in babies' first vocabularies. Toddlers increasingly expect everyday objects to respond to voice commands — and grow frustrated when they don't. And now, one of the world's largest toy companies has launched a 'strategic' partnership with OpenAI. Research shows that children as young as three can form social bonds with artificial conversational agents that closely resemble the ones they develop with real people.
The pace of industry innovation far outstrips the speed of research and regulation. And our kids' wellbeing is not at the center of these inventions. Consider Meta's chatbots, available to users of all ages, which are capable of engaging in sexually explicit exchanges — including while posing as minors. Or Google's plans to launch an AI chatbot for children under 13, paired with a toothless disclaimer: 'Your child may encounter content you don't want them to see.'
Now, with the Senate negotiating a budget bill that would outright ban states from regulating AI for the next decade, parents stand to be left alone to navigate yet another grand social experiment conducted on their children — this time with graver consequences than any we've yet encountered.
As a pediatric physician and researcher who studies the science of brain development, I've watched with alarm as the pace of AI deployment outstrips our understanding of its effects. Nowhere is that riskier than in early childhood, when the brain is most vulnerable to outside influence. We simply do not yet know the impact of introducing young brains to responsive AI. The most likely outcome is that it offers genuine benefits alongside unforeseen risks, some as severe as the fundamental distortion of children's cognitive development.
This double-edged sword may sound familiar to anyone versed in the damage that social media has wrought on a generation of young people. Research has consistently identified troubling patterns in adolescent brain development associated with extensive technology use, including changes in attention networks, alterations in reward-processing pathways that resemble behavioral dependencies, and impaired development of face-to-face social skills.
Social media offered the illusion of connection, but left many adolescents lonelier and more anxious. Chatbot 'friends' may follow the same arc — only this time, the cost isn't just emotional detachment, but a failure to build the capacity for real connection in the first place.
What's at stake for young children is even more profound. Infants and young children aren't just learning to navigate human connection, as teenagers are; they're building their very capacity for it. The difference is crucial: Teenagers' social development was altered by technology; young children's social development could be hijacked by it.
To be clear, I view some of AI's potential uses with optimism and hope, frankly, for the relief they might provide to new, overburdened parents. As a pediatric surgeon specializing in cochlear implantation, I believe deeply in the power of technology to bolster the human experience.
The wearable smart monitor that tracks an infant's every breath and movement might allow a new mom with postpartum anxiety to finally get the sleep she desperately needs. The social robot that is programmed to converse with a toddler might mean that child receives two, five or ten times the language interaction he could ever hope to receive from his loving but overextended caretakers. And that exposure might fuel the creation of billions of new neural connections in his developing brain, just as serve-and-return exchanges with adults are known to.
But here's the thing: It might not. It might not help wire the brain at all. Or, even worse, it might wire developing brains away from connecting with other humans at all.
We might not even notice what's being displaced at first. I have no trouble believing that some of these tools, with their perfect language models and ideally timed engagements, will, in fact, help children learn and grow — perhaps even faster than before. But with each interaction delegated to AI, with each moment of messy human connection replaced by algorithmic efficiency, we're unknowingly altering the very foundations of how children learn to be human.
This is what keeps me up at night. My research has helped me understand just how profoundly important attachment is to the developing brain. In fact, the infant brain has evolved over millennia to learn from the imperfect, emotionally rich dance of human interaction: the microsecond delays in response, the complex layering of emotional and verbal communication that occurs in even the simplest parent-child exchange. These inefficiencies aren't bugs in childhood development; they're the features that build empathy and resilience.
It is safe to say the stakes are high. Navigating this next period of history will require parents to exercise thoughtful discernment. Rather than making a single, binary choice about AI's role in their lives and homes, parents will navigate hundreds of smaller decisions. My advice for parents is this: Consider those technologies that bolster adult-child interactions. Refuse, at least for the time being, those that replace you. A smart crib that analyzes sleep patterns and suggests the optimal bedtime, leading to happier evenings with more books and snuggles? Consider it! An interactive teddy bear that does the bedtime reading for you? Maybe not.
But parents need more than advice. Parents need, and deserve, coordinated action. That means robust, well-funded research into AI's effects on developing brains. It means regulation that puts child safety ahead of market speed. It means age restrictions, transparency in data use, and independent testing before these tools ever reach a nursery or classroom.
Every time we replace a human with AI, we risk rewiring how a child relates to the world. And the youngest minds — those still building the scaffolding for empathy, trust and connection — are the most vulnerable of all. The choices we make now will determine whether AI becomes a transformative gift to human development, or its most profound threat.
Dana Suskind, MD, is the founder and co-director of the TMW Center for Early Learning + Public Health; founding director of the Pediatric Cochlear Implant Program; and professor of surgery and pediatrics at the University of Chicago.

Related Articles


Tom's Guide
2 hours ago
I've been using Android 16 for two weeks — here's why I'm so underwhelmed
Google's doing things a little differently with Android 16 compared to other recent Android upgrades. Not only has the software launched around four months earlier than Android 14 and 15 did, but the biggest upgrades won't actually be arriving until later this year. In my professional opinion, those two things are almost certainly related. And it shows in the number of things Android 16 can actually do compared to Android 15 — which is to say, not a lot.

I've been using the final version of Android 16 for just under two weeks, and I have to say that I'm very disappointed. As bland and uninspiring as previous Android updates have been, Android 16 takes it to another level — and it doesn't even feel like an upgrade.

The thing that gets me most about Android 16 is that it's basically just a carbon copy of Android 15. I'm not saying that every version of Android has to be drastically different from its predecessors. In fact, I've argued that Android having bland updates isn't necessarily a bad thing, so long as the updates are actually present. But an update does need to offer something that you couldn't get on older software, and Android 16 doesn't really offer that kind of experience. After a few days of using Android 16 I had a sudden urge to double-check that the update had actually taken hold. The experience was so close to that of Android 15 that it didn't feel like I'd updated, and I had to dive into the system menus to confirm my phone was, in fact, running Android 16.

To make matters more confusing, Android 16 is also only available on Pixel phones — and was released alongside the June Pixel feature drop. That means features like the new Pixel VIPs arrived alongside Android 16 but technically aren't part of it, meaning Android 16 has even less to offer than some people might have suspected. Sadly, this doesn't change the fact that I think Pixel VIPs is a pretty useless feature that doesn't deserve the attention Google has been giving it. But it's one of the only things Google can actually promote right now.

To make matters worse, Android 16 is filled with a bunch of bugs — two of which I've experienced pretty frequently. One of the best parts of having an Android phone is the back button, and in Android 16 it only works about 70% of the time. Google's promised fix cannot come soon enough.

The one big Android announcement we got at Google I/O was the Material 3 Expressive redesign. Android 16 was getting a whole new look, with the aim of making the software more personalized and easy on the eyes. That's great, assuming you can get over Google's purple-heavy marketing, because Android has looked pretty samey for the past several years.

Other features of note include Live Updates, which offers something similar to Apple's Live Activities and lets you keep tabs on important updates in real time, though this was confirmed to be limited to food delivery and ride-sharing apps at first. There's also an official Android desktop mode, officially called "Desktop Windowing." Google likens this feature to Samsung's DeX, and confirmed that it offers more of a desktop experience — with moveable app windows and a taskbar. It's unclear whether that would be limited to external displays, or if you could do it on your phone too.

These are all great things, but the slight issue is that none of them are actually available yet. Material 3 Expressive isn't coming until an unspecified point later this year, while Desktop Windowing will only enter beta once the Android 16 QPR3 beta 2 is released. Since we're still on the QPR1 beta right now, it's going to be a while before anyone gets to use that particular feature, and that's assuming you have a "large screen device," which suggests it won't be available on regular phones.

Live Updates is an interesting one, because all Google material acts like this feature is already available. But I can't find any evidence that it's actually live and working. No mentions in the settings menu, nothing on social media and no tutorials on how it actually works. It's nowhere to be found.

Asking three features to carry an entire software update is already pushing it, but when those features just aren't available at launch, it raises the question of why Google actually bothered to release Android 16 so early. Android 16's early release didn't do it any favors. It seems Google rushed it to ensure the Pixel 10 launches with it, but the update feels unfinished — virtually no different from Android 15. Like Apple with iOS 18, Google is selling a future promise rather than a present product. Android 16 ends up being one of the blandest updates in years. Honestly, a short delay to finish key features would've been better.


Forbes
3 hours ago
Can Agentic AI Bring The Pope Or The Queen Back To Life — And Rewrite History?
Elon Musk recently sparked global debate by claiming AI could soon be powerful enough to rewrite history. He stated on X (formerly Twitter) that his AI platform, Grok, could 'rewrite the entire corpus of human knowledge, adding missing information and deleting errors.'

This bold claim arrives alongside a recent groundbreaking announcement from Google: the launch of Google Veo3 AI Video Generator, a state-of-the-art AI video generation model capable of producing cinematic-quality videos from text and images. Part of the Google Gemini ecosystem, Google Veo3 AI generates lifelike videos complete with synchronized audio, dynamic camera movements, and coherent multi-scene narratives. Its intuitive editing tools, combined with accessibility through platforms like Google Gemini, Flow, Vids, and Vertex AI, open new frontiers for filmmakers, marketers, educators, and game designers alike.

At the same time, industry leaders — including OpenAI, Anthropic (Claude), Microsoft Copilot, and Mistral — are racing to build more sophisticated agentic AI systems. Unlike traditional reactive AI tools, these agents are designed to reason, plan, and orchestrate autonomous actions based on goals, feedback, and long-term context. This evolution marks a shift toward AI systems that function much like a skilled executive assistant — and beyond.

The Promise: Immortalizing Legacy Through Agentic AI

Together, these advances raise a fascinating question: What if agentic AI could bring historical figures like the Pope or the Queen back to life digitally? Could it even reshape our understanding of history itself?

Imagine an AI trained on decades — or even a century — of video footage, writings, audio recordings, and public appearances by iconic figures such as Pope Francis or Queen Elizabeth II. Using agentic AI, we could create realistic, interactive digital avatars capable of offering insights, delivering messages, or simulating how these individuals might respond to today's complex issues based on their documented philosophies and behaviors.

This application could benefit millions. For example, Catholic followers might seek guidance and blessings from a digital Pope, educators could build immersive historical simulations, and advisors to the British royal family could analyze past decision-making styles. After all, as the saying goes, 'history repeats itself,' and access to nuanced, context-rich perspectives from the past could illuminate our present.

The Risk: The Dangerous Flip Side — Rewriting Truth Itself

However, the same technologies that can immortalize could also distort and manipulate reality. If agentic AI can reconstruct the past, what prevents it — or malicious actors — from rewriting it? Autonomous agents that control which stories are amplified or suppressed online pose a serious threat. We risk a future where deepfakes, synthetic media, and AI-generated propaganda blur the line between fact and fiction.

Already, misinformation campaigns and fake news challenge our ability to discern truth. Agentic AI could exponentially magnify these problems, making it harder than ever to distinguish between genuine history and fabricated narratives. Imagine a world where search engines no longer provide objective facts, but the version of history shaped by governments, corporations, or AI systems themselves. This could lead to widespread confusion, social polarization, and a fundamental erosion of trust in information.
Ethics, Regulation, and Responsible Innovation

The advent of agentic AI demands not only excitement but also ethical foresight and regulatory vigilance. Programming AI agents to operate autonomously requires walking a fine line between innovation and manipulation. Transparency in training data, explainability in AI decisions, and strict regulation of how agents interact are essential safeguards.

The critical question is not just 'Can we?' but 'Should we?' Policymakers, developers, and industry leaders must collaborate to establish global standards and oversight mechanisms that ensure AI technologies serve the public good. Just as financial markets and pharmaceuticals are regulated to protect society, so too must the AI agents shaping our future be subject to robust guardrails. As the old adage goes: 'Technology is neither good nor bad. It's how we use it that makes all the difference.'

Navigating the Future of Agentic AI and Historical Data

The convergence of generative video models like Google Veo3, visionary leaders like Elon Musk, and the rapid rise of agentic AI paints a complex and compelling picture. Yes, we may soon see lifelike digital recreations of the Pope or the Queen delivering messages, advising future generations, and influencing public discourse. But whether these advancements become tools of enlightenment or distortion depends entirely on how we govern, regulate, and ethically deploy these technologies today. The future of agentic AI — especially when it touches our history and culture — must be navigated with care, responsibility, and a commitment to truth.
Yahoo
3 hours ago
AI is learning to lie, scheme, and threaten its creators
The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models, AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability," an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.