
AI Will Never Be Your Kid's 'Friend'
I recently found myself reflecting on whether AI can really be a child's friend when I noticed two third graders sitting in a hallway at the school I lead, working on a group project. They both wanted to write the project's title on their poster board. 'You got to last time!' one argued. 'But your handwriting is messy!' the other replied. Voices were raised. A few tears appeared.
Ten minutes later, I walked past the same two students. The poster board had a title, and the students appeared to be working purposefully. The earlier flare-up had faded into the background.
That mundane scene captured something important about human development that digital 'friends' threaten to eliminate: the productive friction of real relationships.
Virtual companions, such as the chatbots developed by Character.AI and PolyBuzz, are meant to seem like intimates, and they offer something seductive: relationships without the messiness, unpredictability, and occasional hurt feelings that characterize human interaction. PolyBuzz encourages its users to 'chat with AI friends.' Character.AI has said that its chatbots can 'hear you, understand you, and remember you.' Some chatbots have age restrictions, depending on the jurisdiction where their platforms are used—in the United States, people 14 and older can use PolyBuzz, and those 13 and up can use Character.AI. But parents can permit younger children to use the tools, and determined kids have been known to find ways to get around technical impediments.
The chatbots' appeal to kids, especially teens, is obvious. Unlike human friends, these AI companions will think all your jokes are funny. They're programmed to be endlessly patient and to validate most of what you say. For a generation already struggling with anxiety and social isolation, these digital 'relationships' can feel like a refuge.
But learning to be part of a community means making mistakes and getting feedback on those mistakes. I still remember telling a friend in seventh grade that I thought Will, the 'alpha' in our group, was full of himself. My friend, seeking to curry favor with Will, told him what I had said. I suddenly found myself outside the group. It was painful, and an important lesson in not gossiping or speaking ill of others. It was also a lesson I could not have learned from AI.
As summer begins, some parents are choosing to allow their kids to stay home and 'do nothing,' also described as 'kid rotting.' For overscheduled young people, this can be a gift. But if unstructured time means isolating from peers, living online, and turning to virtual companions over real ones, kids will be deprived of some of summer's most essential learning. Whether at camp or in classrooms, the difficulties children encounter in human relationships—the negotiations, compromises, and occasional conflicts—are essential for developing social and emotional intelligence. When kids trade these challenging exchanges for frictionless AI 'friendships,' they miss crucial opportunities for growth.
Much of the reporting on chatbots has focused on a range of alarming, sometimes catastrophic, cases. Character.AI is being sued by a mother who alleges that the company's chatbots led to her teenage son's suicide. (A spokesperson for Character.AI, which is fighting the lawsuit, told Reuters that the company's platform has safety measures in place to protect children, and to restrict 'conversations about self-harm.') The Wall Street Journal reported in April that in response to certain prompts, Meta's AI chatbots would engage in sexually explicit conversations with users identified as minors. Meta dismissed the Journal's use of its platform as 'manipulative and unrepresentative of how most users engage with AI companions' but did make 'multiple alterations to its products,' the Journal noted, after the paper shared its findings with the company.
These stories are distressing. Yet they may distract from a more fundamental problem: Even relatively safe AI friendships are troubling, because they cannot replace authentic human companionship.
Consider what those two third graders learned in their brief hallway squabble. They practiced reading emotional cues, experienced the discomfort of interpersonal tension, and ultimately found a way to collaborate. This kind of social problem-solving requires skills that can be developed only through repeated practice with other humans: empathy, compromise, tolerance for frustration, and the ability to repair relationships after disagreement. An AI companion might simply have concurred with both children, offering hollow affirmations without the opportunity for growth. 'Your handwriting is beautiful!' it might have said. 'I'm happy for you to go first.'
But when children become accustomed to relationships requiring no emotional labor, they might turn away from real human connections, finding them difficult and unrewarding. Why deal with a friend who sometimes argues with you when you have a digital companion who thinks everything you say is brilliant?
The friction-free dynamic is particularly concerning given what we know about adolescent brain development. Many teenagers are already prone to seeking immediate gratification and avoiding social discomfort. AI companions that provide instant validation without requiring any social investment may reinforce these tendencies precisely when young people need to be learning to do hard things.
The proliferation of AI companions reflects a broader trend toward frictionless experiences. Instacart enables people to avoid the hassles of the grocery store. Social media allows people to filter news and opinions, and to read only those views that echo their own. Resy and Toast save people the indignity of waiting for a table or having to negotiate with a host. Some would say this represents progress. But human relationships aren't products to be optimized—they're complex interactions that require practice and patience. And ultimately, they're what make life worth living.
In my school, and in schools across the country, educators have spent more time in recent years responding to disputes and supporting appropriate interactions between students. I suspect this turbulent social environment stems from isolation born of COVID and more time spent on screens. Young people lack experience with the awkward pauses of conversation, the ambiguity of social cues, and the grit required to make up with a hurt or angry friend. This was one of the factors that led us to ban phones in our high school last year—we wanted our students to experience in-person relationships and to practice finding their way into conversations even when doing so is uncomfortable.
This doesn't mean we should eliminate AI tools entirely from children's lives. Like any technology, AI has practical uses—helping students understand a complex math problem, or providing targeted feedback when learning a new language. But we need to recognize that AI companions are fundamentally different from educational or creative AI applications. As AI becomes more sophisticated and ubiquitous, the temptation to retreat into frictionless digital relationships will only grow. But for children to develop into adults capable of love, friendship, and cooperation, they need to practice these skills with other humans—mess, complications, and all. Our present and future may be digital. But our humanity, and the task of teaching children to navigate an ever more complex world, depends on keeping our friendships analog.

Related Articles

Washington Post
DOGE builds AI tool to cut 50 percent of federal regulations
The U.S. DOGE Service is using a new artificial intelligence tool to slash federal regulations, with the goal of eliminating half of Washington's regulatory mandates by the first anniversary of President Donald Trump's inauguration, according to documents obtained by The Washington Post and four government officials familiar with the plans.

The tool, called the 'DOGE AI Deregulation Decision Tool,' is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE's plans. Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified 'external investment.'

The tool has already been used to eliminate more than 1,000 'regulatory sections' at the Department of Housing and Urban Development in under two weeks, according to the PowerPoint, and to write '100% of deregulations' at the Consumer Financial Protection Bureau (CFPB). Three HUD employees — as well as documents obtained by The Post — confirmed that an AI tool was recently used to review hundreds, if not more than 1,000, lines of regulations at that agency and suggest edits or deletions.

The tool was developed by engineers brought into government as part of Elon Musk's DOGE project, according to two federal officials directly familiar with DOGE's work, who, like others interviewed for this story, spoke on the condition of anonymity to describe internal deliberations they were not authorized to discuss publicly.

Conservatives have long argued that the federal government issues far too many regulations that constrain economic growth and hurt the private sector. Many liberals have emphasized that there are reasons federal regulations are in place, such as protecting the environment and ensuring food safety.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that 'all options are being explored' to achieve the president's goal of deregulating government. Fields noted that 'no single plan has been approved or green-lit,' cautioning that the work is 'in its early stages and is being conducted in a creative way in consultation with the White House.' Fields added: 'The DOGE experts creating these plans are the best and brightest in the business and are embarking on a never-before-attempted transformation of government systems and operations to enhance efficiency and effectiveness.'

One former member of DOGE, which stands for Department of Government Efficiency, wrote in a text message that the team did everything it could to come up with legal and technological solutions to repeal as many regulations as possible within Trump's term. 'Creative deployment of artificial intelligence to advance the president's regulatory agenda is one logical strategy to make significant progress in that finite amount of time,' wrote James Burnham, who served as chief attorney for DOGE and is now managing partner at King Street Legal.

The proposed use of AI to accomplish swift, massive deregulation expands upon the Trump administration's work to embed AI across the government — using it for everything from fighting wars to reviewing taxes.
And it dovetails with the administration's aim to unwind regulations government-wide, even without AI. But it's unclear whether a new, untested technology can reliably analyze federal regulations, which are typically put in place for a reason.

On Jan. 31, Trump issued an executive order to 'unleash prosperity through deregulation,' which required agencies to repeal 10 rules for every new rule issued. Since then, some departments have engaged in what almost appears to be a competition to cut. In May, the Transportation Department declared it had deleted 52 regulations and more than 73,000 words from the Federal Register. This month, the Labor Department announced plans to nix more than 60 regulations.

Still, Republicans have grown frustrated by the relatively slow pace of deregulatory actions. During the first six months of Trump's first term, his administration cut costs by about $550 million and paperwork hours by 566,000, according to the American Action Forum, a center-right think tank that tracks regulations. Through July of this year, the Trump administration has achieved nearly all its cost reductions by repealing one rule regarding what businesses must report about their ownership ties. Without that, the Trump administration would have increased regulatory costs by $1.1 billion and paperwork hours by 3.3 million, according to the think tank.

'They're way behind where they were in 2017 on the numbers, no question about it,' said Doug Holtz-Eakin, president of the American Action Forum and former director of the nonpartisan Congressional Budget Office. 'I thought this was going to be something they crushed because they did so in 2017. I've been baffled by this.'

The AI tool is intended to massively accelerate the deregulation process, with every federal agency able to develop a list of regulations to eliminate in less than four weeks, according to the PowerPoint. The agencies are supposed to finish their lists by Sept. 1, and this month, DOGE is supposed to start training staff at agencies on how to use the AI tool, the PowerPoint states.

While DOGE had pushed earlier this year to take a larger role in the deregulatory effort, the Musk-led team was frequently rebuffed by agency employees who worried about outsourcing decisions and their authorities, according to three people who have participated in deregulatory conversations at the White House and the agency level who spoke on the condition of anonymity to share private conversations. Federal officials also questioned whether DOGE had the subject matter expertise to comb through highly technical regulations and find appropriate targets for cuts, the people said.

As DOGE's influence waned following Musk's departure, the administration has remained focused on Trump's deregulatory order, the people said. White House staff are also using internal trackers to monitor how quickly agencies are paring regulations, while leaders at every major agency are meeting regularly to discuss how quickly they can meet Trump's ambitions and which cuts 'count' toward the president's order, according to the people.

In some cases, DOGE's campaign to fire federal workers and dramatically shrink the federal workforce has hampered the deregulatory effort, the three people said. 'The White House wants us higher on the leader board,' said one of the three people. 'But you have to have staff and time to write the deregulatory notices, and we don't. That's a big reason for the holdup.'
Trump officials have tried to use AI to roll back regulations before. At the Department of Health and Human Services, a 2020 'Regulatory Clean Up Initiative' drew on an AI tool to identify and remove archaic language, defunct federal provisions and outdated terms from federal rules.

Trump has pushed the limits of the Administrative Procedure Act, which governs repealing federal regulations, most notably through an executive order ending a rule that restricted the water flow of showerheads. It is unclear if courts will allow the administration to void rules. Meanwhile, private-sector companies tend to be uncomfortable ignoring a rule that was illegally repealed, said Nicholas Bagley, an administrative law expert at the University of Michigan. 'There's been some flashy sideshow efforts to avoid the legal strictures, but in general, they don't stick,' Bagley said of Trump's unilateral efforts to cut regulations.

DOGE officials may be concerned about the legality of the AI tool. One page of the slideshow says four people identified as 'DOGE lawyers' — Burnham, Austin Raynor, Jacob Altik and Ashley Boizelle — each 'vetted and endorsed' the AI deregulation tool. Raynor, Altik and Boizelle could not be reached for comment.

Federal regulations, as they stand now, can be divided into three categories, the PowerPoint says: 50 percent are not required by law, 38 percent are statutorily mandated and 12 percent are 'Not Required but Agency Needs.' By ending the rules that are neither required by law nor needed for agency operations, the PowerPoint states, the government could recover $3.3 trillion a year. But the PowerPoint also suggests it would take 3.6 million 'man-hours' to nix 100,000 regulations under the current system. It is not clear how the PowerPoint's authors arrived at these figures.

That's where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent a grand total of 36 hours gutting half of all federal regulations, the PowerPoint claims.

The PowerPoint lists two case studies as examples of how well its AI tool can work, detailing recent efforts to slash regulations at HUD and CFPB. Asked about the AI-driven regulation slashing, a HUD spokesperson wrote in a statement that the agency is having 'ongoing discussions' to consider how to make government more efficient. 'We are not disclosing specifics about how many regulations are being examined or where we are at in the broader process,' the spokesperson said, adding, 'the process is far from final.' The spokesperson continued: 'The intent of the developments is not to replace the judgement, discretion and expertise of staff but be additive to the process.' CFPB did not respond to questions. The Post was not able to independently confirm the use of AI at the agency.

At HUD, efforts to use AI to kill regulations began three months ago, according to three employees familiar with the matter and emails obtained by The Post. A message sent to some of the agency's Public and Indian Housing staff on April 18 announced a 'DOGE team' would be 'learning how AI will be able to analyze all PIH regulations looking for and flagging discrepancies between them and the underlying statute.' 'This is a major effort,' the email continued.
'We are working with the lawyers to simplify the [Administrative Procedure Act] process … use AI for drafting, and use AI for complying notices in the future.' The overall goal, the email noted, was to deploy AI to reduce the time staff had to spend on deregulation.

Another document, signed 'HUD DOGE Team' and sent to staff, detailed how DOGE team members wanted federal staffers to engage the AI tool. Staffers were supposed to look over the tool's recommendations for proposed regulatory eliminations and mark whether they agreed, disagreed or believed deletions should go further.

One HUD employee who participated in this process said the AI tool made several errors. It delivered an analysis saying those who drafted various agency regulations had misunderstood the law in several places, said the employee, who spoke on the condition of anonymity to reveal internal conversations. But the AI tool was sometimes wrong, the employee said. 'There were a couple places where the AI said the language was outside of the statute,' the employee said, 'and actually, no — the AI read the language wrong, and it is actually correct.'

After its tryout at HUD, the AI deregulation tool is supposed to deploy across the rest of government in coming months, according to the DOGE PowerPoint. Over the next five months, agencies will work with the AI tool to identify regulations to kill, respond to public comments about the proposed deletions and submit formal deregulation proposals, the PowerPoint says. The goal is to wrap everything up and 'Relaunch America on Jan. 20, 2026,' the PowerPoint states.


Forbes
What Ancient Farmers Can Teach The Modern Boardroom About AI Strategy
[Illustration: an ancient farmer and a modern-day AI user]

Every day in 2025 brings a new AI milestone. From generative tools rewriting code to AI copilots augmenting medical diagnoses, it's easy to feel we've entered unprecedented terrain. But we've been here before. Just ask our ancient ancestors—who faced the first great disruption when they transitioned from hunting and gathering to agriculture. That shift didn't just change how we ate. It transformed how we lived, worked, governed, and grew. The parallels to today's AI revolution are striking—and instructive. If history is any guide, organizations that manage technological transitions through thoughtful governance, strategic investment in people, and adaptive policies will not only survive but thrive.

From Stone Tools to Silicon Chips: How Societies Adapt to Disruption

Roughly 12,000 years ago, human communities in the Fertile Crescent began cultivating crops and domesticating animals. Archaeological sites like Abu Hureyra in modern-day Syria reveal this wasn't a sudden break from the past but a gradual, iterative process of learning, testing, and integrating new tools into existing ways of life. This evolutionary—not revolutionary—mindset offers a key lesson: Transformation doesn't mean total disruption. It means layering innovation onto what works and building systems that scale over time. The AI transition requires a similar approach: pilot programs, feedback loops, reskilling, and workforce support must evolve hand-in-hand with technological integration.

Strategy: AI Integration Requires Incremental Adaptation

Just as early agricultural societies developed irrigation systems and record-keeping via clay tablets, modern organizations are building the infrastructure to support AI-enabled workflows. But success hinges not on speed, but on sequencing. McKinsey reports that although generative AI could add up to $4.4 trillion in global productivity annually, only 21% of companies had adopted AI in more than one business function as of 2023. This isn't a failure—it's a sign that organizations are proceeding thoughtfully. Boards and CFOs should be tracking AI ROI, not only in terms of cost savings but in how it reshapes value creation. That means budgeting for phased implementation and workforce transformation simultaneously.

Policy: Redesigning Governance for Human-AI Collaboration

Early civilizations didn't just invent tools—they also wrote laws. The Code of Hammurabi, dating to 1750 BCE, is one of the first known legal frameworks for managing agricultural property, labor, and dispute resolution. Today's version? AI ethics policies, data governance protocols, and algorithmic accountability. Regulators are already acting. The EU's AI Act, passed in 2024, imposes risk-based requirements for transparency, bias mitigation, and human oversight. Meanwhile, the European Sustainability Reporting Standards (ESRS), specifically S1 and S2, developed under the guidance of EFRAG, now require companies to report on the impact of AI on workforce strategy and human capital governance. Boards must go beyond compliance and ask hard questions, because the fiduciary risks are real. A poorly governed algorithm can expose companies to legal liability, reputational damage, and talent loss. Treat AI governance as a board-level issue, not just a technical one.

Programs: Cultivating Workforce Resilience and Knowledge Transfer

Farming didn't eliminate the need for hunting. Early societies maintained both capabilities as insurance against failure.
Similarly, organizations shouldn't rush to fully automate without investing in human capital. This means reskilling must become the norm. According to a PwC study on workforce hopes and fears, 40% of workers will need up to six months of training to remain relevant in the AI economy. High-performing companies are already taking the lead. AT&T's collaboration with Udacity to create nano-degree programs reduced reskilling time by 35% and boosted internal mobility. Boards and CHROs should champion programs that build these capabilities and preserve institutional knowledge. If early societies could preserve astronomy, crop rotation, and animal husbandry without Google, we can certainly codify AI literacy and strategic workforce knowledge today.

Measuring What Matters: Human Capital as a Source of Value

The agricultural revolution spurred population growth, urbanization, and eventually, modern economies. Today's AI revolution will reshape how we measure value—especially in intangible assets like skills, collaboration, and creativity. Research by Alex Edmans shows that companies investing in employee well-being significantly outperform their peers in long-term shareholder returns. This is why ISO 30414 and SEC human capital disclosure expectations are gaining traction. Human Capital ROI (HCROI) should be tracked with the same rigor as Return on Equity or Investment. Boards should demand metrics that show not just AI adoption but how it enhances organizational resilience and workforce productivity.

Final Thought: Change Is Inevitable. Human Judgment Is Indispensable.

AI may feel new, but the pattern is ancient. Transformation is never just about tools—it's about how we govern change, support people, and sustain growth. The societies that thrived in the wake of agriculture weren't the most technologically advanced. They were the ones that integrated new tools into stable, human-centered systems. As we stand at the threshold of another era, it's time to remember what worked the first time: governance, adaptation, and investment in human capability. Let's not forget—we've been here before.

Postscript: With appreciation to Stela Lupushor, whose co-authored blog post with me on this topic served as inspiration for this column.

Wall Street Journal
AI Founder Pays $38.2 Million for Beachfront Miami-Area Penthouse
This spring, 42-year-old tech entrepreneur Daniel Nadler gave up his Miami rental apartment and moved into a beachfront hotel. The goal was to streamline his life and focus on building OpenEvidence, his Google-backed medical AI company, which is valued at $3.5 billion. 'I didn't want the overhead of dealing with houses and all of the stuff that comes with houses,' he said. 'If I could wake up at 4 a.m. and just order room service—this is so perfect.'