
AI Will Never Be Your Kid's 'Friend'
I recently found myself reflecting on what friendship actually requires when I noticed two third graders sitting in a hallway at the school I lead, working on a group project. They both wanted to write the project's title on their poster board. 'You got to last time!' one argued. 'But your handwriting is messy!' the other replied. Voices were raised. A few tears appeared.
Ten minutes later, I walked past the same two students. The poster board had a title, and the students appeared to be working purposefully. The earlier flare-up had faded into the background.
That mundane scene captured something important about human development that digital 'friends' threaten to eliminate: the productive friction of real relationships.
Virtual companions, such as the chatbots developed by Character.AI and PolyBuzz, are meant to seem like intimates, and they offer something seductive: relationships without the messiness, unpredictability, and occasional hurt feelings that characterize human interaction. PolyBuzz encourages its users to 'chat with AI friends.' Character.AI has said that its chatbots can 'hear you, understand you, and remember you.' Some chatbots have age restrictions, depending on the jurisdiction where their platforms are used—in the United States, people 14 and older can use PolyBuzz, and those 13 and up can use Character.AI. But parents can permit younger children to use the tools, and determined kids have been known to find ways to get around technical impediments.
The chatbots' appeal to kids, especially teens, is obvious. Unlike human friends, these AI companions will think all your jokes are funny. They're programmed to be endlessly patient and to validate most of what you say. For a generation already struggling with anxiety and social isolation, these digital 'relationships' can feel like a refuge.
But learning to be part of a community means making mistakes and getting feedback on those mistakes. I still remember telling a friend in seventh grade that I thought Will, the 'alpha' in our group, was full of himself. My friend, seeking to curry favor with Will, told him what I had said. I suddenly found myself outside the group. It was painful, and an important lesson in not gossiping or speaking ill of others. It was also a lesson I could not have learned from AI.
As summer begins, some parents are choosing to allow their kids to stay home and 'do nothing,' also described as 'kid rotting.' For overscheduled young people, this can be a gift. But if unstructured time means isolating from peers and living online, and turning to virtual companions over real ones, kids will be deprived of some of summer's most essential learning. Whether at camp or in classrooms, the difficulties children encounter in human relationships—the negotiations, compromises, and occasional conflicts—are essential for developing social and emotional intelligence. When kids swap these challenging exchanges for AI 'friendships' that lack any friction, they miss crucial opportunities for growth.
Much of the reporting on chatbots has focused on a range of alarming, sometimes catastrophic, cases. Character.AI is being sued by a mother who alleges that the company's chatbots led to her teenage son's suicide. (A spokesperson for Character.AI, which is fighting the lawsuit, told Reuters that the company's platform has safety measures in place to protect children, and to restrict 'conversations about self-harm.') The Wall Street Journal reported in April that in response to certain prompts, Meta's AI chatbots would engage in sexually explicit conversations with users identified as minors. Meta dismissed the Journal's use of its platform as 'manipulative and unrepresentative of how most users engage with AI companions' but did make 'multiple alterations to its products,' the Journal noted, after the paper shared its findings with the company.
These stories are distressing. Yet they may distract from a more fundamental problem: Even relatively safe AI friendships are troubling, because they cannot replace authentic human companionship.
Consider what those two third graders learned in their brief hallway squabble. They practiced reading emotional cues, experienced the discomfort of interpersonal tension, and ultimately found a way to collaborate. This kind of social problem-solving requires skills that can be developed only through repeated practice with other humans: empathy, compromise, tolerance for frustration, and the ability to repair relationships after disagreement. An AI companion might simply have concurred with both children, offering hollow affirmations without the opportunity for growth. 'Your handwriting is beautiful!' it might have said. 'I'm happy for you to go first.'
But when children become accustomed to relationships requiring no emotional labor, they might turn away from real human connections, finding them difficult and unrewarding. Why deal with a friend who sometimes argues with you when you have a digital companion who thinks everything you say is brilliant?
The friction-free dynamic is particularly concerning given what we know about adolescent brain development. Many teenagers are already prone to seeking immediate gratification and avoiding social discomfort. AI companions that provide instant validation without requiring any social investment may reinforce these tendencies precisely when young people need to be learning to do hard things.
The proliferation of AI companions reflects a broader trend toward frictionless experiences. Instacart enables people to avoid the hassles of the grocery store. Social media allows people to filter news and opinions, and to read only those views that echo their own. Resy and Toast save people the indignity of waiting for a table or having to negotiate with a host. Some would say this represents progress. But human relationships aren't products to be optimized—they're complex interactions that require practice and patience. And ultimately, they're what make life worth living.
In my school, and in schools across the country, educators have spent more time in recent years responding to disputes and supporting appropriate interactions between students. I suspect this turbulent social environment stems from isolation born of COVID and more time spent on screens. Young people lack experience with the awkward pauses of conversation, the ambiguity of social cues, and the grit required to make up with a hurt or angry friend. This was one of the factors that led us to ban phones in our high school last year—we wanted our students to experience in-person relationships and to practice finding their way into conversations even when doing so is uncomfortable.
This doesn't mean we should eliminate AI tools entirely from children's lives. Like any technology, AI has practical uses—helping students understand a complex math problem; providing targeted feedback when learning a new language. But we need to recognize that AI companions are fundamentally different from educational or creative AI applications. As AI becomes more sophisticated and ubiquitous, the temptation to retreat into frictionless digital relationships will only grow. But for children to develop into adults capable of love, friendship, and cooperation, they need to practice these skills with other humans—mess, complications, and all. Our present and future may be digital. But our humanity, and the task of teaching children to navigate an ever more complex world, depends on keeping our friendships analog.
