
Teens say they are turning to AI for advice, friendship and 'to get out of thinking'
NEW YORK, July 23, (AP): No question is too small when Kayla Chege, a high school student in Kansas, is using artificial intelligence. The 15-year-old asks ChatGPT for guidance on back-to-school shopping, makeup colors, low-calorie choices at Smoothie King, plus ideas for her Sweet 16 and her younger sister's birthday party.
The sophomore honors student makes a point not to have chatbots do her homework and tries to limit her interactions to mundane questions. But in interviews with The Associated Press and a new study, teenagers say they are increasingly interacting with AI as if it were a companion, capable of providing advice and friendship.
"Everyone uses AI for everything now. It's really taking over," said Chege, who wonders how AI tools will affect her generation. "I think kids use AI to get out of thinking."
For the past couple of years, concerns about cheating at school have dominated the conversation around kids and AI. But artificial intelligence is playing a much larger role in many of their lives. AI, teens say, has become a go-to source for personal advice, emotional support, everyday decision-making and problem-solving.
More than 70% of teens have used AI companions and half use them regularly, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
The study defines AI companions as platforms designed to serve as "digital friends," like Character.AI or Replika, which can be customized with specific traits or personalities and can offer emotional support, companionship and conversations that can feel human-like. But popular sites like ChatGPT and Claude, which mainly answer questions, are being used in the same way, the researchers say.
As the technology rapidly gets more sophisticated, teenagers and experts worry about AI's potential to redefine human relationships and exacerbate crises of loneliness and youth mental health.
"AI is always available. It never gets bored with you. It's never judgmental," says Ganesh Nair, an 18-year-old in Arkansas. "When you're talking to AI, you are always right. You're always interesting. You are always emotionally justified."
All that used to be appealing, but as Nair heads to college this fall, he wants to step back from using AI. Nair got spooked after a high school friend who relied on an "AI companion" for heart-to-heart conversations with his girlfriend later had the chatbot write the breakup text ending his two-year relationship.
"That felt a little bit dystopian, that a computer generated the end to a real relationship," said Nair. "It's almost like we are allowing computers to replace our relationships with people."
In the Common Sense Media survey, 31% of teens said their conversations with AI companions were "as satisfying or more satisfying" than talking with real friends. Even though half of teens said they distrust AI's advice, 33% had discussed serious or important issues with AI instead of real people.
Those findings are worrisome, says Michael Robb, the study's lead author and head researcher at Common Sense, and should send a warning to parents, teachers, and policymakers. The now-booming and largely unregulated AI industry is becoming as integrated with adolescence as smartphones and social media are.
"It's eye-opening," said Robb. "When we set out to do this survey, we had no understanding of how many kids are actually using AI companions." The study polled more than 1,000 teens nationwide in April and May.
Adolescence is a critical time for developing identity, social skills, and independence, Robb said, and AI companions should complement - not replace - real-world interactions.
"If teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else's perspective, they are not going to be adequately prepared in the real world," he said.
The nonprofit analyzed several popular AI companions in a "risk assessment," finding ineffective age restrictions and that the platforms can produce sexual material, give dangerous advice, and offer harmful content. The group recommends that minors not use AI companions.
Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially in their creativity, critical thinking, and social skills. The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot.
"Parents really have no idea this is happening," said Eva Telzer, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill. "All of us are struck by how quickly this blew up." Telzer is leading multiple studies on youth and AI, a new research area with limited data.
Telzer's research has found that children as young as 8 are using generative AI, and also found that teens are using AI to explore their sexuality and for companionship. In focus groups, Telzer found that one of the top apps teens frequent is SpicyChat AI, a free role-playing app intended for adults.
Many teens also say they use chatbots to write emails or messages to strike the right tone in sensitive situations.
"One of the concerns that comes up is that they no longer have trust in themselves to make a decision," said Telzer. "They need feedback from AI before feeling like they can check off the box that an idea is OK or not."
Arkansas teen Bruce Perry, 17, says he relates to that and relies on AI tools to craft outlines and proofread essays for his English class.
"If you tell me to plan out an essay, I would think of going to ChatGPT before getting out a pencil," Perry said. He uses AI daily and has asked chatbots for advice in social situations, to help him decide what to wear and to write emails to teachers, saying AI articulates his thoughts faster.
Perry says he feels fortunate that AI companions were not around when he was younger.
"I'm worried that kids could get lost in this," Perry said. "I could see a kid that grows up with AI not seeing a reason to go to the park or try to make a friend."
Other teens agree, saying the issues with AI and its effect on children's mental health are different from those of social media.
"Social media complemented the need people have to be seen, to be known, to meet new people," Nair said. "I think AI complements another need that runs a lot deeper - our need for attachment and our need to feel emotions. It feeds off of that."
