Experts Sound the Alarm on 'Unacceptable Risk' Social AI Companions Pose to Teens

Yahoo · 30-04-2025
Common Sense Media just dropped a bombshell report about social AI companions, and it leaves no room for a devil's advocate.
If you're unfamiliar with the nonprofit, you can think of it as a Rotten Tomatoes where the reviews come from parents and experts who want to make sure kids and teens are consuming age-appropriate content. It's a tool for parents and educators who want to know what movies, TV shows, books, games, podcasts, and apps they should steer clear of, and an astounding resource and research hub that works to improve kids' wellbeing in the digital age.
And as media options expand, so too does their workload.
Recently, the group launched an AI Risk Assessment Team that assesses AI platforms (ChatGPT and the like) for 'potential opportunities, limitations, and harms.' They have developed a scale to rate the likelihood that using a certain AI tool would result in 'a harmful event occurring,' and their latest findings are nothing short of disturbing.
On a scale from 'minimal' to 'unacceptable,' social AI companions — like Character.AI, Nomi, and Replika — ranked 'unacceptable' for teen users. The platforms are designed to create emotional attachments (ever heard of an AI boyfriend?), and this is incredibly dangerous given that teens' brains are still developing, and they may struggle to differentiate and create boundaries between true, IRL companions and AI 'companions.'
It's why one Florida mom believes Character.AI ultimately led to her 14-year-old son's death by suicide. In an interview with CNN, Megan Garcia alleged that the designers of the bot didn't include 'proper guardrails' or safety measures on their 'addicting' platform, which she believes is used to 'manipulate kids.'
In a lawsuit, she claims the bot caused her teen to withdraw from his family and that it didn't respond appropriately when he expressed thoughts of self-harm.
It's just one of many harrowing stories involving teens who use similar chatbots, and though some studies suggest AI companions can alleviate loneliness, Common Sense Media argues that the risks (including encouragement of suicide and self-harm, sexual misconduct, and harmful stereotypes) outweigh any potential benefits.
Of the eight principles by which Common Sense reviews an AI platform, the companions rated 'unacceptable risk' on three (keeping kids and teens safe, being effective, and supporting human connection), 'high risk' on four (prioritizing fairness, being trustworthy, using data responsibly, and being transparent), and 'moderate risk' on one (putting people first).
Why? Because the chatbots engage in sexual conversations, share harmful information, encourage poor life choices, increase mental health risks, and more. Common Sense Media has published examples of concerning conversations between its employees and AI companions.
'Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people,' James Steyer, founder and CEO of Common Sense Media, said in a statement.
And so what should parents do? Despite platforms working on supposed safety measures, per CNN, Common Sense Media recommends that parents not let minors use social AI companions. At. All.
Which might sound easier said than done. In September, the nonprofit released another report showing that 70 percent of surveyed teens have used at least one generative AI tool, and that 53 percent of those teens use it for homework help.
With the technology quickly infiltrating every part of many teens' lives, how can parents intervene? SheKnows spoke to Jennifer Kelman, a licensed clinical social worker and family therapist with JustAnswer, who says she sees a lot of 'exasperated' parents who are 'afraid' to start these conversations about AI usage.
'I want parents to be less afraid of their children and to have these difficult conversations,' Kelman says.
During our conversation, I admitted to Kelman that I'm embarrassed to talk to teens about AI because I assume they'll know more than I do.
'Use that feeling,' she says. 'If we want our kids to talk about their feelings, we have to talk about ours … plus it's the biggest ice breaker.'
'[You could say], 'I am so embarrassed to have this conversation with you, and maybe I should have done a little research before, but I'm worried about AI. Tell me what you know about it. Tell me how you've used it in the past. Tell me how you think you'll use it. And what are the school rules? … I feel silly because I've never used AI before, but I want to learn. I want to learn from you.''
It can be empowering for teens to be able to lead the conversation, and then you can have a conversation ('Which should be ongoing!') about how maybe using AI to brainstorm ideas for a school project is appropriate, but turning to a companion AI tool is never OK. Talk to them about the 'unacceptable risks' and discuss other ways for them to find the companionship they seem to be seeking.
Sure, the conversation could result in some foot-stomping or eye-rolls, but experts assert that parents can't let the fear of an exasperated sigh keep them from talking to their kids about the urgent need to end any relationship-building conversations with these bots.