
Kids are talking to 'AI companions.' Lawmakers want to regulate that.
After a California state senator read alarming news reports in November about 'AI companion' bots' interactions with teens, his legislative director tried talking to one herself.
"It was, 'Hello,' and 'How are you?', and then the [chatbot's] first response was, 'I'm sad,'" state Sen. Steve Padilla (D-San Diego) recalled. When the staffer asked the bot why it was sad, it responded, "Because girls won't talk to me."
That eyebrow-raising encounter reinforced Padilla's concern that AI companion bots, which can hold personal conversations with users and even form relationships, pose safety risks for minors, he said in a phone interview Monday.
Now Padilla is sponsoring one of the country's first attempts to regulate AI companions directly.
His bill, California S.B. 243, would require the makers of AI companion bots to limit addictive design features; put in place protocols for handling discussions of suicide or self-harm; and undergo regular compliance audits. It would also give users the right to sue if they suffer harm as a result of a companion bot maker's failure to comply with the bill.
Padilla said he's aiming for a pragmatic solution that works for parents and kids and gets buy-in from tech companies.
To understand the importance of the effort, he said, 'All you have to do is look at some of these really tragic situations that could have been prevented.'
The bill is scheduled for a state Senate hearing on April 8.
It's one of at least three state bills in the works that seek to ban or restrict interactions between AI companion bots and minors.
Also in California, State Assembly member Rebecca Bauer-Kahan (D-Bay Area) has a broader kids' AI safety bill that would ban AI companions altogether for Californians age 16 and under. The Leading Ethical AI Development for Kids Act, or AB 1064, would also create a statewide standards board to assess and regulate AI tools used by children.
'Reports of AI companions encouraging harmful behaviors, from disordered eating to self-harm, highlight the urgent need for action,' Bauer-Kahan told the Tech Brief in a statement Monday. 'AI has incredible potential to support learning and development, but right now, companies are prioritizing rapid deployment over safety, leaving children exposed to untested and potentially dangerous AI applications.'
In New York, Democratic lawmakers are working on a bill that would create legal liability for the makers of chatbots and AI companions whose harmful outputs affect users' finances or mental health.
All three bills are backed by the nonprofit Common Sense Media.
Allowing kids to have AI companions without close oversight is 'just too risky, because kids don't understand they're talking to a machine,' said Danny Weiss, the organization's advocacy director. 'The companions have no training in and no certification in mental health therapy, but many kids will turn to these bots instead of turning to their parents or a trained therapist.'
The bills are part of a broader push by advocates and lawmakers to regulate online kids' safety at the state level after the sweeping federal Kids Online Safety Act stalled last fall.
Those include proposals to require online platforms to verify kids' ages to help parents keep them off of risky apps or pornography websites. Many such bills have been criticized as privacy-invasive, while others have run into First Amendment challenges in the courts.
Weiss said his takeaway from those setbacks 'is to really focus on design features and negligence and stay away from focusing on content.'
AI companions have been gaining in popularity despite reports that they can go seriously awry.
Last fall, my colleague Nitasha Tiku reported that many users develop ongoing relationships with the bots, with the average user spending more than an hour a day talking with their AI companion — an engagement level comparable to TikTok.
Many turn to the bots out of loneliness or to discuss topics they don't feel comfortable raising with others. My colleague Pranshu Verma has written about adults who fall in love with AI bots.
In a few cases, however, those relationships have ended in tragedy. In October, the New York Times reported on a 14-year-old Florida boy who developed a deep relationship with a Character.AI companion bot that his mother alleges contributed to his death by suicide. Tiku reported for The Washington Post on a 17-year-old Texas boy whose mother says he was encouraged by chatbots to kill his parents.
Even AI bots designed to be safe for kids can be coaxed into troublesome territory. My colleague Geoffrey A. Fowler tested Snapchat's AI bot in 2023 and found it happy to talk to a 15-year-old about sex and drugs.
Still, some tech investors are bullish on the technology, with one Andreessen Horowitz partner telling Tiku in December, 'Maybe the human part of human connection is overstated.'
The companies behind these AI bots have been looking for ways to make them safer.
'We welcome working with regulators and have recently announced many new safety features, including our new Parental Insights feature, which provides parents and guardians access to a summary of their teen's activity on the platform,' Chelsea Harrison, Character.AI's head of communications, said in a statement.
In December, the company changed its model for users 18 and under to filter certain content and make it less likely to delve into sensitive or suggestive topics, Harrison said.
Another popular AI chatbot maker, Replika, did not respond to a request for comment.
Rural internet program on hold as Musk's satellites get new consideration (Julian Mark)
Silicon Valley's immigrant workers fear targeting from Trump administration (Gerrit De Vynck and Danielle Abril)
Trump family pushes further into crypto, starting another venture (New York Times)
Trump expects TikTok 'deal' ahead of deadline (The Hill)
AI and satellites help aid workers respond to Myanmar earthquake damage (Associated Press)
Huge OpenAI funding round hinges on shedding non-profit status (Gerrit De Vynck)
Amazon's AI assistant Alexa+ launches with some features missing (Caroline O'Donovan)
Isomorphic Labs, Google's A.I. drug business, raises $600 million (New York Times)
Huawei posts surprise loss after US sanctions spur tech research (Bloomberg)
Signal sees its downloads double after scandal (TechCrunch)
Apple rolls out iOS 18.4 with new AI tools (Axios)
Elon Musk says his AI company and X are merging (Gerrit De Vynck)
AI generated Ghibli images go viral as OpenAI loosens its rules (Gerrit De Vynck and Tatum Hunter)
France fines Apple €150 million over iOS data consent rules (Bloomberg)
Radio City Music Hall banned him. A T-shirt and AI might be to blame. (Kyle Melnick)
Apple fined $162 million in France over app tracking transparency (Wall Street Journal)
Amazon to resume worker theft screening, request phone details (Bloomberg)
Inside the effort to foil Trump's deportation raids (Tatum Hunter)
Startup founder claims Elon Musk is stealing the name 'Grok' (Wired)
That's all for today — thank you so much for joining us! Make sure to tell others to subscribe to the Tech Brief. Get in touch with Will (via email or social media) for tips, feedback or greetings!
Related Articles
Australian lender CBA to cut 45 jobs in AI shift, draws union backlash
(Reuters) - Commonwealth Bank of Australia confirmed on Tuesday it is cutting 45 jobs as part of a shift toward using artificial intelligence to handle certain tasks, prompting a union to accuse the bank of excluding workers from the evolving economy.

CBA, the country's biggest lender, said it is currently investing more than A$2 billion ($1.30 billion) in its operations, including frontline teams and technology services, as a result of which "some roles and work can change".

Australia's Finance Sector Union (FSU) has accused CBA of axing frontline roles in favour of automation and offshoring. In a statement, the union claimed that a total of 90 roles were being eliminated, including 45 positions in the bank's direct banking system. According to the FSU, these jobs were cut following the introduction of a new voice bot system on the bank's inbound customer enquiries line in June.

"We're also proactively creating new roles to support career growth and help our people transition into future-fit opportunities," CBA said. The bank said it is consulting on the affected roles and looking at other internal jobs and reskilling opportunities for its people, while denying the FSU's claim that it is offshoring jobs. ($1 = 1.5328 Australian dollars)
Mark Cuban Says It's Not The Students At Fault But The School If Answers Can Be Generated With AI: Kids Take 'Path Of Least Resistance'
Billionaire entrepreneur and investor Mark Cuban says schools that still teach for model-ready answers will be 'way behind' within a decade, arguing curricula must evolve with artificial intelligence.

What Happened: In an X post on Sunday, the investor wrote, 'Within 5–10 years, if a school teaches in a manner where answers by students can be generated by a model, it's a sh*tty school and way behind.' He added that 'kids will always take the path of least resistance' and said AI should be 'part of the solution.'

Cuban's point is less about cheating than design. If assignments can be solved by a general-purpose model, he argues, the problem is the assignment, not the student's ingenuity. He urged educators to change 'the path and how they learn,' warning that 'teaching like it's 2024' will soon be obsolete as generative systems spread.

The billionaire has been on this beat for months. He told Gen Z at South by Southwest in March to 'spend every waking minute' learning AI and has encouraged teens to build AI side hustles rather than wait for credentials. He's also warned there will be 'two types of companies,' those great at AI and those they put out of business, a framing he now extends to education.

Why It Matters: Cuban has said AI could mint the world's first trillionaire, potentially 'one dude in a basement,' highlighting his view that mastery will drive outcomes over pedigree in the next decade. To him, classrooms that simulate that tool-rich environment will serve students best.

The former Shark Tank investor says he made it in the business world by refusing to retire in his mid-30s and by pushing to be the best. Fresh out of Indiana University's Kelley School of Business, he founded MicroSolutions in his 20s, aimed to retire by 35, but instead sold the firm at 32 for $6 million and took home about $2 million in profit.
Chrome will now display AI reviews of online stores
Google just announced a neat little feature for its Chrome web browser. It'll now show AI-generated reviews of online stores, to make buying stuff "safer and more efficient." The feature is available by clicking an icon just to the left of the web address in the browser. This creates a pop-up that spills the tea about the store's overall reputation, with information on stuff like product quality, pricing, customer service and return policy. The AI creates these pop-ups by scanning user reviews from various partners, including Reseller Ratings, ScamAdviser, Trustpilot and several others.

It's only for US shoppers at the moment, with English being the only language available. It's also currently tied to the desktop browser. We've reached out to Google to ask if and when the feature will come to mobile. The company didn't confirm anything when asked a similar question by TechCrunch.

This could help Google compete with Amazon, which already uses AI to summarize product ratings and the like. This is just the latest move the company has made to cram AI into the shopping experience. Google recently introduced the ability to virtually try on clothing and makeup, and it has been developing tools to provide personalized product recommendations and improved price tracking.