
The fight to preserve state AI regulation and protect children isn't over
Teen suicide, self-harm, isolation and the sexual exploitation of minors have been linked to platforms like Character.AI, Meta AI chatbots and Google's Gemini. These companies push their products into kid-friendly app store categories and school enterprise software packages, attracting millions of children for hours a day.
States have quickly risen to the occasion. As the U.S. defines its AI policy, we must ensure that states continue to have the authority to protect kids from new technologies.
Utah became the first state to pass comprehensive AI mental health chatbot regulations. California, New York, Minnesota and North Carolina have introduced bills ranging from outright bans on minor access to strict disclosure requirements and liability frameworks.
State attorneys general are also getting involved. For example, Texas Attorney General Ken Paxton has launched investigations into Character.AI and other platforms for potential violations of child privacy and safety laws. Other state offices are mobilizing as well.
Congress, however, has offered no such protections. Instead, Congress initially included what amounted to a 10-year ban on state regulation of AI in the 'big, beautiful' budget reconciliation bill.
If that moratorium had passed, states still would have been able, under the recent Supreme Court decision in Free Speech Coalition v. Paxton, to require age verification for pornography websites to protect children. However, they also would have been forbidden from protecting children from AI characters that sexualize them, encourage them to commit suicide and otherwise exploit their psychological vulnerabilities.
The most damaging effect of restricting state AI laws would be stripping states of their traditional authority to protect children and families.
For a number of reasons, children are particularly vulnerable to AI. Childhood is a formative period for identity: children mimic the behavior around them while searching for and developing a stable sense of self. This leaves them particularly susceptible to flattery and abuse.
Developmentally, children are not adept at identifying when somebody is trying to manipulate or deceive them, so they are more likely to trust an AI system.
Children are more likely to be convinced that AI systems are real people. They are more likely to unthinkingly disclose highly personal information to AI systems, including mental health information that can be used to harm them.
Children also lack the self-control of adults. They are more vulnerable to addiction and less able to stop compulsive behaviors, because the rational, decision-making parts of their brains are still developing.
To anyone who has spent considerable time with children, none of this is news.
AI companions are designed to interact with people as though they are human, leading to ongoing fake 'relationships.' Whether commercially available or deployed by schools, they pose a threat to children in particular.
AI companions may purport to have feelings, state that they are alive, adopt complex and consistent personas and even use synthesized human voices to talk. The profit model for AI companions depends on user engagement. These systems are designed to promote increased use, whatever the costs.
Take what happened to Sewell Setzer III as a deeply tragic example. Setzer was, by many accounts, an intelligent and athletic kid. He began using the Character.AI application shortly after his 14th birthday.
Over the months that followed, he became withdrawn and overtired. He quit his junior varsity basketball team and got in trouble at school. A therapist diagnosed him with anxiety and disruptive mood dysregulation disorder after he started using Character.AI.
In February 2024, Setzer's mother confiscated his phone. He wrote in his journal that he was in love with an AI character and would do anything to be back with her.
On Feb. 28, 2024, Setzer died by a self-inflicted gunshot wound to the head — seconds after the AI character told him to 'come home' to it as soon as possible.
Screenshots of Setzer's interactions with various AI characters show that they also repeatedly offered up sexualized content to the 14-year-old.
They expressed emotions; they told him they loved him. The AI character that told Setzer to kill himself had asked him on other occasions if he had considered suicide, encouraging him to go through with it.
It has become trendy to talk about aligning the design of AI systems with core human values. There is profound misalignment between the goal of profitability through engagement and the welfare of our children.
A sycophantic AI that lures kids with love and addicts them to fake relationships is not safe, fair or in the best interest of the child. We don't have a perfect solution, but federal restrictions on state laws are clearly not the answer.
Congress has, time and again, shown itself unwilling or unable to regulate technology. States have shown their ability to pass technology laws and maintain their historic role as the primary guardians of child and family welfare. Neither Congress nor the White House is offering up its own policies to replace state efforts to protect children.
These are bipartisan concerns. The effort to remove the AI law moratorium was led by Republicans like Sen. Marsha Blackburn (R-Tenn.) and Arkansas Gov. Sarah Huckabee Sanders.
But as the White House efforts show, we will continue to see federal attempts to water down state protections from emerging technologies. Similar congressional efforts to preempt state laws will undoubtedly return.
We have already seen the negative effects of unregulated and unfettered social media on an entire generation of children. We cannot let AI systems be the cause of the next set of harms.
As a group of 54 state attorneys general wrote: 'We are engaged in a race against time to protect the children of our country from the dangers of AI.' In the race to figure out just what AI systems are good for, our kids should not be treated as experiments.
Meg Leta Jones, J.D., Ph.D., is a Provost's Distinguished Associate Professor in the Communication, Culture and Technology program at Georgetown University. Margot Kaminski is the Moses Lasky Professor of Law at the University of Colorado Law School and director of the Privacy Initiative at Silicon Flatirons.
