Mattel and OpenAI have partnered up – here's why parents should be concerned about AI in toys
Mattel may seem like an unchanging, old-school brand. Most of us are familiar with it – be it through Barbie, Fisher-Price, Thomas & Friends, Uno, Masters of the Universe, Matchbox, MEGA or Polly Pocket.
But toys are changing. In a world where children grow up with algorithm-curated content and voice assistants, toy manufacturers are looking to AI for new opportunities.
Mattel has now partnered with OpenAI, the company behind ChatGPT, to bring generative AI into some of its products. As OpenAI's services are not designed for children under 13, in principle Mattel will focus on products for families and older children.
But this still raises urgent questions about what kind of relationships children will form with toys that can talk back, listen and even claim to 'understand' them. Are we doing right by kids, and do we need to think twice before bringing these toys home?
Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK's latest coverage of news and research, from politics and business to the arts and sciences.
For as long as there have been toys, children have projected feelings and imagined lives onto them. A doll could be a confidante, a patient or a friend.
But over recent decades, toys have become more responsive. In 1960, Mattel released Chatty Cathy, which chirped 'I love you' and 'Let's play school'. By the mid-1980s, Teddy Ruxpin had introduced animatronic storytelling. Then came Furby and Tamagotchi in the 1990s, creatures requiring care and attention, mimicking emotional needs.
The 2015 release of 'Hello Barbie', which used cloud-based AI to listen to and respond to children's conversations, signalled another important, albeit short-lived, change. Barbie now remembered what children told her, sending data back to Mattel's servers. Security researchers soon showed that the dolls could be hacked, exposing home networks and personal recordings.
Putting generative AI in the mix is a new development. Unlike earlier talking toys, such systems will engage in free-flowing conversation. They may simulate care, express emotion, remember preferences and give seemingly thoughtful advice. The result will be toy that don't just entertain, but interact on a psychological level. Of course, they won't really understand or care, but they may appear to.
Details from Mattel or Open AI are scarce. One would hope that safety features will be built in, including limitations on topics and pre-scripted responses for sensitive themes and when conversations go off course.
But even this won't be foolproof. AI systems can be 'jailbroken' or tricked into bypassing restrictions through roleplay or hypothetical scenarios. Risks can only be minimised, not eradicated.
The risks are multiple. Let's start with privacy. Children can't be expected to understand how their data is processed. Parents often don't either – and that includes me. Online consent systems nudge us all to click 'accept all', often without fully grasping what's being shared.
Then there's psychological intimacy. These toys are designed to mimic human empathy. If a child comes home sad and tells their doll about it, the AI might console them. The doll could then adapt future conversations accordingly. But it doesn't actually care. It's pretending to, and that illusion can be powerful.
This creates potential for one-sided emotional bonds, with children forming attachments to systems that cannot reciprocate. As AI systems learn about a child's moods, preferences and vulnerabilities, they may also build data profiles to follow children into adulthood.
These aren't just toys, they're psychological actors.
A UK national survey I conducted with colleagues in 2021 about possibilities of AI in toys that profile child emotion found that 80% of parents were concerned about who would have access to their child's data. Other privacy questions that need answering are less obvious, but arguably more important.
When asked whether toy companies should be obliged to flag possible signs of abuse or distress to authorities, 54% of UK citizens agreed – suggesting the need for a social conversation with no easy answer. While vulnerable children should be protected, state surveillance into the family domain has little appeal.
Yet despite concerns, people also see benefits. Our 2021 survey found that many parents want their children to understand emerging technologies. This leads to a mixed response of curiosity and concern. Parents we surveyed also supported having clear consent notices, printed on packaging, as the most important safeguard.
My more recent 2025 research with Vian Bakir on online AI companions and children found stronger concerns. Some 75% of respondents were concerned about children becoming emotionally attached to AI. About 57% of people thought that it is inappropriate for children to confide in AI companions about their thoughts, feelings or personal issues (17% thought it is appropriate, and 27% were neutral).
Our respondents were also concerned about the impact on child development, seeing scope for harm.
In other research, we have argued that current AI companions are fundamentally flawed. We provide seven suggestions to redesign them, involving remedies for over-attachment and dependency, removal of metrics based on extending engagement though personal information disclosure and promotion of AI literacy among children and parents (which represents a huge marketing opportunity by positively leading social conversation).
It's hard to know how successful the new venture will be. It might be that that Empathic Barbie goes the way of Hello Barbie, to toy history. If it does not, the key question for parents is this: whose interests is this toy really serving, your child's or that of a business model?
Toy companies are moving ahead with empathic AI products, but the UK, like many countries, doesn't yet have a specific AI law. The new Data (Use and Access) Act 2025 updates the UK's data protection and privacy and electronic communications regulations, recognising need for strong protections for children. The EU's AI Act also makes important provisions.
International governance efforts are vital. One example is IEEE P7014.1, a forthcoming global standard on the ethical design of AI systems that emulate empathy (I chair the working group producing the standard).
The organisation behind the standard, the IEEE, ultimately identifies potential harms and offers practical guidance on what responsible use looks like. So while laws should set limits, detailed standards can help define good practice.
The Conversation approached Mattel about the issues raised in this article and it declined to comment publicly.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Andrew McStay is funded by EPSRC Responsible AI UK (EP/Y009800/1) and is affiliated with IEEE.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles
Yahoo
2 hours ago
- Yahoo
Meta hires four more OpenAI researchers, The Information reports
(Reuters) -Meta Platforms is hiring four more OpenAI artificial intelligence researchers, The Information reported on Saturday. The researchers, Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren have each agreed to join, the report said, citing a person familiar with their hiring. Earlier this week, the Instagram parent hired Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, who were all working in OpenAI's Zurich office, the Wall Street Journal reported. Meta and ChatGPT maker OpenAI did not immediately respond to a Reuters request for comment. The company has recently been pushing to hire more researchers from OpenAI to join chief executive Mark Zuckerberg's superintelligence efforts. Reuters could not immediately verify the report.

WIRED
2 hours ago
- WIRED
OpenAI Loses Four Key Researchers to Meta
Jun 28, 2025 4:16 PM Mark Zuckerberg has been working to poach talent from rival labs for his new superintelligence team. Photograph:Four OpenAI researchers are leaving the company to go to Meta, two sources confirm to WIRED. Shengjia Zhao, Shuchao Bi, Jiahui Yu, and Hongyu Ren have joined Meta's superintelligence team. Their OpenAI Slack profiles have been deactivated. The Information first reported on the departures. It's the latest in a series of aggressive moves by Mark Zuckerberg, who is racing to catch up to OpenAI, Anthropic and Google in building artificial general intelligence. Earlier this month, OpenAI CEO Sam Altman said that Meta has been making 'giant offers' to OpenAI staffers with '$100 million signing bonuses.' He added that, 'none of our best people have decided to take them up on that.' A source at OpenAI confirmed the offers. Hongyu Ren was OpenAI's post-training lead for the o3 and o4 mini models, along with the open source model that's set to be released this summer, sources say. Post-training is the process of refining a model after it has been trained on a primary dataset. Shengjia Zhao is highly skilled in deep learning research, according to another source. He joined OpenAI in the summer of 2022, and helped build the startup's GPT-4 model. Jiahui Yu did a stint at Google DeepMind before joining OpenAI in late 2023. Shuchao Bi was a manager of OpenAI's multimodal models. The departures from OpenAI come shortly after the company lost three researchers from its Zurich office, the Wall Street Journal reported. OpenAI and Meta did not immediately respond to a request for comment. This is a developing story. Please check back for updates .
Yahoo
3 hours ago
- Yahoo
Meta reportedly hires four more researchers from OpenAI
Looks like Meta isn't done poaching talent from OpenAI. Earlier this week, TechCrunch reported that Meta had hired influential OpenAI researcher Trapit Bansal, and according to The Wall Street Journal, it also hired three other researchers from the company. Now The Information is reporting four more Meta hires from OpenAI: Researchers Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren. This hiring spree comes after the April launch of Meta's Llama 4 AI models, which reportedly did not perform as well as CEO Mark Zuckerberg had hoped. (The company was also criticized over the version of Llama that it used for a popular benchmark.) There's been some back-and-forth between the two companies, with OpenAI CEO Sam Altman suggesting that Meta was offering '$100 million signing bonuses' while adding that 'so far, none of our best people' have left. Meta CTO Andrew Bosworth then told employees that while senior leaders may have been offered that kind of money, 'the actual terms of the offer' were more complex than a simple one-time signing bonus.