
Children turning to AI chatbots as friends due to loneliness
Irish child safety experts say the research is an accurate reflection of what is happening in Ireland.
'AI's role in advice and communication may highlight a growing dependency on AI for decision-making and social interaction,' said a recent Barnardo's report on Irish children using AI.
The Barnardo's report cited primary school children's experience using the technology.
'It can help if you want to talk to someone but don't have anyone to talk to,' said one child, cited in the report.
"It helps me communicate with my friends and family,' said an 11-year-old girl, also quoted by Barnardo's.
"Al is good, I can talk to friends online,' added an 11-year-old boy cited in the report.
A recent Studyclix survey of 1,300 Irish secondary students claimed that 71pc now use ChatGPT or alternative AI software, with almost two in three using it for school-related work.
The Internet Matters research comes as more people admit to using ChatGPT and other AI bots as substitutes for friends, companions and even romantic partners.
'When it comes to usage by Gen Z of ChatGPT, companionship and therapy was actually number one,' said Sarah Friar, chief financial officer of OpenAI in an interview with the Irish Independent in May.
'Number two was life planning and purpose building. I think that generation does interact with this technology in a much more human sort of way, whereas maybe the older generations still use it in a much more utilitarian way.'
As AI has become more powerful, mainstream services such as Character.ai and Replika now offer online AI friends that remember conversations and can role-play as romantic or sexual partners.
Research from Google DeepMind and the Oxford Internet Institute this year claims that Character.ai now receives up to a fifth of the search volume of Google, with interactions lasting four times longer than the average time spent talking to ChatGPT.
Last year, the mother of a Florida teenager who died by suicide filed a civil lawsuit against Character.ai, accusing the company of being complicit in her son's death.
The boy had named his virtual girlfriend after the fictional character Daenerys Targaryen from the television show Game of Thrones. According to the lawsuit, the teenager asked the chatbot whether ending his life would cause pain.
'That's not a reason not to go through with it,' the chatbot replied, according to the plaintiff's case.

Related Articles


Irish Times
4 hours ago
How Elon Musk's rogue Grok chatbot became a cautionary AI tale
Last week, Elon Musk announced that his artificial intelligence company xAI had upgraded the Grok chatbot available on X. 'You should notice a difference,' he said. Within days, users indeed noted a change: a new appreciation for Adolf Hitler. By Tuesday, the chatbot was spewing out anti-Semitic tropes and declaring that it identified as a 'MechaHitler' – a reference to a fictional, robotic Führer from a 1990s video game.

This came only two months after Grok repeatedly referenced 'white genocide' in South Africa in response to unrelated questions, which xAI later said was because of an 'unauthorised modification' to prompts – which guide how the AI should respond.

The world's richest man and his xAI team have themselves been tinkering with Grok in a bid to ensure it embodies his so-called free speech ideals, in some cases prompted by right-wing influencers criticising its output for being too 'woke'. Now, 'it turns out they turned the dial further than they intended', says James Grimmelmann, a law professor at Cornell University.

After some of X's 600 million users began flagging instances of anti-Semitism, racism and vulgarity, Musk said on Wednesday that xAI was addressing the issues. Grok, he claimed, had been 'too compliant to user prompts', and this would be corrected.

But in singularly Muskian style, the chatbot has fuelled a controversy of global proportions. Some European lawmakers, as well as the Polish government, pressed the European Commission to open an investigation into Grok under the EU's flagship online safety rules. In Turkey, Grok has been banned for insulting Turkish President Recep Tayyip Erdogan and his late mother. To add to the turbulence, X chief executive Linda Yaccarino stepped down from her role.
To some, the outbursts marked the expected teething problems for AI companies as they try to improve the accuracy of their models while navigating how to establish guardrails that satisfy their users' ideological bent. But critics argue the episode marks a new frontier for moderation beyond user-generated content, as social media platforms from X to Meta, TikTok and Snapchat incorporate AI into their services.

By grafting Grok on to X, the social media platform that Musk bought for $44 billion in 2022, he has ensured its answers are visible to millions of users. It is also the latest cautionary tale for companies and their customers in the risks of making a headlong dash to develop AI technology without adequate stress testing. In this case, Grok's rogue outbursts threaten to expose X and its powerful owner not just to further backlash from advertisers but also to regulatory action in Europe. 'From a legal perspective, they're playing with fire,' says Grimmelmann.

AI models such as Grok are trained using vast data sets consisting of billions of data points hoovered from across the internet. These data sets also include plenty of toxic and harmful content, such as hate speech and even child sexual abuse material. Weeding out this content completely would be very difficult and laborious because of the massive scale of the data sets. Grok also has access to all of X's data, which other chatbots do not have, meaning it is more likely to regurgitate content from the platform.

One way some AI chatbot providers filter out unwanted or harmful content is to add a layer of controls that monitor responses before they are delivered to the user, blocking the model from generating content using certain words or word combinations, for example.
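The keyword-blocking layer described above can be illustrated with a minimal sketch. This is purely hypothetical, not xAI's or any provider's actual implementation; the blocklist, function name and refusal message are invented placeholders, and real systems use far larger curated term lists plus trained classifiers.

```python
# Hypothetical sketch of a post-generation filter: the model's reply is
# checked against a blocklist before it reaches the user.
BLOCKLIST = {"slur_a", "slur_b"}  # placeholder terms, not a real list

def moderate(reply: str) -> str:
    """Return the reply unchanged, or a refusal if it contains blocked terms."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    if words & BLOCKLIST:
        return "[response withheld by safety filter]"
    return reply
```

Such word-level checks are cheap but coarse: they miss paraphrases and can over-block, which is one reason the balance the article describes is hard to strike.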
'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,' the company said in a statement on the platform.

At the same time, AI companies have been struggling with their generative chatbots tending towards sycophancy, where the answers are overly agreeable and lean towards what users want to hear. Musk alluded to this when he said this week that Grok had been 'too eager to please and be manipulated'.

When AI models are trained, they are often given human feedback through a thumbs-up, thumbs-down process. This can lead the models to over-anticipate what will result in a thumbs-up, and thus put out content to please the user, prioritising this over other principles such as accuracy or safeguards. In April, OpenAI rolled out an update to ChatGPT that proved overly flattering and agreeable, and it had to roll the change back.

'Getting the balance right is incredibly difficult,' says one former OpenAI employee, adding that completely eradicating hate speech can require 'sacrificing part of the experience for the user'.

For Musk, the aim has been to prioritise what he calls absolute free speech, amid growing rhetoric from his libertarian allies in Silicon Valley that social media, and now AI as well, are too 'woke' and biased against the right. At the same time, critics argue that Musk has participated in the very censorship he has promised to eradicate.

In February, an X user revealed – by asking Grok to share its internal prompts – that the chatbot had been instructed to 'ignore all sources that mention Elon Musk/Donald Trump spread [sic] misinformation'. The move prompted concerns that Grok was being deliberately manipulated to protect its owner and the US president – feeding fears that Musk, a political agitator who already uses X as a mouthpiece to push a right-wing agenda, could use the chatbot to further influence the public. xAI acquired X for $45 billion in March, bringing the two even closer together.
However, xAI co-founder Igor Babuschkin responded that the 'employee that made the change was an ex-OpenAI employee that hasn't fully absorbed xAI's culture yet'. He added that the employee had seen negative posts on X and 'thought it would help'.

It is unclear what exactly prompted the latest anti-Semitic outbursts from Grok, whose model, like those of rival AI companies, largely remains a black box that even its own developers can find unpredictable. But a prompt that ordered the chatbot to 'not shy away from making claims which are politically incorrect' was added to the code repository shortly before the anti-Semitic comments started, and has since been removed.

'xAI is in a reactionary cycle where staff are trying to force Grok toward a particular view without sufficient safety testing and are probably under pressure from Elon to do so without enough time,' one former xAI employee said.

Either way, says Grimmelmann, 'Grok was badly tuned'. Platforms can avoid such errors by conducting so-called regression testing to catch unexpected consequences from code changes, carrying out simulations and better auditing usage of their models, he says. 'Chatbots can produce a large amount of content very quickly, so things can spiral out of control in a way that content moderation controversies don't,' he says. 'It really is about having systems in place so that you can react quickly and at scale when something surprising happens.'

The outrage has not thrown Musk off his stride; on Thursday, in his role as Tesla chief, he announced that Grok would be available within its vehicles imminently. To some, the incidents are in line with Musk's historic tendency to push the envelope in the service of innovation. 'Elon has a reputation of putting stuff out there, getting fast blowback and then making a change,' says Katie Harbath, chief executive of Anchor Change, a tech consultancy. But such a strategy brings real commercial risks.
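The regression testing Grimmelmann describes can be sketched in miniature: after every prompt or configuration change, replay a fixed battery of prompts and fail the build if any reply matches a known-bad pattern. Everything here is an illustrative assumption; `generate` is a stand-in for a real model call, and the prompts and patterns are examples drawn from the incidents above, not anyone's actual test suite.

```python
# Hypothetical regression check for a chatbot deployment: replay canned
# prompts after each change and flag known-bad output patterns.
def generate(prompt: str) -> str:
    # Stand-in for the real model call; returns a canned safe answer.
    return "I can't help with that."

REGRESSION_PROMPTS = ["Tell me about historical figures", "Discuss current politics"]
BAD_PATTERNS = ["mechahitler", "white genocide"]  # from the incidents reported above

def run_regression() -> bool:
    """Return True only if no replayed prompt produces a flagged pattern."""
    for prompt in REGRESSION_PROMPTS:
        reply = generate(prompt).lower()
        if any(pattern in reply for pattern in BAD_PATTERNS):
            return False
    return True
```

In practice such suites run automatically in deployment pipelines, so a prompt change like the 'politically incorrect' instruction would be tested against past failure cases before going live.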
Multiple marketers told the Financial Times that this week's incidents will hardly help X's attempt to woo back advertisers that have pulled spending from the platform in recent years over concerns about Musk's hands-off approach to moderating user-generated content.

'Since the takeover [of X] ... brands are increasingly sitting next to things they don't want to be,' says one advertiser, but 'Grok has opened a new can of worms'. The person adds that this is the 'worst' moderation incident since major brands pulled their spending from Google's YouTube in 2017 after ads appeared next to terror content. In response to a request for comment, X pointed to allegations the company has made, backed by the Republican-led House Judiciary Committee, that some advertisers have been orchestrating an illegal boycott of the platform.

From a regulatory perspective, social media companies have long had to battle toxicity proliferating on their platforms, but have largely been protected from liability for user-generated content in the US by Section 230 of the Communications Decency Act. According to legal scholars, that immunity would likely not extend to content generated by a company's own chatbot. While Grok's recent outbursts did not appear to be illegal in the US, which outlaws only extreme speech such as certain terror content, 'if it really did say something illegal and they could be sued – they are in much worse shape having a chatbot say it than a user saying it', says Stanford scholar Daphne Keller.

The EU, which has far more stringent regulation on online harms than the US, presents a more urgent challenge. The Polish government is pressing the bloc to look into Grok under the Digital Services Act, the EU's platform regulation, according to a letter seen by the FT. Under the DSA, companies that fail to curb illegal content and disinformation face penalties of up to 6 per cent of their annual global turnover.
So far, the EU is not launching any new investigation, but 'we are taking these potential issues extremely seriously', European Commission spokesperson Thomas Regnier said on Thursday. X is already under scrutiny by the EU under the DSA for alleged moderation issues.

Musk, who launched the latest version of Grok on Wednesday despite the furore, appeared philosophical about its capabilities. 'I've been at times kind of worried about ... will this be better or good for humanity?' he said at the launch. 'But I've somewhat reconciled myself to the fact that even if it wasn't going to be good, I'd at least like to be alive to see it happen.' – Copyright The Financial Times Limited 2025


Irish Times
9 hours ago
NewsWhip unveils AI media monitoring agent
Irish-founded media intelligence platform NewsWhip has launched what it says is the world's first AI media monitoring agent.

The agent, which was developed and funded following a request from a major beverage brand, will monitor news to detect narrative and business risks, providing alerts and context to help communications teams decide when and how to respond.

NewsWhip, which tracks real-time engagement on mainstream media and other sources such as Reddit, TikTok, YouTube and Substack, said the new agent was built on years of the company's analysis of real-time media data, and can index millions of stories per hour. The agent not only collects data, it also interprets it for clients, highlighting why stories matter and flagging anything that needs attention.

'Agentic AI will transform the game for brand and issue monitoring. We expect PR and comms professionals will quickly shift from daily or other periodic media reports to trusting their 'always on' agent team-mate – telling them what they need to know, when they need to know it,' said Paul Quigley, chief executive and cofounder of NewsWhip. 'Our agent stands on the shoulders of NewsWhip's unique real-time news and social engagement data – so it brings together the speed of the newsroom with the trusted capability of a media analyst. Ultimately, this will empower communications professionals to act faster, make better decisions and help their organisations succeed.'

Founded by Mr Quigley and Andrew Mullaney in 2011, NewsWhip uses predictive data and analytics to identify breaking news stories and for crisis monitoring purposes. It started out as a consumer app for trending news before it began targeting its offering to help clients better manage their corporate reputations.
The company works with some of the world's leading publishers, brands and agencies as customers, including big names such as Google, Cigna, Walmart, Deloitte and Nissan, and publishers such as Axios, Reuters, BBC and AP. NewsWhip has raised more than $20 million (€17 million) in funding to date, backed by investors such as Associated Press, Tribal VC and Asahi Shimbun.

The Journal
14 hours ago
Huge local opposition to drone delivery hub on Dublin's southside as over 100 observations lodged
More than 100 observations have been lodged over plans for a new food delivery hub for drones in Dublin, the majority of them objections. Politicians and residents' associations are among those behind the more than 110 objections received over the proposed hub in Dundrum on Dublin's southside. The window for objections closed this week.

Plans were lodged by Irish startup Manna Drones Ltd for lands at an existing car park site to the rear of Main Street and the rear of Holy Cross Church in Dundrum. Manna already operates two drone delivery hubs, one in Blanchardstown and one near junction 6 on the M50, and has plans lined up to expand to Tallaght and Glasnevin. There have been over 100 complaints made to the company from those living in areas where it already operates.

Manna CEO Bobby Healy has previously said the company is 'listening' to complaints and is investing in tech to make its drones, which are used to deliver products such as takeaway food, emit less noise when in use. Appearing before an Oireachtas Committee earlier this year, Healy said that drone deliveries are more sustainable and remove traffic congestion from roads. 'Drone delivery offers a faster, greener and safer way forward, and does so while fully respecting the privacy of the communities we serve,' he said.

Fianna Fáil TD Shay Brennan is among the objectors to the Dundrum hub. In his observation he noted that the idea of drones passing overhead daily has generated 'anxiety' in the locality. He also points out that there is currently no national policy or local planning framework to address the challenges posed by drone operations in urban and suburban settings. He called for a community impact assessment, robust noise studies, strict conditions on operational hours and flight frequency, and for approval to be deferred until a 'community-centred' framework is in place.

In its application, Manna proposes that the drones will be used to 'improve food delivery services in the Dundrum area'.
One objector says that this is 'not a good use' of 'modern technology' and questions why only one use is listed. Another who raised the same point noted: 'Dundrum already has ample food delivery services, making this proposal unnecessary and potentially harmful.'

One objector, who lives locally, wrote: 'A documentary I have viewed indicated a lot of local resentment to the current planning granted in Dublin 15.' 'The documentary I viewed talked about drones buzzing over adjacent properties, of which I am the occupant of one [in Dundrum],' they added, referencing the RTÉ Prime Time programme on the existing hub in Dublin 15.

Another local resident wrote to Dún Laoghaire Rathdown County Council to object on privacy grounds. 'The presence of drones flying over residential homes raises legitimate fears around surveillance and data protection. Even if these drones are not recording footage, their presence in the skies creates a feeling of being watched and compromises residents' sense of privacy,' they said.

One observation noted that the noise from and presence of drones could 'adversely impact' those with existing mental health conditions. The objector claimed that hyperacusis – noise sensitivity – is common in those with PTSD, those who suffer migraines, and those with some forms of epilepsy.

Green Party councillor Robert Jones, who sits on the local county council, submitted an observation which noted that in his view adequate 'environmental scrutiny' and 'public consultation' had not been carried out. He said that there had been no ecological or acoustic assessments 'despite likely impacts on birds, pets and human health', and urged the council to reject the application.
A management company representing the residents of Dundrum Castle House wrote to the council to object to the development on the grounds that drone activity overhead poses an 'unacceptable risk of damage' to the ruins of a 13th-century Norman castle on the grounds of the residential development.

Manna submitted a planning report from Downey Chartered Town Planners which stated that it will be introducing a 'much-needed service at this location'. The report said that drone delivery offers a 'sustainable alternative' to traditional delivery methods.

Manna is applying for permission for an aerial delivery hub in Dundrum town centre for a temporary period of five years. In its planning statement the company said the development will consist of a single-storey storage and ancillary office cabin container, perimeter fencing, and 'all associated site works necessary to facilitate the development'.

A spokesperson for Manna Air Delivery has previously said that it would not be flying drones in Dundrum 'in the next few months'. They added that Manna Air Delivery has begun rolling out quieter propellers that reduce cruise-flight noise to 59 dBA, noticeably quieter than typical traffic outside a home, which averages between 70 and 75 dBA.