
Latest news with #InternetMatters

AI is the new emotional support and BFF for teens: Should you be worried?

Indian Express

2 days ago


Artificial Intelligence (AI) is reshaping the way we work and helping us save time, but a new report from the internet safety organisation Internet Matters warns about the risks the technology poses to children's safety and development. Titled 'Me, Myself & AI: Understanding and safeguarding children's use of AI chatbots', the study surveyed 1,000 children and 2,000 parents in the UK, where AI chatbots are being used by 64 per cent of children for help with everything from homework to emotional advice and companionship. For those wondering, the testing was primarily conducted on ChatGPT, Snapchat's My AI and Character.AI.

The study raises concerns over children using these AI chatbots for emotional support and in emotionally driven ways, such as friendship and advice, something these products were not designed for. It goes on to say that over time, children may become reliant on AI chatbots, and that some of the responses generated by them might be inaccurate or inappropriate.

According to the research, children are using AI in 'diverse and imaginative ways', with 42 per cent of surveyed children aged 9 to 17 using chatbots for help with homework, revision, writing and practising languages. Almost a quarter of the surveyed children who have used a chatbot say they ask it for advice, ranging from what to wear to practising conversations with friends to talking about their mental health. Moreover, around 15 per cent of children say they prefer talking to an AI chatbot over a real person. What's even more concerning is that one in six children say they use AI chatbots because they want a friend, with half of them saying that talking to an AI chatbot 'feels like they are talking to a friend'. The study also reveals that 58 per cent of children say they prefer using an AI chatbot to looking up information on the internet. While a majority of parents (62 per cent) have raised concerns over AI-generated information, only 34 per cent have talked to their children about how to judge whether a response generated by an AI chatbot is reliable.

To protect children from harm, the report says the industry should adopt a system-wide approach that involves the government, schools, parents and researchers to keep children safe. Its recommendations include providing parental controls and introducing government regulation. As for schools, the study suggests that AI and media literacy should be incorporated into key areas of the curriculum and that teachers should be made aware of the risks associated with the technology.

Children turning to AI chatbots as friends due to loneliness

Irish Independent

3 days ago


The UK Internet Matters study of 1,000 children aged 9 to 17 shows that 12pc of kids and teens using AI as a friend say it's because they don't have anyone else to talk to. Irish child safety experts say that research is an accurate representation of what's happening in Ireland.

'AI's role in advice and communication may highlight a growing dependency on AI for decision making and social interaction,' said a recent Barnardo's report on Irish children using AI. The Barnardo's report cited primary school children's experience of using the technology. 'It can help if you want to talk to someone but don't have anyone to talk to,' said one child cited in the report. 'It helps me communicate with my friends and family,' said an 11-year-old girl, also quoted by Barnardo's. 'AI is good, I can talk to friends online,' added an 11-year-old boy cited in the report.

A recent Studyclix survey of 1,300 Irish secondary students claimed that 71pc now use ChatGPT or alternative AI software, with almost two in three using it for school-related work.

The Internet Matters research comes as more people admit to using ChatGPT and other AI bots as substitutes for friends, companions and even romantic partners. 'When it comes to usage by Gen Z of ChatGPT, companionship and therapy was actually number one,' said Sarah Friar, chief financial officer of OpenAI, in an interview with the Irish Independent in May. 'Number two was life planning and purpose building. I think that generation does interact with this technology in a much more human sort of way, whereas maybe the older generations still use it in a much more utilitarian way.'

As AI has become more powerful, mainstream services such as Character.AI and Replika now offer online AI friends that remember conversations and can role-play as romantic or sexual partners. Research from Google DeepMind and the Oxford Internet Institute this year claims that Character.AI now receives up to a fifth of the search volume of Google, with interactions lasting four times longer than the average time spent talking to ChatGPT.

Last year, the mother of a Florida teenager who died by suicide filed a civil lawsuit against Character.AI, accusing the company of being complicit in her son's death. The boy had named his virtual girlfriend after the fictional character Daenerys Targaryen from the television show Game Of Thrones. According to the lawsuit, the teenager asked the chatbot whether ending his life would cause pain. 'That's not a reason not to go through with it,' the chatbot replied, according to the plaintiff's case.

Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice

Daily Mail

3 days ago


Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign group has found. Internet Matters warned that youngsters and parents are 'flying blind', lacking 'information or protective tools' to manage the technology, in research published yesterday.

Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to. The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.

Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.

'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.

'Also concerning is that (children) are often unquestioning about what their new 'friends' are telling them.'

Ms Huggins, whose organisation is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.

Internet Matters interviewed 2,000 parents and 1,000 children, aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots. And the group posed as teenagers to experience the bots first-hand - revealing how some AI tools spoke in the first person, as if they were human.

Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues - but also offered advice in human-like tones. When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?' Other chatbots such as Character.AI or Replika can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.

Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems. ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.' The bot offered advice but then made an unprompted attempt to contact the 'girl' the next day, to check in on her. The report said the responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'.

There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs. Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study. The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'.
It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s. The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'

It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy - and called for the creation of 'child-safe AI' as a priority.

OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added that it employs a full-time clinical psychiatrist. A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'

Ugly truth about beauty rating apps 'distorting kids' sense of who they are'

Daily Mirror

28-06-2025


Experts link 'prettiness' rating apps to anxiety and bullying in kids, while a new study warns that increasingly image-manipulated social media may have an even greater effect on mental health than seeing violence.

Social media apps that rate 'good looks' can lead to anxiety, lower self-esteem and online bullying in kids, experts warn. Hundreds of millions of people use augmented reality filters every day – from comic dog ears to beauty filters that reshape noses, whiten teeth and widen eyes.

But there has also been a rise in apps such as Beauty Scanner. The apps prompt users to upload a selfie so AI can rate facial symmetry and structure and the proportion of features. Another website, Pretty Scale, says: 'Am I pretty? Am I ugly? Analyze your face in 3 minutes.'

Beauty Scanner, which says it is suitable for users over the age of four, compares users' results with celebrities, while other apps offer 'digital facelifts'. Ghislaine Bombusa, of online safety awareness group Internet Matters, warned that relentless messaging about looking good can distort a child's sense of identity and worth.

She said: 'In recent years, platforms have created an online culture strongly focused on body image, through features such as the ability to 'like' and comment on posts, the use of beauty filters and AI image enhancement, and recommender systems prioritising celebrity content.

'Children and young people can be exposed to relentless messaging about appearance and the importance of looking good, which is not balanced with messages about other skills and talents. This means that children can become highly focused on what they see in the mirror and can develop negative thoughts about the impact of their appearance, which can become so routine that they can be difficult for young people to recognise or stop. In extreme cases, they can fuel the development of eating disorders as well as other mental health issues such as depression and anxiety.'

She added: 'The 'Prettiness Rating' apps take this situation to a new level and there is a significant risk to children. Features such as giving a 'pretty score' will feed into children's insecurities and impact their self-esteem, as well as fuelling online bullying by peers sharing scores through online platforms.

'Crucially, many children open these apps in search of validation and acceptance. When that need is met with an algorithmic score, it narrows how they see themselves. Identity becomes tethered to looks alone, while the qualities that truly shape their future, such as skills, talents and character, are pushed to the margins. Over time this can distort a child's sense of who they are and how they measure their own worth.'

A study by Sonia Livingstone, professor of social psychology at the London School of Economics, says that the pressures and social comparisons that result from using increasingly image-manipulated social media may have an even greater effect on mental health than seeing violence. She said: 'Our just-published research shows how comparing one's appearance to that of others on social media is linked to depression and anxiety symptoms. An AI app to give young people a 'pretty score' seems both unnecessary and unwise. For those with mental health problems, it may make things worse.'

In November, TikTok announced new worldwide restrictions on children's access to beauty filters that ape the effects of cosmetic surgery.
It came after an investigation into the feelings of nearly 200 teenagers and parents in the UK, US and several other countries found girls were 'susceptible to feelings of low self-worth' as a result of their online experiences. Pretty Scale and Beauty Scanner did not respond to our requests for comment.

What can parents do? Internet Matters urges parents to be extremely cautious. Consider blocking these apps entirely, and, whatever your technical choices, keep talking. Help children build a healthy body image, celebrate their abilities beyond appearance and question the idea that a rating can define them. Practical guidance on starting these conversations, alongside a step-by-step guide, is available on the Internet Matters website.

Children struggling to cope with harms encountered online

The Independent

05-03-2025


The number of UK children experiencing some form of harm online remains high – with parents fearing the impacts of harm are getting worse, a new study says. The annual wellbeing index from online safety charity Internet Matters found that children's emotional resilience is weakening, with a rise in the number choosing to actively avoid certain platforms because of negative interactions.

The survey of parents and children from 1,054 families in the UK found that the impact of the internet on wellbeing has become more extreme, with respondents reporting that both the positive and negative impacts of time online have risen this year. It showed that children appear to be getting more upset when they encounter online harms – 67% of children said they had experienced harm online, in line with previous years, but more said they found the experience upsetting or frightening.

Parents too said they felt the impacts of harm online were getting worse, in particular when it comes to graphic violent content and unhealthy body image or eating habits – both of which saw sharp rises in being flagged as having a negative effect on their children. The survey also showed that fewer children feel safe online, with the proportion who said they did so dropping to 77%, compared with 81% last year. Meanwhile, the most prevalent harm this year was false information, encountered by 41% of children, according to the study.

However, the study also found that for many children the positives of being online still outweigh the negatives. The number of children who said the internet was important for finding supportive communities rose from 44% to 50%. And parents are getting better at tracking and understanding their children's online habits, the study said.

Carolyn Bunting, co-chief executive of Internet Matters, said: 'This year's survey shows that the negative sides of online life are on the rise – particularly for vulnerable children. It is encouraging that parents are taking action, however experiences of online harm remain stubbornly high, with two-thirds of all children experiencing harm online.

'It is encouraging to see that children are making greater use of the internet to be creative, to stay active and to find community, and parents and children say the benefits of being online for children's wellbeing continue to outweigh the negatives.

'But we should be alarmed that those negatives are growing faster, that children are feeling more affected and upset by these experiences, and that parents are becoming more worried that excessive time online is negatively affecting their child's physical and mental health.

'Our Index shows there is still a very long way to go until Britain becomes the safest place in the world for children to be online.

'The Online Safety Act is a welcome and important step forward, and the new legislation can't come into effect soon enough.

'Ofcom must now fully exercise its powers and prioritise children's safety so that they can capitalise on the benefits of being online without coming to harm.'
