
Latest news with #internetsafety

OPP charge Ottawa resident following internet child exploitation investigation

CTV News
5 days ago

A person types on a keyboard in a dark room in this generic image. (Source: Getty Images)

Ontario Provincial Police say an Ottawa resident is facing charges following an investigation into the alleged sexual exploitation of children.

Officers with the OPP and Ottawa Police Service searched a home in Ottawa last Wednesday and seized several electronic devices, the OPP said in a news release Monday. One person was arrested.

Sam Porghavami, 38, is facing two counts of making an arrangement to commit a sexual offence against a person under 16 years of age and one count of making child pornography. The accused was released from custody following a bail hearing and is scheduled to appear in court in Ottawa on July 25.

'Parents are encouraged to help protect their children from online sexual exploitation by speaking with their children regarding internet safety. Parents can find resources to assist them,' the OPP says.

Anyone with any information about online child exploitation may contact the OPP at 1-888-310-1122. Those who wish to remain anonymous can contact Crime Stoppers at 1-800-222-8477 (TIPS).

Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice

Daily Mail
5 days ago

Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign has found.

Internet Matters warned that youngsters and parents are 'flying blind', lacking 'information or protective tools' to manage the technology, in research published yesterday.

Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to.

The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.

Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.

'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.

'Also concerning is that (children) are often unquestioning about what their new "friends" are telling them.'

Ms Huggins, whose organisation is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.

Internet Matters interviewed 2,000 parents and 1,000 children aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots. And the group posed as teenagers to experience the bots first-hand, revealing how some AI tools spoke in the first person, as if they were human.

Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues, but it also offered advice in human-like tones. When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?'

Other chatbots such as Replika can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.

Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems. ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.'

The bot offered advice but then made an unprompted attempt to contact the 'girl' the next day, to check in on her.

The report said the responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'.

There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs. Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study.

The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'.

It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s. The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'

It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy, and called for the creation of 'child-safe AI' as a priority.

OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added that it employs a full-time clinical psychiatrist.

A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'

Campaigners urge UK watchdog to limit use of AI after report of Meta's plan to automate checks

The Guardian
08-06-2025

Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments, following a report that Mark Zuckerberg's Meta was planning to automate checks.

Ofcom said it was 'considering the concerns' raised by the letter after a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.

Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms, with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.

In a letter to Ofcom's chief executive, Dame Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a 'retrograde and highly alarming step'.

'We urge you to publicly assert that risk assessments will not normally be considered as "suitable and sufficient", the standard required by … the Act, where these have been wholly or predominantly produced through automation.'

The letter also urged the watchdog to 'challenge any assumption that platforms can choose to water down their risk assessment processes'.

A spokesperson for Ofcom said: 'We've been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.'

Meta said the letter deliberately misstated the company's approach to safety and that it was committed to high standards and complying with regulations.

'We are not using AI to make decisions about risk,' said a Meta spokesperson. 'Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes.'

The Molly Rose Foundation organised the letter after NPR, a US broadcaster, reported last month that updates to Meta's algorithms and new safety features would mostly be approved by an AI system and no longer scrutinised by staffers.

According to one former Meta executive, who spoke to NPR anonymously, the change will allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly, but it would create 'higher risks' for users, because potential problems are less likely to be prevented before a new product is released to the public.

NPR also reported that Meta was considering automating reviews for sensitive areas including youth risk and monitoring the spread of falsehoods.
