
Latest news with #InternationalCentre

Calls to criminalise possession and use of AI tools that create child abuse material

SBS Australia

6 days ago

  • Politics
  • SBS Australia


Child safety advocates have met at Parliament House to confront the problem of child safety in the age of artificial intelligence. From the rising prevalence of deepfake imagery to the availability of so-called nudify apps, the use of AI to sexually exploit children is growing exponentially - and there is concern Australian laws are falling behind.

The roundtable was convened by the International Centre for Missing and Exploited Children Australia, or ICMEC. CEO Colm Gannon says Australia's current child protection framework, introduced in 2021, fails to address the threat of AI - and he is calling on the government to make it a priority. "(By) bringing it into the national framework that was brought around in 2021, a 10 year framework that doesn't mention AI. We're working to hopefully develop solutions for government to bring child safety in the age of AI at the forefront."

Earlier this year, the United Kingdom became the first country in the world to introduce AI sexual abuse offences to protect children from predators generating images with artificial intelligence. Colm Gannon is leading the calls for similar legislation in Australia. "What we need to do is look at legislation that's going to be comprehensive - comprehensive for protection, comprehensive for enforcement and comprehensive to actually be technology neutral."

Former Australian of the Year Grace Tame says online child abuse needs to be addressed at a society-wide level. "Perpetrators are not just grooming their individual victims, they're grooming their entire environments to create a regime of control in which abuse can operate in plain sight. And there are ways through education that needs to be aimed at, not just children and the people who work with children but the entire community, there are ways to identify these precipitating behaviours that underpin the contact offending framework, it's things like how offenders target victims, which victims they are targeting specifically."

In 2023, intelligence company Graphika reported that the use of synthetic, non-consensual intimate imagery was becoming more widespread - moving from niche internet forums to an automated and scaled online business. It found that 34 such image providers received more than 24 million unique visitors to their websites, while links to these services increased on platforms including Reddit and X.

As part of Task Force Argos, former policeman Professor Jon Rouse pioneered Australia's first proactive operations against internet child sex offenders, and has also chaired INTERPOL's Covert Internet Investigations Group. Now working with Childlight Australia, he says big tech providers like Apple have a responsibility to build safety controls into their products. "Apple has got accountability here as well, they just put things on the app store, they get money every time somebody downloads it, but there's no regulation around this, you create an app you put it on the app store. Who checks and balances what the damages that, that's going to cause is. No-one. The tragedy is we're at a point now that we have to ban our kids from social media. Because we can't rely on any sector of the industry to ban our kids. Which is pretty sad."

Apple has not responded to a request for comment. But in 2021 it announced it had introduced new features designed to help keep young people safe, such as sending warnings when children receive, or attempt to send, images or videos containing nudity.

Although potentially dangerous, AI can also be used to detect grooming behaviour and child sexual abuse material. Colm Gannon from ICMEC says these opportunities need to be harnessed. "The other thing that we want to do is use law enforcement as a tool to help identify victims. There is technology out there that can assist in rapid and easy access to victim ID and what's happening at the moment is law enforcement are not able to use that technology."

'Urgent' demand to outlaw AI tools being used to generate child sexual abuse material

ABC News

7 days ago

  • Politics
  • ABC News


Former Australian of the Year Grace Tame says there is an urgent national need to act to prevent AI tools being used to create child abuse material, and that the country must criminalise the possession of freely available child exploitation apps.

Child safety advocates including Ms Tame will meet at Parliament House today, ahead of a new term of parliament, to address the rise of AI being used to sexually exploit children, as well as opportunities to use AI to detect grooming behaviour and child sexual abuse material. The meeting comes as a spotlight has turned on what governments are doing to protect children, in the week of another horrific alleged case of abuse at a Melbourne child care centre.

Ms Tame, who rose to prominence campaigning for the right to speak under her own name about her abuse, says the government is moving too slowly. "I don't think previous governments and, unfortunately, the current government, have acted swiftly enough when it comes to child safety online," Ms Tame said.

The International Centre for Missing and Exploited Children, which is convening the parliament round table, has advocated for Australia to make it an offence to possess or distribute custom-built AI tools designed to produce child sexual abuse material (CSAM). Similar legislation has been introduced in the United Kingdom, which the government has said it is following closely.

Intelligence company Graphika reported late in 2023 that non-consensual explicit generative AI tools had moved from being available on niche internet forums to a "scaled" and monetised online business. It found there had been more than 24 million unique visits to the websites of 34 of these tools, and that links to access them had risen sharply across platforms like Reddit, X and Telegram. Worse still, the spread of AI-generated exploitation material is diverting police resources from investigations involving real victims.

While possession of CSAM is a criminal offence, advocates say Australia should be following other nations, including the United Kingdom and European Union, in outlawing the AI tools themselves. "The reason why this round table is really important … is because when we look at the national framework for child protection that was drafted in 2021, it's a ten-year framework and the presence of AI and the harms being caused by AI are actually not mentioned in that framework," ICMEC Australia chief executive Colm Gannon said. "There has to be regulations put in place to say you need to prevent this from happening, or your platform being used as a gateway to these areas. This software [has] no societal benefit. They should be regulated and made illegal, and it should be an offence to actually have these models that are generating child sexual abuse material. It is urgent."

Ms Tame said perpetrators were currently able to purchase AI tools and download them for offline use, where their creation of offending material could not be detected. "It is a wild west, and it doesn't require much sophistication at all," she said.

An independent review of the Online Safety Act handed to the government in October last year also recommended that "nudify" AI apps used to create non-consensual explicit material be banned. The government has promised to adopt a recommendation from that review to impose a "duty of care" on platforms to keep children safe, though this is yet to be legislated, and the 66 other recommendations of the review have not been responded to.

In a statement, Attorney-General Michelle Rowland said the use of AI to facilitate the creation of child sexual abuse material was sickening "and cannot continue". "I am committed to working across government to further consider how we can strengthen responses to evolving harms. This includes considering regulatory approaches to AI in high-risk settings," Ms Rowland said. "Australia has a range of laws that regulate AI. These include economy-wide laws on online safety."

Advocates are also raising what government can do to remove barriers limiting law enforcement from using AI tools to detect and fight perpetrators of child abuse. Police have limited their use of facial recognition tools to investigate child abuse online since 2021, when the Privacy Commissioner determined Clearview AI had breached Australians' privacy by scraping biometric data from the web without consent, and ordered that Australian data be deleted and the app banned.

Mr Gannon, a former specialist investigator who has helped in national and international child sexual exploitation cases, said, however, there were existing tools that could be used by law enforcement while protecting the privacy of Australians. "That's something the government need to actually start looking at: how do we actually provide tools for law enforcement in the identification of victims of child sexual abuse [that are] compliant with privacy laws in Australia? We shouldn't disregard the idea of using AI to help us identify victims of child sexual abuse. There are solutions out there that would also have good oversight by government, allowing investigators to access those tools."

Clearview AI continues to be used overseas by law enforcement to identify child abuse victims and offenders, but Mr Gannon said there were solutions that could allow "good oversight by government" while also enabling investigators to access the tool. He added that Australia should be working with international partners to harmonise its approach to AI safety so that expectations for developers could be clearly set.

Advocates have also warned that the spread of unregulated AI tools has enabled child sex offenders to scale up their offending. Ms Tame said the need for a framework to regulate AI tools extended beyond obviously harmful apps, with even mainstream AI chatbots being used by offenders to automate grooming behaviour and to seek advice on evading justice or speaking with law enforcement. "In my own experience, the man who offended against me, as soon as he was notified that he was suspended from my high school, he checked himself into a psych ward," she said. "We are seeing offenders not only advancing their methods … we're also seeing their sophistication in evading justice."

The government acknowledged last year that current regulations did not sufficiently address the risks posed by AI, and said it would consider "mandatory safeguards". Last month, the eSafety Commissioner said technology platforms had an obligation to protect children. "While responsibility must primarily sit with those who choose to perpetrate abuse, we cannot ignore how technology is weaponised. The tech industry must take responsibility to address the weaponisation of their products and platforms," the commissioner wrote.

Delhiwale: A retired man

Hindustan Times

04-07-2025

  • Politics
  • Hindustan Times


After attending a panel discussion on 'Artificial Intelligence and Northeast' at the India International Centre, Tassadaque Hussain walks into adjacent Lodhi Garden for an evening stroll. Sitting on a bench, the Dwarka dweller agrees to become a part of our Proust Questionnaire series, in which citizens are nudged to make 'Parisian parlour confessions', all to explore our distinct experiences.

Your favourite occupation.
I retired in 2020 as deputy director of the National Archives of India. Having worked there for 32 years, one gets used to seeing old documents. But I vividly remember the thrill when I touched, for the first time, the handwritten 18th century manuscripts on the weekly activities of the East India Company. Or the Gilgit manuscripts from the 5th and 6th centuries…

Where would you like to live?
I have lived a good part of my life in Delhi, almost 47 years. Studied history in Hindu College… but I miss my native Jorhat in Assam.

What do you appreciate the most in your friends?
I'm 66, and these days my mind goes back more and more towards the childhood friendships.

Your idea of happiness.
Walking in Lodhi Garden, and seeing Bada Gumbad and Sheesh Gumbad monuments at different times of the day… by the way, if you consider the plain flooring pattern of Bada Gumbad, and the fact that all its four sides are open, then it becomes clear that it wasn't a tomb but a gateway - that's my reading. Actually, I'm happy being around any Delhi monument.

Your heroes in real life.
My father, the late Inamul Husain, went far beyond the milieu of his world. My mother, Eliza, for raising four children. And my home tutor Mazifur Rahman, who stirred my interest in academics, helping me make the life-changing transition from Balya Bhavan, a Jorhat school, to Indore's Daly College, one of India's best public schools.

Your favourite food.
Homemade meals, especially when made by Mita, my wife. She teaches history in Bhagat Singh College.

What is your present state of mind?
I'm thinking of these birds chirping and flying… I have no idea how far they are intending to travel.

Controversial singer's UK tour date cancelled after sex abuse allegations

Metro

29-06-2025

  • Entertainment
  • Metro


Marilyn Manson's Brighton show has been cancelled following sex abuse allegations against the singer. The Tainted Love hitmaker, 56, was due to perform at the Brighton Centre on October 29 to kick off his One Assassination Under God Tour, but mounting pressure from concerned music fans and protests has led to the show being axed.

A slew of sexual abuse allegations have been levelled against the singer, real name Brian Hugh Warner, since 2021, including a rape case brought by an ex-girlfriend known anonymously as Jane Doe. The singer settled the case one week before it was due to begin, and last year signed a new record deal with Nuclear Blast. Manson has denied all the claims, stating his relationships 'have always been entirely consensual', and called the accusations 'horrible distortions of reality'.

A message on the Ticketmaster website about the event reads: 'This event has been cancelled. Ticket sales have stopped, but there may be tickets available for other dates.'

Brighton Pavilion MP Sian Berry wrote an open letter to Brighton and Hove City Council earlier this month calling for the show to be axed. Co-signed by various groups, it read: 'Many survivors in Brighton and Hove, and organisations supporting them, will have serious concerns about this booking and its wider impact on other people visiting the city centre, local residents and the wider community.'

Additional Marilyn Manson shows in Bournemouth, Cardiff, Nottingham, Manchester and London are still planned to go ahead. Per GB News, Liberal Democrat leader Millie Earl told a Bournemouth council meeting that the International Centre shows should be cancelled to 'reinforce the message that violence against women and girls isn't something that's acceptable in our community'.

In January, it was announced that US prosecutors would not file charges against the Beautiful People singer following investigations into allegations of domestic violence and sexual assault. Los Angeles County District Attorney Nathan Hochman announced that the allegations lacked sufficient evidence to bring charges and were deemed too old under the law.

Game of Thrones actress Esme Bianco and Westworld star Evan Rachel Wood were among the accusers. Manson settled Bianco's lawsuit out of court, while parts of his own defamation lawsuit against Evan Rachel Wood were thrown out of court. Manson then dropped the defamation lawsuit and paid her almost $327,000 (£238,303) in attorney fees.

Wood previously shared her allegations of rape and abuse in 2017, when the #MeToo movement gained momentum, before giving testimony to a Congressional committee in 2018, both without naming anyone. In a lengthy Instagram post in 2021, she finally named Manson, claiming that he groomed her as a teenager and 'horrifically abused' her for years.

Metro has contacted representatives for Marilyn Manson for comment.
