
Latest news with #GraceTame

Calls to criminalise possession and use of AI tools that create child abuse material

SBS Australia

2 days ago

  • Politics
  • SBS Australia


Child safety advocates have met at Parliament House to confront the problem of child safety in the age of artificial intelligence. From the rising prevalence of deepfake imagery to the availability of so-called nudify apps, the use of AI to sexually exploit children is growing exponentially, and there is concern Australian laws are falling behind.

The roundtable was convened by the International Centre for Missing and Exploited Children Australia, or ICMEC. CEO Colm Gannon says Australia's current child protection framework, introduced just three years ago, fails to address the threat of AI, and he's calling on the government to make it a priority.

"(By) bringing it into the national framework that was brought around in 2021, a 10-year framework that doesn't mention AI. We're working to hopefully develop solutions for government to bring child safety in the age of AI at the forefront."

Earlier this year, the United Kingdom became the first country in the world to introduce AI sexual abuse offences to protect children from predators generating images with artificial intelligence. Colm Gannon is leading the calls for similar legislation in Australia.

"What we need to do is look at legislation that's going to be comprehensive - comprehensive for protection, comprehensive for enforcement and comprehensive to actually be technology neutral."

Former Australian of the Year Grace Tame says online child abuse needs to be addressed at a society-wide level.

"Perpetrators are not just grooming their individual victims, they're grooming their entire environments to create a regime of control in which abuse can operate in plain sight. And there are ways, through education aimed at not just children and the people who work with children but the entire community, to identify the precipitating behaviours that underpin the contact offending framework - things like how offenders target victims, and which victims they are targeting specifically."
In 2023, intelligence company Graphika reported that the use of synthetic, non-consensual intimate imagery was becoming more widespread, moving from niche internet forums to an automated and scaled online business. It found that 34 such image providers received more than 24 million unique visitors to their websites, while links accessing these services increased on platforms including Reddit and X.

As part of Task Force Argos, former police officer Professor Jon Rouse pioneered Australia's first proactive operations against internet child sex offenders, and has also chaired INTERPOL's Covert Internet Investigations Group. Now working with Childlight Australia, he says big tech providers like Apple have a responsibility to build safety controls into their products.

"Apple has got accountability here as well. They just put things on the App Store, they get money every time somebody downloads it, but there's no regulation around this - you create an app, you put it on the App Store. Who checks and balances the damage that's going to cause? No-one. The tragedy is we're at a point now that we have to ban our kids from social media, because we can't rely on any sector of the industry to protect our kids. Which is pretty sad."

Apple has not responded to a request for comment, but in 2021 it announced it had introduced new features designed to help keep young people safe, such as sending warnings when children receive, or attempt to send, images or videos containing nudity.

Although potentially dangerous, AI can also be used to detect grooming behaviour and child sexual abuse material. Colm Gannon from ICMEC says these opportunities need to be harnessed.

"The other thing that we want to do is use law enforcement as a tool to help identify victims. There is technology out there that can assist in rapid and easy access to victim ID, and what's happening at the moment is law enforcement are not able to use that technology."

'Urgent' demand to outlaw AI tools being used to generate child sexual abuse material

ABC News

3 days ago

  • Politics
  • ABC News


Former Australian of the Year Grace Tame says there is an urgent national need to act to prevent AI tools being used to create child abuse material, and that the country must criminalise the possession of freely available child exploitation apps.

Child safety advocates including Ms Tame will meet at Parliament House today, ahead of a new term of parliament, to address the rise of AI being used to sexually exploit children, as well as opportunities to use AI to detect grooming behaviour and child sexual abuse material. The meeting comes as a spotlight has turned on what governments are doing to protect children, in the week of another horrific alleged case of abuse at a Melbourne childcare centre.

Ms Tame, who rose to prominence campaigning for the right to speak under her own name about her abuse, says the government is moving too slowly. "I don't think previous governments and, unfortunately, the current government, have acted swiftly enough when it comes to child safety online," Ms Tame said.

The International Centre for Missing and Exploited Children, which is convening the parliament round table, has advocated for Australia to make it an offence to possess or distribute custom-built AI tools designed to produce child sexual abuse material (CSAM). Similar legislation has been introduced in the United Kingdom, which the government has said it is following closely.

Intelligence company Graphika reported late in 2023 that non-consensual explicit generative AI tools had moved from being available on niche internet forums into a "scaled" and monetised online business. It found there had been more than 24 million unique visits to the websites of 34 of these tools, and links to access them had risen sharply across platforms like Reddit, X and Telegram. Worse still, the spread of AI-generated exploitation material is diverting police resources from investigations involving real victims.
While possession of CSAM is a criminal offence, advocates say Australia should follow other nations, including the United Kingdom and the European Union, in outlawing the AI tools themselves.

"The reason why this round table is really important … is because when we look at the national framework for child protection that was drafted in 2021, it's a ten-year framework, and the presence of AI and the harms being caused by AI are actually not mentioned in that framework," ICMEC Australia chief executive Colm Gannon said. "There has to be regulations put in place to say you need to prevent this from happening, or your platform being used as a gateway to these areas. This software [has] no societal benefit. They should be regulated and made illegal, and it should be an offence to actually have these models that are generating child sexual abuse material. It is urgent."

Ms Tame said perpetrators were currently able to purchase AI tools and download them for offline use, where their creation of offending material could not be detected. "It is a wild west, and it doesn't require much sophistication at all," she said.

An independent review of the Online Safety Act handed to the government in October last year also recommended that "nudify" AI apps used to create non-consensual explicit material be banned. The government has promised to adopt a recommendation from that review to impose a "duty of care" on platforms to keep children safe, though it is yet to be legislated, and the 66 other recommendations of the review have not been responded to.

In a statement, Attorney-General Michelle Rowland said the use of AI to facilitate the creation of child sexual abuse was sickening "and cannot continue". "I am committed to working across government to further consider how we can strengthen responses to evolving harms. This includes considering regulatory approaches to AI in high-risk settings," Ms Rowland said. "Australia has a range of laws that regulate AI. These include economy-wide laws on online safety."

Advocates are also raising what government can do to remove barriers that limit law enforcement from using AI tools to detect and fight perpetrators of child abuse. Police have limited their use of facial recognition tools to investigate child abuse online since 2021, when the Privacy Commissioner determined Clearview AI had breached Australians' privacy by scraping biometric data from the web without consent, and ordered Australian data to be deleted and the app banned.

Mr Gannon, a former specialist investigator who has helped in national and international child sexual exploitation cases, said there were nonetheless existing tools that could be used by law enforcement while protecting the privacy of Australians. "That's something the government need to actually start looking at: how do we actually provide tools for law enforcement in the identification of victims of child sexual abuse [that are] compliant with privacy laws in Australia? We shouldn't disregard the idea of using AI to help us identify victims of child sexual abuse. There are solutions out there that would also have good oversight by government, allowing investigators to access those tools."

Clearview AI continues to be used overseas by law enforcement to identify child abuse victims and offenders. Mr Gannon added that Australia should be working with international partners to harmonise its approach to AI safety so that expectations for developers could be clearly set. Advocates have also warned that the spread of unregulated AI tools has enabled child sex offenders to scale up their offending.

Ms Tame said the need for a framework to regulate AI tools extended beyond obviously harmful apps, with even mainstream AI chatbots used by offenders to automate grooming behaviour and to seek advice on evading justice or speaking with law enforcement. "In my own experience, the man who offended against me, as soon as he was notified that he was suspended from my high school, he checked himself into a psych ward," she said. "We are seeing offenders not only advancing their methods … we're also seeing their sophistication in evading justice."

The government acknowledged last year that current regulations did not sufficiently address the risks posed by AI, and said it would consider "mandatory safeguards". Last month, the eSafety Commissioner said technology platforms had an obligation to protect children. "While responsibility must primarily sit with those who choose to perpetrate abuse, we cannot ignore how technology is weaponised. The tech industry must take responsibility to address the weaponisation of their products and platforms," the commissioner wrote.

'Urgent' forum to combat AI child abuse

Yahoo

6 days ago

  • Yahoo


Experts and authorities on child exploitation material will meet for emergency meetings this week as the amount of AI-generated abuse explodes. The National Children's Commissioner will meet fellow experts in Canberra on Thursday for the roundtable discussions.

"We are seeing AI generate entirely new types of child abuse material. This is a turning point," international expert Jon Rouse said.

Figures from the US-based National Centre for Missing and Exploited Children show AI use has massively increased among predators. The centre reports a 1325 per cent increase in child sexual exploitation material reports involving generative AI, up from 4700 in 2023 to more than 67,000 in 2024. While based in the US, the centre works closely with law enforcement around the world.

The meeting in Canberra has been called to discuss responses to AI-generated child sexual abuse material, deepfakes, automated grooming and childlike AI personas.

"This roundtable represents a pivotal moment for child protection in Australia," International Centre for Missing and Exploited Children Australia chief executive Colm Gannon said. "AI is being weaponised to harm children, and Australia must act swiftly to prevent these technologies from outpacing our systems of protection."

Australian of the Year Grace Tame will lend her expertise to the roundtable, as will representatives from the eSafety Commissioner, child protection organisation Bravehearts, and Childlight Australia. "If we act now, Australia can set a global benchmark for ethical AI and child protection," Mr Gannon said.

'Act now': National meeting to combat AI child abuse

News.com.au

6 days ago

  • News.com.au


Experts and authorities on child exploitation material will meet for emergency meetings this week as the amount of AI-generated abuse explodes. The National Children's Commissioner will meet fellow experts in Canberra on Thursday for the roundtable discussions.

"We are seeing AI generate entirely new types of child abuse material. This is a turning point," international expert Jon Rouse said.

Figures from the US-based National Centre for Missing and Exploited Children show AI use has massively increased among predators. The centre reports a 1325 per cent increase in child sexual exploitation material reports involving generative AI, up from 4700 in 2023 to more than 67,000 in 2024. While based in the US, the centre works closely with law enforcement around the world.

The meeting in Canberra has been called to discuss responses to AI-generated child sexual abuse material, deepfakes, automated grooming and childlike AI personas.

"This roundtable represents a pivotal moment for child protection in Australia," International Centre for Missing and Exploited Children Australia chief executive Colm Gannon said. "AI is being weaponised to harm children, and Australia must act swiftly to prevent these technologies from outpacing our systems of protection."

Australian of the Year Grace Tame will lend her expertise to the roundtable, as will representatives from the eSafety Commissioner, child protection organisation Bravehearts, and Childlight Australia. "If we act now, Australia can set a global benchmark for ethical AI and child protection," Mr Gannon said.

Nike axes Australian activist Grace Tame over Gaza support

Al Bawaba

11-06-2025

  • Business
  • Al Bawaba


ALBAWABA - American sportswear brand Nike recently made headlines after dropping Australian activist Grace Tame as a brand ambassador over her pro-Palestine posts about the ongoing Israeli aggression on Gaza since Oct. 7, 2023.

Nike officially stated on Friday that it "agreed to part ways" with Tame, without disclosing the reason publicly. However, on Monday, The Daily Mail Australia received a statement from the sportswear brand that read, "Nike does not stand for any form of discrimination, including antisemitism."

Tame, who is active on Instagram, recently shared several stories about the 12 activists on board the Freedom Flotilla Coalition's 'Madleen', who were detained by Israel after sailing from Sicily to Gaza to urge Israeli authorities to lift the siege on the Strip and let humanitarian aid in. According to The New Arab, she also occasionally shares posts discussing the ongoing Israeli aggression on Gaza, further showcasing her pro-Palestinian stance. Last year, Grace Tame described Israel's actions in Palestine as a "genocide". She has also called for a Gaza ceasefire several times and signed a global petition by Oxfam in November 2023.

Besides her stance on Gaza, the well-known Australian athlete and activist is a vocal advocate against sexual assault and has consistently supported its victims. In a post about International Women's Day last year, she wrote, "We're watching an accelerated genocide unfold before our eyes in Gaza, where innocent women and children account for around 70 per cent of the rising death toll. Many of our so-called leaders with the power and platforms to act are inert, apathetic, or worse, aiding and abetting."

Speculation erupted after Tame took down all her posts featuring Nike's brand, despite signing a $100,000 deal with the American brand in January.

According to Arab News, the activist and runner recently took jabs at retired American businessman Rupert Murdoch during a public event with Prime Minister Anthony Albanese by wearing a T-shirt that read, "F**k Murdoch." This was accompanied by an Instagram post with a caption describing Murdoch and his like as "dynastically wealthy white supremacist corporate oligarchs ruining our planet, funding genocide, war, and destruction." The Australian, a newspaper owned by Murdoch, responded to Tame and accused her of being too fixated on Israel, adding that Nike's "face-saving statement of a mutual separation with Tame is arguably misplaced in its generosity."
