Privacy watchdog probes AI porn case at Hong Kong university

The Citizen · 3 days ago
A Hong Kong law student allegedly fabricated pornographic images of at least 20 women using AI.
Hong Kong's privacy watchdog said Tuesday it has launched a criminal investigation into an AI-generated porn scandal at the city's oldest university, after a student was accused of creating lewd images of his female classmates and teachers.
Three people alleged over the weekend that a University of Hong Kong (HKU) law student fabricated pornographic images of at least 20 women using artificial intelligence, in what is the first high-profile case of its kind in the Chinese financial hub.
University response sparks public outrage
The university sparked outrage over a punishment perceived as lenient after it said Saturday it had only sent a warning letter to the student and demanded he apologise.
But Hong Kong's Office of the Privacy Commissioner for Personal Data said Tuesday that disclosing someone else's personal data without consent, and with an intent to cause harm, could be an offence.
The watchdog 'has begun a criminal investigation into the incident and has no further comment at this stage', it said, without mentioning the student.
ALSO READ: Pope Leo warns AI could disrupt young minds' grip on reality
Legal loopholes leave victims without recourse
The accusers said in a statement Saturday that Hong Kong law only criminalises the distribution of 'intimate images', including those created with AI, but not the generation of them.
There is no allegation so far that the student spread the deepfake images, and so 'victims are unable to seek punishment… through Hong Kong's criminal justice system', they wrote.
The accusers said a friend discovered the images on the student's laptop.
Experts warn of a broader emerging threat
Experts warn the alleged use of AI in the scandal may be the tip of a 'very large iceberg' surrounding non-consensual imagery.
'The HKU case shows clearly that anyone could be a perpetrator, no space is 100 percent safe,' Annie Chan, a former associate professor at Hong Kong's Lingnan University, told AFP.
ALSO READ: Man who created fake explicit images of Ramaphosa and Cele sentenced to 5 years
Advocates call for urgent legal reform
Women's rights advocates said Hong Kong was 'lagging behind' in terms of legal protections.
'Some people who seek our help feel wronged, because they never took those photos,' said Doris Chong, executive director at the Association Concerning Sexual Violence Against Women, referring to cases at the group's crisis centre.
'The AI generations are so life-like that their circulation would be very upsetting.'
Asked about the case at a Tuesday press briefing, Hong Kong leader John Lee said most of the city's laws 'are applicable to activities on the internet'.
HKU said on Saturday it would review the case and take further action if appropriate.
NOW READ: Scary new deepfake app can turn you into a pornstar without your consent

Related Articles

Unchecked AI is already wreaking havoc in the real world

The Citizen · 17 minutes ago

AI is now used to create fake nudes, spread propaganda and blackmail victims. Some have died. Where are the safeguards?

In the classic movie 2001: A Space Odyssey, one of the central characters is HAL (Heuristically Programmed Algorithmic Computer), described as a sentient artificial general intelligence (AI) computer. It's responsible for the functioning of the Discovery One spacecraft – until it isn't and starts acting up in unpredictable ways.

It's difficult to believe that this warning about the possible malign behaviour of AI premiered 57 years ago. Director Stanley Kubrick and author Arthur C Clarke's vision of the future is coming true before our eyes, as AI starts misbehaving or being used by humans for anti-social and illegal behaviour.

ALSO READ: GirlCode Hackathon set to empower women in tech across Africa

Today, we report on how AI programmes and sites are being used to fake nude and sexually suggestive images of young people, who are then blackmailed by the creators. Some of the victims have committed suicide.

Then there is AI propaganda – alleged to be spread by Russian 'bots' – which seeks to portray Burkina Faso's military dictator, Ibrahim Traore, as an African messiah as he supposedly opposes 'Western colonialism'. In both cases, AI produces material which is believable, easily hoodwinking the casual observer.

ALSO READ: Meet the South African who started a Silicon Valley AI firm at 22, made it on Forbes and won a Google award

Is it not time our lawmakers started putting in place laws to control what could be a major threat to society as we know it?

Female student expelled in China over sex with foreigner: University says it hurt 'national dignity'

IOL News · 14 hours ago

Social media has reacted with outrage as a Chinese student was expelled for a sexual encounter with a foreign visitor.

A female university student in China is facing expulsion after being accused of 'hurting national dignity' for having a consensual one-night stand with a foreign visitor, in a case that has sparked widespread outrage and accusations of blatant gender discrimination.

According to the South China News, the student, surnamed Li, reportedly had a brief encounter with 37-year-old Ukrainian ex-pro gamer Danylo Teslenko, also known as 'Zeus', during his visit to Shanghai in December 2024. Teslenko later shared intimate photos and videos in his online fan group, allegedly without Li's consent. These were leaked, and her personal information - including her real name, family details, and social media accounts - was doxxed online.

What followed was an onslaught of online harassment, with some men reportedly pressuring Dalian Polytechnic University to take disciplinary action. The university responded by naming Li publicly and announcing plans to expel her, citing her behaviour as 'misconduct' that brought shame on the institution and the nation.

AI-powered 'nudify' apps are driving a deadly wave of digital blackmail, and children are particularly at risk

IOL News · 19 hours ago

AI tools that digitally strip off clothing or generate sexualised imagery are being used in extortion scams.

After a Kentucky teenager died by suicide this year, his parents discovered he had received threatening texts demanding $3,000 to suppress an AI-generated nude image of him. The tragedy underscores how so-called sextortion scams targeting children are growing around the world, particularly with the rapid proliferation of "nudify" apps - AI tools that digitally strip off clothing or generate sexualised imagery.

Elijah Heacock, 16, was just one of thousands of American minors targeted by such digital blackmail, which has spurred calls for more action from tech platforms and regulators. His parents told US media that the text messages ordered him to pay up or an apparently AI-generated nude photo would be sent to his family and friends.

"The people that are after our children are well organised," John Burnett, the boy's father, said in a CBS News interview. "They are well financed, and they are relentless. They don't need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child."

US investigators were looking into the case, which comes as nudify apps - which rose to prominence targeting celebrities - are being increasingly weaponised against children. The FBI has reported a "horrific increase" in sextortion cases targeting US minors, with victims typically males between the ages of 14 and 17. The threat has led to an "alarming number of suicides," the agency warned.

Instruments of abuse

In a recent survey, Thorn, a non-profit focused on preventing online child exploitation, found that six percent of American teens have been a direct victim of deepfake nudes.

"Reports of fakes and deepfakes - many of which are generated using these 'nudifying' services - seem to be closely linked with reports of financial sextortion, or blackmail with sexually explicit images," the British watchdog Internet Watch Foundation (IWF) said in a report last year. "Perpetrators no longer need to source intimate images from children because images that are convincing enough to be harmful - maybe even as harmful as real images in some cases - can be produced using generative AI."

The IWF identified one "pedophile guide" developed by predators that explicitly encouraged perpetrators to use nudifying tools to generate material to blackmail children. The author of the guide claimed to have successfully blackmailed some 13-year-old girls.

The tools are a lucrative business. A new analysis of 85 websites selling nudify services found they may be collectively worth up to $36 million (R644 million) a year. The analysis from Indicator, a US publication investigating digital deception, estimates that 18 of the sites made between $2.6 million and $18.4 million over the six months to May. Most of the sites rely on tech infrastructure from Google, Amazon, and Cloudflare to operate, and remain profitable despite crackdowns by platforms and regulators, Indicator said.

'Whack-a-mole'

The proliferation of AI tools has led to new forms of abuse impacting children, including pornography scandals at universities and schools worldwide, where teenagers created sexualised images of their own classmates. A recent Save the Children survey found that one in five young people in Spain have been victims of deepfake nudes, with those images shared online without their consent.

Earlier this year, Spanish prosecutors said they were investigating three minors in the town of Puertollano for allegedly targeting their classmates and teachers with AI-generated pornographic content and distributing it in their school.

In the United Kingdom, the government this year made creating sexually explicit deepfakes a criminal offense, with perpetrators facing up to two years in jail. And in May, US President Donald Trump signed the bipartisan "Take It Down Act," which criminalises the non-consensual publication of intimate images, while also mandating their removal from online platforms.

Meta also recently announced it was filing a lawsuit against a Hong Kong company behind a nudify app called Crush AI, which it said repeatedly circumvented the tech giant's rules to post ads on its platforms.

But despite such measures, researchers say AI nudifying sites remain resilient. "To date, the fight against AI nudifiers has been a game of whack-a-mole," Indicator said, calling the apps and sites "persistent and malicious adversaries."

AFP
