
Latest news with #PrivacyCommissioner

AI-generated porn scandal rocks University of Hong Kong after law student allegedly created deepfakes of 20 women

CBS News

a day ago

  • Politics
  • CBS News


Hong Kong's privacy watchdog said Tuesday it has launched a criminal investigation into an AI-generated porn scandal at the city's oldest university, after a student was accused of creating lewd images of his female classmates and teachers.

Three people alleged over the weekend that a University of Hong Kong (HKU) law student fabricated pornographic deepfakes of at least 20 women using artificial intelligence, in what is the first high-profile case of its kind in the Chinese financial hub.

The university sparked outrage over a perceived lenient punishment after it said Saturday it had only sent a warning letter to the student and demanded he apologize. But Hong Kong's Office of the Privacy Commissioner for Personal Data said Tuesday that disclosing someone else's personal data without consent, and with an intent to cause harm, could be an offense. The watchdog "has begun a criminal investigation into the incident and has no further comment at this stage," it said, without mentioning the student.

The accusers said in a statement Saturday that Hong Kong law only criminalises the distribution of "intimate images," including those created with AI, but not the generation of them. There is no allegation so far that the student spread the deepfake images, and so "victims are unable to seek punishment... through Hong Kong's criminal justice system", they wrote. The accusers said a friend discovered the images on the student's laptop.

Experts warn the alleged use of AI in the scandal may be the tip of a "very large iceberg" surrounding non-consensual imagery. "The HKU case shows clearly that anyone could be a perpetrator, no space is 100 percent safe," Annie Chan, a former associate professor at Hong Kong's Lingnan University, told AFP.

Women's rights advocates said Hong Kong was "lagging behind" in terms of legal protections. "Some people who seek our help feel wronged, because they never took those photos," said Doris Chong, executive director at the Association Concerning Sexual Violence Against Women, referring to cases at the group's crisis center. "The AI generations are so life-like that their circulation would be very upsetting."

Asked about the case at a Tuesday press briefing, Hong Kong leader John Lee said most of the city's laws "are applicable to activities on the internet." HKU said on Saturday it will review the case and take further action if appropriate.

AI-generated pornography has also made headlines in the U.S. A study found 6% of American teens have been targets of nude deepfake images that look like them. Last month, Meta removed a number of ads promoting "nudify" apps, AI tools used to create sexually explicit deepfakes using images of real people, after a CBS News investigation found hundreds of such advertisements on its platforms. In May, one of the largest websites dedicated to deepfake pornography announced that it shut down after a critical service provider withdrew its support, effectively halting the site's operations.

Hong Kong opens probe into AI-generated porn scandal at university

CTV News

a day ago

  • Politics
  • CTV News


Hong Kong's privacy watchdog said Tuesday it has launched a criminal investigation into an AI-generated porn scandal at the city's oldest university, after a student was accused of creating lewd images of his female classmates and teachers.

Three people alleged over the weekend that a University of Hong Kong (HKU) law student fabricated pornographic images of at least 20 women using artificial intelligence, in what is the first high-profile case of its kind in the Chinese financial hub.

The university sparked outrage over a perceived lenient punishment after it said Saturday it had only sent a warning letter to the student and demanded he apologise. But Hong Kong's Office of the Privacy Commissioner for Personal Data said Tuesday that disclosing someone else's personal data without consent, and with an intent to cause harm, could be an offence. The watchdog 'has begun a criminal investigation into the incident and has no further comment at this stage', it said, without mentioning the student.

The accusers said in a statement Saturday that Hong Kong law only criminalises the distribution of 'intimate images', including those created with AI, but not the generation of them. There is no allegation so far that the student spread the deepfake images, and so 'victims are unable to seek punishment... through Hong Kong's criminal justice system', they wrote. The accusers said a friend discovered the images on the student's laptop.

Experts warn the alleged use of AI in the scandal may be the tip of a 'very large iceberg' surrounding non-consensual imagery. 'The HKU case shows clearly that anyone could be a perpetrator, no space is 100 percent safe,' Annie Chan, a former associate professor at Hong Kong's Lingnan University, told AFP.

Women's rights advocates said Hong Kong was 'lagging behind' in terms of legal protections. 'Some people who seek our help feel wronged, because they never took those photos,' said Doris Chong, executive director at the Association Concerning Sexual Violence Against Women, referring to cases at the group's crisis centre. 'The AI generations are so life-like that their circulation would be very upsetting.'

Asked about the case at a Tuesday press briefing, Hong Kong leader John Lee said most of the city's laws 'are applicable to activities on the internet'. HKU said on Saturday it will review the case and take further action if appropriate.

Hong Kong opens probe into AI-generated porn scandal at university

CNA

a day ago

  • Politics
  • CNA


HONG KONG: Hong Kong's privacy watchdog said on Tuesday (Jul 15) it has launched a criminal investigation into an AI-generated porn scandal at the city's oldest university, after a student was accused of creating lewd images of his female classmates and teachers.

Three people alleged over the weekend that a University of Hong Kong (HKU) law student fabricated pornographic images of at least 20 women using artificial intelligence, in what is the first high-profile case of its kind in the Chinese financial hub.

The university sparked outrage over a perceived lenient punishment after it said on Saturday it had only sent a warning letter to the student and demanded that he apologise. But Hong Kong's Office of the Privacy Commissioner for Personal Data said on Tuesday that disclosing someone else's personal data without consent, and with an intent to cause harm, could be an offence. The watchdog "has begun a criminal investigation into the incident and has no further comment at this stage", it said, without mentioning the student.

The accusers said in a statement on Saturday that Hong Kong law only criminalises the distribution of "intimate images", including those created with AI, but not the generation of them. There is no allegation so far that the student spread the deepfake images, and so "victims are unable to seek punishment ... through Hong Kong's criminal justice system", they wrote. The accusers said a friend discovered the images on the student's laptop.

Experts warn the alleged use of AI in the scandal may be the tip of a "very large iceberg" surrounding non-consensual imagery. "The HKU case shows clearly that anyone could be a perpetrator, no space is 100 per cent safe," Annie Chan, a former associate professor at Hong Kong's Lingnan University, told AFP.

Women's rights advocates said Hong Kong was "lagging behind" in terms of legal protections. "Some people who seek our help feel wronged, because they never took those photos," said Doris Chong, executive director at the Association Concerning Sexual Violence Against Women, referring to cases at the group's crisis centre. "The AI generations are so life-like that their circulation would be very upsetting."

Asked about the case at a Tuesday press briefing, Hong Kong leader John Lee said most of the city's laws "are applicable to activities on the internet".

B.C.'s privacy watchdog weighs in on health AI boom – as doctors warn it's not a substitute

CTV News

13-06-2025

  • Health
  • CTV News


As a growing number of doctors adopt artificial intelligence tools in their offices and hospitals, British Columbia's privacy commissioner is urging them to do their homework on privacy requirements.

CTV News sat down with Michael Harvey, B.C.'s Information and Privacy Commissioner, for an in-depth discussion about AI and found the area isn't just nuanced; it's being developed and assessed as the technology evolves. 'There is no question that our laws need to be reformed to adapt to the changing technological circumstances,' he stated. 'That said, it's not like there's no laws that apply to AI in the health sector or other sectors.'

Notifying patients they're using the technology is the minimum, Harvey said, but health-care practitioners should go further if they're using scribes or other software in their practice. 'Even in situations where it might not be strictly legally necessary to do more than notify, I think it's a good advice for clinicians to really take that extra step and have a bit of a conversation,' said Harvey. 'Because we're talking about new types of applications here, organizations would be well advised to hold themselves to a higher bar of express consent.'

What's clear and not so clear

There are two aspects privacy watchdogs are monitoring in particular as AI permeates the health-care system: the data being used to train the models, and the experience of the patients receiving care. Harvey said that if a provider is gathering information, they need to notify the patient, and that if it's being used for a secondary purpose like training an AI, 'generally speaking, you should have to consent for that purpose, but there are exceptions in the law.'

He was clear that while a program could be approved for one type of use, it can't simply be used for another purpose without a fresh assessment of the privacy impact, 'because sometimes that can even change the whole legal basis for the program' and whether it complies with B.C. privacy laws, which aren't the same as Canada's federal legislation or U.S. HIPAA rules.

Harvey encouraged patients to ask their health-care provider what kind of data is being used, and to contact his office to report any red flags he may need to look into, since 'protecting people's trust in the health system should be a very high priority.'

Doctors warn of limits

As the province's health-care system continues to see long waits for patients to see family doctors or emergency room physicians, Doctors of B.C. worries that patients will try to self-diagnose with AI when the technology isn't meant for that purpose. 'We don't want serious illnesses or serious conditions to be missed,' said president Dr. Charlene Lui, who insisted that the public and health-care providers alike should treat AI as a tool rather than a substitute.

She said every doctor has stories from throughout their career of spotting signs of cancer, diabetes or other serious medical conditions in patients who had come in for another reason. Lui, a family doctor herself, will always remember an appointment with a woman who had come to see her about her own health issue and brought her baby – who immediately caught Lui's attention and was quickly rushed to hospital. 'The baby had heart surgery that day,' she said. 'There is something about seeing a physician, that quick scan that a physician does, that I think is often underappreciated.'
This is the second part in a CTV Vancouver series taking a deep dive into the use of artificial intelligence in health care. You can read part one on everyday uses here.
