
Latest news with #NCII

Sexual assault probe: HC flays police insensitivity

Time of India · 7 days ago

Chennai: In yet another incident of police insensitivity in handling sexual harassment cases, a woman victim was made to view her leaked non-consensual intimate images (NCII) in the presence of seven male police officers. The videos, shared online by her boyfriend, were shown to her during the investigation to "identify" the accused.

"It was not only the accused who violated her dignity guaranteed under Article 21 of the Constitution, but also each of the seven police officers present during the inquiry," Justice Anand Venkatesh said. "It was like adding insult to injury. Don't the officers have a little bit of sense? Society will not question the boy who committed the crime; it will question only the girl who is the victim," the judge said.

In the name of investigation, more harassment was meted out to the victim, and the police dealt with the case as if they were dealing with thugs, he added. This is not an ordinary case in which the police are dealing with morons; the accused in such cases are the most intelligent, deadliest persons, sitting in a room whose location is unknown, Justice Anand Venkatesh said.

The court then censured the police for revealing the victim's name in the FIR and directed that it be redacted from the FIR and from all documents in which it was incorporated during the investigation. In no place must the name of the victim be shown, the judge said.

On the removal of the woman lawyer's NCII from the internet, the Union govt informed the court that immediate steps had been taken to block all the websites from which the content could not be removed. However, senior advocate Abudu Kumar Rajaratnam, appearing for the petitioner, submitted that the videos and intimate images had once again resurfaced on 39 sites, and he produced the particulars of those 39 sites to the court.

Recording the submission, the court directed the Union govt to file an affidavit explaining the various steps that were initiated and to provide a prototype of what a victim must do when faced with such a situation. In the meantime, the Union govt shall ensure that the NCII does not resurface, and the technology discussed in the orders passed by the Delhi and Karnataka high courts shall be adopted, he said. If the Union can ultimately block the NCII completely and prevent it from resurfacing, it will be a test case that can be applied in the future to handle such situations more effectively, he added.

Kids are making deepfakes of each other, and laws aren't keeping up

Yahoo · 07-07-2025

Last October, a 13-year-old boy in Wisconsin used a picture of his classmate celebrating her bat mitzvah to create a deepfake nude he then shared on Snapchat. This is not an isolated incident. Over the past few years, there has been case after case of school-age children using deepfakes to prank or bully their classmates. And it keeps getting easier to do.

When they emerged online eight years ago, deepfakes were initially difficult to make. Nowadays, advances in generative artificial intelligence have put the tools in the hands of the masses. One troubling consequence is the prevalence of deepfake apps among young users. 'If we would have talked five or six years ago about revenge porn in general, I don't think that you would have found so many offenders were minors,' said Rebecca Delfino, a law professor at Loyola Marymount University who studies deepfakes.

Federal and state legislators have sought to tackle the scourge of nonconsensual intimate image (NCII) abuse, sometimes referred to as 'revenge porn,' though advocates prefer the former term. Laws criminalizing the nonconsensual distribution of intimate images — for authentic images, at least — are in effect in every U.S. state and Washington, D.C., and last month President Donald Trump signed a similar measure into law, known as Take It Down. But unlike the federal measure, many of the state laws don't apply to explicit AI-generated deepfakes. Fewer still appear to directly grapple with the fact that perpetrators of deepfake abuse are often minors.

Fifteen percent of students reported knowing about AI-generated explicit images of a classmate, according to a survey released in September by the Center for Democracy and Technology (CDT), a center-left think tank. Students also reported that girls were much more likely to be depicted in explicit deepfakes. According to CDT, the findings show that 'NCII, both authentic and deepfake, is a significant issue in K-12 public schools.'

'The conduct we see minors engaged in is not all that different from the pattern of cruelty, humiliation and exploitation and bullying that young people have always done to each other,' said Delfino. 'The difference lies in not only the use of technology to carry out some of that behavior, but the ease with which it is disseminated.'

Policymakers at the state and federal level have come at perpetrators of image-based sexual abuse 'hard and fast,' no matter their age, Delfino said. The reason is clear, she said: The distribution of nonconsensual images can have long-lasting, serious mental health harms for the target of abuse. Victims can be forced to withdraw from life online because of the prevalence of nonconsensual imagery, and image-based sexual abuse has mental health impacts on survivors similar to those experienced by survivors of offline sexual violence.

Delfino said that under most existing laws, youth offenders are likely to be treated similarly to minors who commit other crimes: They can be charged, but prosecutors and courts would likely take their age into account in doling out punishment. Yet while some states have developed penal codes that factor a perpetrator's age into their punishment, including by imposing tiered penalties that attempt to spare first-time or youth offenders from incarceration, most do not. While most agree there should be consequences for youth offenders, there's less consensus about what those consequences should be — and a push for reeducation over extreme charges.
A 2017 survey by the Cyber Civil Rights Initiative (CCRI), a nonprofit that combats online abuse, found that people who committed image-based sexual abuse reported the threat of jail time as one of the strongest deterrents against the crime. That's why the organization's policy recommendations have always pushed for criminalization, said Mary Anne Franks, a law professor at George Washington University who leads the initiative.

Many states have sought to address the issue of AI-generated child sexual abuse material, which covers deepfakes of people under 18, by modifying existing laws banning what is legally known as child pornography. These laws tend to carry more severe punishments: felonies instead of misdemeanors, high minimum jail time or significant fines. For example, Louisiana mandates a minimum five-year jail sentence no matter the age of the perpetrator.

While incidents of peer-on-peer deepfake abuse are increasingly cropping up in the news, information on what criminal consequences youth offenders have faced remains scarce. There is often a significant amount of discretion involved in how minors are charged. Generally, juvenile justice falls under state rather than federal law, giving local officials leeway to impose punishments as they see fit.

If local prosecutors are forced to decide between charging minors under severe penalties aimed at adults or declining to prosecute, most will likely choose the latter, said Lindsay Hawthorne, the communications coordinator at Enough Abuse, a Massachusetts-based nonprofit fighting child sexual abuse. But that throws away an opportunity to teach youth about the consequences of their actions and to prevent reoffending. Charges that come at a prosecutor's discretion are also more likely to disproportionately criminalize youth of color and LGBTQ+ youth, she said.

Delfino said that in an ideal case, a judge in juvenile court would weigh many factors in sentencing: the severity of the harm caused by deepfake abuse, the intent of the perpetrator, and adolescent psychology. Experts say that building these factors directly into policy can help better deal with offenders who may not understand the consequences of their actions and allow for different enforcement mechanisms for people who say they weren't seeking to cause harm.

For example, laws passed this session in South Carolina and Florida include 'proportional penalties' that take into account circumstances such as age, intent and prior criminal history. Both laws mirrored model legislation written by MyOwn Image, a nonprofit dedicated to preventing technology-facilitated sexual violence. Founded by image-based sexual abuse survivor Susanna Gibson, the organization has advocated for strengthened state laws banning nonconsensual distribution of intimate images, bringing a criminal justice reform lens to the debate.

Under the Florida law, which took effect May 22, offenders who profit from the nonconsensual distribution of intimate images are charged with felonies, even for a first offense. But first-time offenders who use intimate images to harass victims are charged with a misdemeanor; if they do it again, they are charged with a felony. This avoids 'sweeping criminalization of people who may not fully understand the harm caused by their actions,' Will Rivera, managing director at MyOwn Image, said in a statement.
South Carolina's newly passed law addressing AI-generated child sexual abuse material, meanwhile, explicitly states that minors with no prior related criminal record should be referred to family court, and it recommends behavioral health counseling as part of the adjudication. A separate South Carolina law banning nonconsensual distribution of intimate imagery also has tiered charges depending on intent and previous convictions.

Experts are mostly united in believing that incarcerating youth offenders would not solve the problem of image-based sexual abuse. Franks said that while her group has long recommended criminal penalties as part of the answer, there need to be more policy solutions for youth offenders than just threatening jail time.

Amina Fazlullah, head of tech policy advocacy at Common Sense Media, said that laws criminalizing NCII and abusive deepfakes need to be accompanied by digital literacy and AI education measures. That could fill a massive gap: according to Stanford, there currently isn't any comprehensive research on how many schools specifically teach students about online exploitation. Since most teens aren't keeping abreast of criminal codes, AI literacy education could teach young users what crosses the line into illegal behavior and provide resources for victims of nonconsensual intimate imagery to seek redress. Digital literacy could also emphasize ethical use of technology and create space for conversations about app use. Hawthorne noted that Massachusetts's law banning deepfakes, which went into effect last year, directs adolescents who violate it to take part in an education program that explains the relevant laws and the impacts of sexting.

Ultimately, Franks said, the behavior that underlies deepfake abuse isn't new, so responses do not need to be rewritten from scratch. 'We should just stick to the things that we know, which don't change with technology, which is consent, autonomy, agency, safety. Those are all things that should be at the heart of what we talk to kids about,' she said.

Like abstinence-only education, shaming and scaring kids about common practices like sexting is not an effective way for schools to prevent abuse, Franks said, and it can discourage kids from seeking help from adults when they are being exploited. Franks noted that parents, too, have the power to instill in their children agency over their own images every time they take a photo.

She also said there are myriad other ways to regulate the ecosystem around sexually explicit deepfakes. After all, most policy around deepfakes addresses harm already done, and laws like the federal Take It Down Act put the burden on the victim to request the removal of their images from online platforms. Part of addressing the problem is making it more difficult to create and rapidly distribute nonconsensual imagery — and keeping deepfake tools out of kids' hands, experts said.

One avenue for change that advocates see is applying pressure on companies whose tools are used to create nonconsensual deepfakes. Third parties that help distribute them are also becoming a target. After a CBS News investigation, Meta took action to remove advertisements for so-called 'nudify apps' on its platforms. Franks also suggested app stores could delist them. Payment processors, too, have a lot of power over the ecosystem.
When Visa, Mastercard and Discover cut off payments to PornHub after a damning New York Times report revealed how many nonconsensual videos it hosted, the largest pornography site in the world deleted everything it couldn't confirm was above board — nearly 80 percent of its total content. Last month, Civitai cracked down on generative AI models tailored around real people after payment processors refused to work with the company, following extensive reporting by tech news site 404 Media on the image platform's role in the spread of nonconsensual deepfakes.

And of course, Franks said, revamping the liability protections digital services enjoy under Section 230 could force tech companies' hands, compelling them to be more proactive about preventing digital sexual violence.

A version of this article first appeared in Tech Policy Press.

How to report leaked or fake intimate photos in India

India Today · 23-06-2025

What if your most private photos or videos were leaked online, or worse, faked using AI? This crime has a name: non-consensual intimate image (NCII) abuse, and it's becoming more common in the age of deepfakes and digital blackmail. In this Know Your Law episode, we explain how to protect yourself, from collecting evidence to filing a complaint and getting the content taken down. Whether you're a victim or a witness, Indian law gives you the right to act. #KnowYourLaws #Deepfakes #ArtificialIntelligence #AI #Blackmail #IndiaNews #Law #Legal

Malaysia aims for 100 cybersecurity experts accredited as C-CISO this year

The Sun · Business · 03-06-2025

KUALA LUMPUR: The Ministry of Digital is targeting at least 100 cybersecurity experts to be accredited as Certified Chief Information Security Officers (C-CISO) by the end of this year, said Minister Gobind Singh Deo.

He said that through the C-CISO Certification Programme, the ministry is committed to producing more talent and expertise in the field to strengthen the public sector's preparedness against cyber threats. The initiative, he said, was implemented through a strategic collaboration with the EC-Council and the Human Resource Development Corporation (HRD Corp), with the first seven participants receiving the certification today.

'Our target is that by the end of this year, we can produce a total of 100 C-CISOs as a start and, moving forward, we want to see more people showing interest and participating in this programme,' he told a press conference after launching the Cyber Security Professional Capability Development Programme here today. He said the programme would also be expanded to other sectors to ensure local talent in the field of cybersecurity can continue to be polished and developed. Also present were Digital Ministry secretary-general Fabian Bigar, CyberSecurity Malaysia chief executive officer Datuk Dr Amirudin Abdul Wahab and EC-Council president Sanjay Bavisi.

Earlier in his speech, Gobind Singh said the certification programme is one of the key components supporting the implementation of the Cyber Security Act 2024 (Act 854), especially among National Critical Information Infrastructure (NCII) entities.

'As a Chief Information Security Officer (CISO), the person holds a strategic role in ensuring the organisation's compliance with the provisions under Act 854. The responsibilities of the CISO include formulating cybersecurity policies, implementing technical controls, risk management and organisational preparedness in dealing with cyber incidents, in addition to serving as a strategic link between the government, industry, technology providers and the NCII community,' he said.

He said a CISO also plays a crucial role in shaping and driving a security-first culture within an organisation by promoting continuous training and certification, while ensuring that all systems and technologies in use adhere to established security standards. The C-CISO programme covers five main domains: governance, security audit, data protection, operations management and strategic planning, which Gobind described as a long-term investment in the development of the country's digital leadership.

'If in the past the strength of a country was measured through its military, today it depends on the security of and trust in digital systems. Digital defence is the main pillar of the country's prosperity and stability,' he said.
