
Doctor couple nabbed for khalwat in heated stand-off at home while roommates were away
The 26-year-old couple drew attention after the male doctor refused to open the door during a raid by Melaka Islamic Religious Department (JAIM) officers at around 3pm, Kosmo reported.
In a video posted on Instagram, he was seen standing with his arms akimbo and questioning why officers wanted to enter his rented home.
He eventually relented some 20 minutes later, after the officers returned with police, an imam, and local residents.
Melaka Education, Higher Education and Religious Affairs committee chairman Datuk Rahmad Mariman confirmed the arrest, saying the pair were found alone and hiding in a house allegedly shared with two other male tenants who were not present at the time.
'The couple, who work at Melaka Hospital, were detained for further investigation and taken to the JAIM office before being released on bail,' he was quoted as saying.
The case is being investigated under Section 53 of the Melaka Syariah Offences Enactment 1991.
If convicted, they face a fine of up to RM3,000, imprisonment of up to two years, or both.
Sources familiar with the case said the couple were fully clothed during the raid.
Related Articles


The Star
12 hours ago
Political motivations behind rising cyberattacks in Indonesia in mid-2025, report says
JAKARTA: Online intimidation of critics of public officials and government policies in the form of cyberattacks rose in Indonesia in the second quarter of this year, according to a recent report.

The report, released by digital rights advocacy group South-east Asia Freedom of Expression Network (SAFEnet) on Wednesday (July 30), recorded 168 cyberattack incidents between April and June, up from 139 in the first quarter of the year and 90 in the same period of 2024.

The attacks, which peaked in May with 65 cases, ranged from hacking and account suspensions to doxing and identity theft, with digital intimidation the most prominent type at 42 cases. Students were the most frequently targeted victims, involved in 40 incidents, followed by private employees, ordinary citizens and activists with 25, 23 and 16 incidents, respectively.

'Compared to the first quarter, we found more cyberattacks with political overtones,' SAFEnet's head of freedom of expression M. Hafizh Nabiyyin said during the report's launch.

Several incidents were linked to hot-button political issues. In one example, a number of Instagram users had their accounts suspended after criticising the controversial revision of the Indonesian Military Law, which was passed by the House of Representatives in March despite public opposition. Accounts were also suspended for users criticising the controversial nickel mining operation in Raja Ampat, Southwest Papua, after environmental group Greenpeace Indonesia shared videos showing nickel mining activities on one of the islands in the area.

SAFEnet also noted that at least eight people experienced cyberattacks after posting satirical comments on X about the alleged involvement of Deputy House Speaker Sufmi Dasco Ahmad and former communications minister Budi Arie Setiadi in illegal gambling operations. Both politicians have denied the allegations. After publishing the posts, the eight users reportedly received a barrage of anonymous calls and messages through WhatsApp from domestic and international numbers, including some from Thailand and the Philippines.

'The callers forced the victims to delete their posts and threatened to leak sensitive personal data such as their mothers' names, licence plate numbers as well as ID card photos and numbers,' the group wrote in the report, alleging that the attacks showed 'a clear political motive to suppress public dissent'.

Another case saw a stand-up comedian targeted after making jokes about West Java Governor Dedi Mulyadi during a performance in the provincial capital of Bandung in May. After a clip of his performance went viral, several Instagram and TikTok accounts doxxed the comedian, publishing his and his relatives' home addresses.

'This phenomenon indicates a new form of politically motivated digital repression, which not only intimidates, but also undermines the sense of security in expressing opinions in online public spaces,' the report said.

SAFEnet noted that platforms under tech company Meta remained the most common sites of attacks, with 68 incidents recorded on Instagram, 53 on WhatsApp and 18 on Facebook. Other platforms, such as X and TikTok, saw at least 10 cases each. The report also highlighted other pressing digital rights issues, including a spike in online gender-based violence with 665 reported cases, up from 422 in the first quarter of 2025 and 465 in the same period of 2024.

The rights group nonetheless welcomed a much-lauded Constitutional Court ruling on the Electronic Information and Transactions Law in April, in which justices prohibited its use by the government and corporations against their critics. However, SAFEnet emphasised that the ruling has not yet fully guaranteed freedom of expression in the digital realm. 'To date, there is still a tendency to criminalise others using the cyberlaw's loose articles, which often continues into a formal investigation and even the court process,' the group wrote in the report, 'which could harm the victims' financial, physical or psychological state.'

The Jakarta Post asked the Presidential Communication Office (PCO) for comment on the report, but PCO expert staff Insaf Albert Tarigan declined, saying on Thursday (Aug 1) that he had not heard about it. - The Jakarta Post/ANN


The Star
a day ago
Should Hong Kong plug legal gaps to stamp out AI-generated porn?
Betrayal was the first thought to cross the mind of a University of Hong Kong (HKU) law student when she found out that a classmate, whom she had considered a friend, had used AI to depict her naked.

'I felt a great sense of betrayal and was traumatised by the friendship and all the collective memories,' said the student, who has asked to be called 'B'.

She was not alone. More than 700 deepfake pornographic images were found on her male classmate's computer, including AI-generated pictures of other HKU students. B said she felt 'used' and that her bodily autonomy and dignity had been violated.

Angered by what she saw as inaction by HKU in the aftermath of the discovery and the lack of protection under existing city laws, B joined forces with two other female students to set up an Instagram account to publicise the case.

HKU said it had issued a warning letter to the male student and told him to make a formal apology. It later said it was reviewing the incident and vowed to take further action, with the privacy watchdog also launching a criminal investigation. But the women said they had no plans to file a police report as the city lacked the right legal 'framework'.

The case has exposed gaps in legislation around the non-consensual creation of images generated by artificial intelligence (AI), especially when the content is only used privately and not published to a wider audience. Authorities have also weighed in and pledged to examine regulations in other jurisdictions to decide on the path forward.

Hong Kong has no dedicated laws tackling copyright issues connected to AI-generated content or covering the creation of unauthorised intimate images that have not been published. Legal experts told the Post that more rules were needed, while technology sector players urged authorities to avoid stifling innovation through overregulation.

The HKU case is not the only one to have put the spotlight on personal information being manipulated or used by AI without consent. In May, a voice highly similar to that of an anchor from local broadcaster TVB was found in an advertisement and a video report from another online news outlet. The broadcaster, which said it suspected another news outlet and an advertiser had used AI to create the voice-over, warned that the technology was capable of creating convincing audio of a person's voice without prior authorisation.

Separately, a vendor on an e-commerce platform was found to be selling AI-created videos mimicking a TVB news report, with lifelike anchors delivering birthday messages. In response to a Post inquiry, TVB's assistant general manager in corporate communications, Bonnie Wong, said the broadcaster had issued a warning to the vendor and alerted the platform to demand the immediate removal of the infringing content.

What the law says

Lawyers and a legal academic who spoke to the Post pointed to gaps in criminal, privacy and copyright laws, both in covering the production of intimate images using AI without consent and in providing sufficient remedies for victims.

Barrister Michelle Wong Lap-yan said prosecutors would have a hard time laying charges using existing offences such as access to a computer with criminal or dishonest intent, and publication of, or threatening to publish, intimate images without consent. Both offences are under the Crimes Ordinance. The barrister, who has experience advising victims of sexual harassment, said the first offence, governing computer use, only applied when a perpetrator accessed another person's device.
'Hence, if the classmate generated the images using his own computer, the act may not be caught under [the offence],' Wong said.

As for the second offence, a perpetrator needed to publish the images or at least threaten to do so. 'Unless the classmate threatened to publish the deepfake images, he would not be caught under this newly enacted legislation [that came into force in 2021],' Wong said.

She said that 'publishing' was defined broadly under the Crimes Ordinance to cover situations such as when a person distributed, circulated, made available, sold, gave or lent images to another person. The definition also included showing the content to another person in any manner.

Craig Choy Ki, a US-based barrister, told the Post that challenges also existed in using privacy and copyright laws. He said the city's anti-doxxing laws only applied if the creator of the image disclosed the intimate content to other people. Privacy laws could come into play depending on how the creator had collected personal information when generating the images, and whether the images had been published without permission in a way that infringed upon the owner's 'moral rights'.

The three alleged victims in the HKU case told the Post that a friend of the male student had discovered the AI-generated pornographic images when borrowing his computer in February. 'As far as we know, [the male student] did not disclose the images to his friend himself,' the trio said. They added that the person who made the discovery told them that images of them were among the files.

Stuart Hargreaves, an associate professor from the Chinese University of Hong Kong's law faculty, said legislation covering compensation for harm was inadequate for helping victims of non-consensual AI-produced intimate images. In an academic paper, Hargreaves argued that a court might not deem such images, which were known to be fake and fabricated, as able to cause reputational harm if created or published without consent. Suing the creator for deliberately causing emotional distress could also be difficult, given the level of harm involved and the threshold usually required by a court.

'In the case of targeted synthetic pornography, the intent of the creator is often for private gratification or esteem building in communities that trade the images. Any harm to the victim may often be an afterthought,' Hargreaves wrote.

A Post check found that major AI chatbots prohibited the creation of pictures using specific faces of real people or the creation of sexual content, but free-of-charge platforms specialising in generating such content were readily available online. The three female HKU students also told the Post that one of the pieces of software used to produce the more than 700 intimate images found on their classmate's laptop was an online 'undressing' service. On its website, the software is described as a 'free AI undressing tool that allows users to generate images of girls without clothing'. The provider is quoted as saying: 'There are no laws prohibiting the use of [the] application for personal purposes.' It also describes the software as operating 'within the bounds of current legal frameworks'.

When users click a button to create a 'deepnude', a sign-in page tells them that they must be aged over 18, that they cannot use another person's photo without their permission and that they will be responsible for the images created. But users can simply ignore the messages.
The Post found that the firm behind this software, Itai Tech Limited, is under investigation by the UK's communications services regulator for allegedly providing ineffective protection to prevent children from accessing pornographic content. The UK-registered firm is also being sued by the San Francisco City Attorney for operating websites producing non-consensual intimate images of adults. The Post has reached out to Itai Tech for comment.

How can deepfake porn be regulated?

Responding to an earlier Post inquiry in July, the Innovation, Technology and Industry Bureau said it would continue to monitor the development and application of AI in the city, draw references from elsewhere and 'review the existing legislation' if necessary. Chief Executive John Lee Ka-chiu also pledged to examine global regulatory trends and research international 'best practices' on regulating the emerging technology.

Some overseas jurisdictions have introduced offences for deepfake porn that extend beyond the publication of such content. In Australia, the creation and transmission of deepfake pornographic images made without consent were banned last year, with offenders facing up to seven years in jail if they had also created the visuals. South Korea also established offences last year for possessing and viewing deepfake porn; offenders can be sentenced to three years' imprisonment or fined up to 30 million won (US$21,666). The United Kingdom has floated a proposal to ban the creation of sexually explicit images of adults without their consent, regardless of whether the creator intended to share the content. Perpetrators would have the offence added to their criminal record and face a fine, while those who shared their unauthorised creations could face jail time.

The lawyers also noted that authorities would need to lay out how a law banning the non-consensual creation or possession of sexually explicit images could be enforced. Wong, the barrister, said law enforcement and prosecutors should have adequate experience in establishing and implementing such charges, with reference to existing legislation targeting child pornography. In such cases, authorities can access an arrested person's electronic devices with their consent or with a court warrant.

Is a legal ban the right solution?

Hargreaves suggested Hong Kong consider establishing a broad legal framework to target unauthorised sexually explicit deepfake content with both civil and criminal law reforms. He added that criminal law alone was insufficient to tackle such cases, which he expected to become more common in the future. 'It is a complicated problem that has no single silver bullet solution,' Hargreaves said.

The professor suggested the introduction by statute of a new right of action based upon misappropriation of personality to cover the creation, distribution or public disclosure of visual or audiovisual material that depicted an identifiable person in a false sexual context without their consent. Its goal would be to cover harm to a victim's dignity and allow them to seek restitution and quicker remedies.

United States-based barrister Choy said the city could establish a tiered criminal framework with varying levels of penalties, while also allowing defences such as satire or news reports. He said the law should only apply when an identifiable person was depicted in the content. But Hargreaves and Choy cautioned that any new law would have to strike a balance with freedom of expression, especially for cases in which content had not been published.
The lawyers said it would be difficult to draw up a legal boundary between what might be a private expression of sexual thought and protection for people whose information was appropriated without their consent.

Hargreaves said films would not be deemed illegal for portraying a historical or even a current figure engaging in sexual activity, adding that a large amount of legitimate art could be sexually explicit without infringing upon the law. But the same could not be said for pornographic deepfake content depicting a known person without their consent. 'Society should express disapproval of the practice of creating sexually explicit images of people without their consent, even if there is no intent that those images be shared,' Hargreaves said.

Choy said the city would need to consult extensively across society on the acceptable moral standards to decide which legal steps would be appropriate and to avoid setting up a 'thought crime' for undisclosed private material. 'When we realise there is a moral issue, the use of the law is to set the lowest acceptable bar in society as guidelines for everyone. The discussion should include how the legal framework should be laid out and whether laws should be used,' he said.

But penalties might only provide limited satisfaction for victims, as they had potentially suffered emotional trauma that was not easily remedied under the law. The barrister added that the city should also consider if the creator of the unauthorised content should be the sole bearer of legal consequences, as well as whether AI service providers should have some responsibility. 'I think we sometimes overweaponise the law, thinking that it would solve all problems. With AI being such a novel technology which everyone is still figuring out its [moral] boundaries, education is also important,' Choy said.

Will laws hurt tech progress?

Lawmakers with ties to the technology sector have expressed concerns over possible legislation banning the creation of deepfake pornography without others' consent, saying that the law could not completely stamp out misuse and the city risked stifling a rapidly developing area.

Duncan Chiu, a legislator representing the technology sector, likened the potential ban on creating the content to prohibition on possession of pornography. He said a jurisdiction could block access to websites within its remit but was unable to prevent internet users from accessing the same material in another country that permitted the content. 'This situation is also found elsewhere on the internet. One jurisdiction enacts a law, but you can't stop people from accessing the said software in other countries or regions,' Chiu said.

The lawmaker said AI platforms had been working together to establish regulations on ethical concerns, such as labelling images with watermarks to identify them as digital creations. He also gave the example of 'hallucinations', which are incorrect or misleading results generated by AI models, saying the city did not need to legislate over the generation of false content. 'Many generative AI programmes are able to reveal sources behind their answers with a click of a button. Hallucinations can be adjusted away too. It would have been wrong to legislate against this back then, as technology would advance past this,' Chiu said.

Johnny Ng Kit-chong, a lawmaker who has incubated tech start-ups in the city, said a social consensus on the use of AI was needed before moving to the legislative stage.
Ng said that banning the creation of sexually explicit deepfake content produced without consent would leave the burden on technology firms to establish rules on their platforms. 'This might not affect society much as most people would think regulations on sexual images are [inappropriate]. However, to start-ups, they would be limited in their development of artificial intelligence functions,' Ng said.

Ng said Hong Kong could reference the EU Artificial Intelligence Act, which classified AI by the amount of risk with corresponding levels of regulation. Some practices, such as using the technology for social scoring, were banned. But the use of AI to generate sexual content was permitted and not listed as a high-risk practice.

Yeung Kwong-chak, CEO of start-up DotAI, told the Post that businesses might welcome more clarity on legal boundaries. His firm uses AI to provide corporate services and also offers training courses for individuals and businesses to learn how to harness the technology. 'Large corporations are concerned about risks [associated with using new technologies], such as data leaks or if AI-generated content may offend internal policies. Having laws will give them a sense of certainty in knowing where the boundaries are, encouraging their uptake of the technology,' Yeung said.

The three students in the HKU case said they hoped the law would keep up with the times to offer protection to victims, provide a deterrent and outlaw the creation of pornographic deepfake images without consent. 'The risk of the creation of deepfake images does not lie only in 'possession', but more importantly in the acquisition of indecent images without consent, and the potential distribution of images [of which the] authenticity may become more and more difficult to distinguish as AI continues to develop,' they warned.

They added that 'permanent and substantial' punishment to creators of these non-consensual images would do the most to ease their concerns. 'We hope the punishment can be sufficient so that he can be constantly, impactfully reminded of his wrongdoing and not commit such an irritating act ever again,' the students said. - SOUTH CHINA MORNING POST


New Straits Times
2 days ago
Prosecutors seek trial for PSG's Hakimi over rape charge
NANTERRE, France: French prosecutors on Friday called for Paris Saint-Germain star Achraf Hakimi to face trial for the alleged rape of a woman in 2023, an accusation the Moroccan international denies.

The Nanterre prosecutor's office told AFP that it had requested that the investigating judge refer the rape charge to a criminal court. "It is now up to the investigating magistrate to make a decision within the framework of his order," the prosecutor's office told AFP in a statement.

Hakimi, 26, played a major role in PSG's run to their first Champions League title, the full-back scoring the opener in the 5-0 rout of Inter Milan in the final in May. Hakimi, who helped Morocco to their historic run to the semi-finals of the 2022 World Cup, was charged in March 2023 with raping a 24-year-old woman.

Hakimi allegedly paid for his accuser to travel to his home on Feb 25, 2023, in the Paris suburb of Boulogne-Billancourt while his wife and children were away on holiday. The woman went to a police station following the encounter, alleging rape, and was questioned by police. Although the woman refused to make a formal accusation, prosecutors decided to press charges against the player.

She told police at the time that she had met Hakimi in January 2023 on Instagram. On the night in question she said she had travelled to his house in a taxi paid for by Hakimi. She told police Hakimi had started kissing her and making non-consensual sexual advances before raping her, a police source told AFP at the time. She said she managed to break free and text a friend, who came to pick her up.

Contacted by AFP after Friday's development, Hakimi's lawyer Fanny Colin described the call by prosecutors for a trial as "incomprehensible and senseless in light of the case's elements."

"We, along with Achraf Hakimi, remain as calm as we were at the start of the proceedings. If these requisitions were to be followed, we would obviously pursue all avenues of appeal," she continued. According to Colin, her client had "been the target of an attempted extortion."

"Nothing in this case suggests an attempted extortion," said Rachel-Flore Pardo, the lawyer representing the woman. "My client welcomes this news with immense relief," she told AFP. "We will not tolerate any smear or destabilisation campaign, as is unfortunately still too often the case for women who have the courage to report the rape of which they are victims," she added.

The son of a cleaning lady and a street vendor, both Moroccans who have lived in Spain since the 1980s, Hakimi was born in Getafe, a southern suburb of Madrid. He came through the youth system at Real Madrid before joining Bundesliga side Borussia Dortmund in 2018, and went on to make 73 appearances for the German club. He moved to Inter Milan in 2020 and then on to PSG in 2021, where he has established himself as an integral part of the team. In Qatar, Hakimi was a cornerstone of the Morocco team that became the first African or Arab nation to reach the semi-finals of a World Cup. --AFP