Labuan woman loses over RM33,000 in Instagram investment scam

Daily Express
Published on: Friday, April 25, 2025
By: Bernama

LABUAN: A local woman has become the latest victim of an online investment scam on this duty-free island, losing a total of RM33,508.25 after being duped through a fraudulent scheme promoted on the social media platform Instagram.

Labuan police chief Supt Mohd Hamizi Halim said the 30-year-old woman was lured into the scam after coming across an advertisement on Instagram that promised lucrative returns through investment opportunities.

'Enticed by the offer, she contacted the account operator through WhatsApp and was convinced to make multiple transactions into several bank accounts between January and April,' he told Bernama today.

He said the victim only realised she had been scammed after failing to receive any returns or updates on her supposed investment.

'Repeated attempts to contact the individual behind the Instagram account went unanswered, prompting her to lodge a police report on April 23 at 7.05pm,' he said.

Mohd Hamizi said the victim initially invested RM300 and received a return of 10 per cent; believing she was participating in a legitimate investment, she then made further transfers from various accounts amounting to RM33,508.25 in phases.

'On April 11, she attempted to transfer the profit from her investment but failed, as the investment runner told her there was a system error, and the woman began to suspect she had been deceived,' he added.

The case is being investigated under Section 420 of the Penal Code for cheating, which carries a punishment of imprisonment for up to 10 years, whipping and a fine upon conviction.

He urged the public to remain cautious and avoid falling prey to seemingly attractive investment schemes promoted through social media, especially those that promise quick and guaranteed profits.

Related Articles

Fire near Zaporizhzhia nuclear plant contained: Russia

The Sun
9 hours ago

A fire that broke out near the Zaporizhzhia Nuclear Power Plant after Ukrainian shelling has been brought under control, the Russian-installed administration of the Russia-held plant in Ukraine said on Saturday. Russian forces seized the Zaporizhzhia plant in the first weeks of Russia's February 2022 invasion of Ukraine. Both sides have accused each other of firing or taking other actions that could trigger a nuclear accident. The plant's administration said on Telegram that a civilian had been killed in the shelling, but that no plant employees or members of the emergency services had been injured. Reuters could not independently verify the Russian report. The station, Europe's biggest nuclear power plant, is not operating but still requires power to keep its nuclear fuel cool. The plant's Russia-installed management said radiation levels remained within normal levels and the situation was under control. - Reuters

36 Malaysians, including 8 women, held in Thai prison

The Sun
10 hours ago

NARATHIWAT: There are 36 Malaysians, including eight women, currently held in Narathiwat Prison, after five Malaysian men were sent there early this year. Prison director Surin Chantep said they were all linked to drug-related offences; some are still awaiting trial, while others have been convicted, and all are in good condition and serving their sentences without incident. 'Most of them are healthy mentally and physically. Prison authorities ensure they are free from dangerous diseases. If there are any with such conditions, they would be sent to hospitals with treatment facilities,' he told Bernama recently. Narathiwat Prison houses inmates from various countries, including Malaysia. - Bernama

Should Hong Kong plug legal gaps to stamp out AI-generated porn?

The Star
11 hours ago

Betrayal was the first thought to cross the mind of a University of Hong Kong (HKU) law student when she found out that a classmate, whom she had considered a friend, had used AI to depict her naked. 'I felt a great sense of betrayal and was traumatised by the friendship and all the collective memories,' said the student, who has asked to be called 'B'.

She was not alone. More than 700 deepfake pornographic images were found on her male classmate's computer, including AI-generated pictures of other HKU students. B said she felt 'used' and that her bodily autonomy and dignity had been violated.

Angered by what she saw as inaction by HKU in the aftermath of the discovery and the lack of protection under existing city laws, B joined forces with two other female students to set up an Instagram account to publicise the case.

HKU said it had issued a warning letter to the male student and told him to make a formal apology. It later said it was reviewing the incident and vowed to take further action, with the privacy watchdog also launching a criminal investigation. But the women said they had no plans to file a police report as the city lacked the right legal 'framework'.

The case has exposed gaps in legislation around the non-consensual creation of images generated by artificial intelligence (AI), especially when the content is only used privately and not published to a wider audience. Authorities have also weighed in and pledged to examine regulations in other jurisdictions to decide on the path forward.

Hong Kong has no dedicated laws tackling copyright issues connected to AI-generated content or covering the creation of unauthorised intimate images that have not been published. Legal experts told the Post that more rules were needed, while technology sector players urged authorities to avoid stifling innovation through overregulation.

The HKU case is not the only one to have put the spotlight on personal information being manipulated or used by AI without consent. In May, a voice that was highly similar to that of an anchor from local broadcaster TVB was found in an advertisement and a video report from another online news outlet. The broadcaster, which said it suspected another news outlet and an advertiser had used AI to create the voice-over, warned that the technology was capable of creating convincing audio of a person's voice without prior authorisation.

Separately, a vendor on an e-commerce platform was found to be selling AI-created videos mimicking a TVB news report, with lifelike anchors delivering birthday messages. In response to a Post inquiry, TVB's assistant general manager in corporate communications, Bonnie Wong, said the broadcaster had issued a warning to the vendor and alerted the platform to demand the immediate removal of the infringing content.

What the law says

Lawyers and a legal academic who spoke to the Post pointed to gaps in criminal, privacy and copyright laws regarding the production of intimate images using AI without consent, and to insufficient remedies for victims.

Barrister Michelle Wong Lap-yan said prosecutors would have a hard time laying charges using existing offences such as access to a computer with criminal or dishonest intent, and publication of, or threatening to publish, intimate images without consent. Both offences are under the Crimes Ordinance.

The barrister, who has experience advising victims of sexual harassment, said the first offence, governing computer use, only applied when a perpetrator accessed another person's device. 'Hence, if the classmate generated the images using his own computer, the act may not be caught under [the offence],' Wong said.

As for the second offence, a perpetrator needed to publish the images or at least threaten to do so. 'Unless the classmate threatened to publish the deepfake images, he would not be caught under this newly enacted legislation [that came into force in 2021],' Wong said.

She said that 'publishing' was defined broadly under the Crimes Ordinance to cover situations such as when a person distributed, circulated, made available, sold, gave or lent images to another person. The definition also included showing the content to another person in any manner.

Craig Choy Ki, a US-based barrister, told the Post that challenges also existed in using privacy and copyright laws. He said the city's anti-doxxing laws only applied if the creator of the image disclosed the intimate content to other people. Privacy laws could come into play depending on how the creator had collected personal information when generating the images, and whether they had been published without permission to infringe upon the owner's 'moral rights'.

The three alleged victims in the HKU case told the Post that a friend of the male student had discovered the AI-generated pornographic images when borrowing his computer in February. 'As far as we know, [the male student] did not disclose the images to his friend himself,' the trio said. They added that the person who made the discovery told them they were among the files.

Stuart Hargreaves, an associate professor from the Chinese University of Hong Kong's law faculty, said legislation covering compensation for harm was inadequate for helping victims of non-consensual AI-produced intimate images. In an academic paper, Hargreaves argued that a court might not deem such images, which were known to be fake and fabricated, as able to cause reputational harm if created or published without consent. Suing the creator for deliberately causing emotional distress could also be difficult, given the level of harm and the threshold usually required by a court.

'In the case of targeted synthetic pornography, the intent of the creator is often for private gratification or esteem building in communities that trade the images. Any harm to the victim may often be an afterthought,' Hargreaves wrote.

A Post check found that major AI chatbots prohibited the creation of pictures using specific faces of real people or creating sexual content, but free-of-charge platforms specialising in the generation of such content were readily available online.

The three female HKU students also told the Post that one of the pieces of software used to produce the more than 700 intimate images found on their classmate's laptop was ' On its website, the software is described as a 'free AI undressing tool that allows users to generate images of girls without clothing'. The provider is quoted as saying: 'There are no laws prohibiting the use of [the] application for personal purposes.' It also describes the software as operating 'within the bounds of current legal frameworks'.

Clicking on a button to create a 'deepnude', a sign-in page tells users they must be aged over 18, that they cannot use another person's photo without their permission and that they will be responsible for the images created. But users can simply ignore the messages.

The Post found that the firm behind this software, Itai Tech Limited, is under investigation by the UK's communications services regulator for allegedly providing ineffective protection to prevent children from accessing pornographic content. The UK-registered firm is also being sued by the San Francisco City Attorney for operating websites producing non-consensual intimate images of adults. The Post has reached out to Itai Tech for comment.

How can deepfake porn be regulated?

Responding to an earlier Post inquiry in July, the Innovation, Technology and Industry Bureau said it would continue to monitor the development and application of AI in the city, draw references from elsewhere and 'review the existing legislation' if necessary. Chief Executive John Lee Ka-chiu also pledged to examine global regulatory trends and research international 'best practices' on regulating the emerging technology.

Some overseas jurisdictions have introduced offences for deepfake porn that extend beyond the publication of such content. In Australia, the creation and transmission of deepfake pornographic images made without consent was banned last year, with offenders facing up to seven years in jail if they had also created the visuals. South Korea also established offences last year for possessing and viewing deepfake porn; offenders can be sentenced to three years' imprisonment or fined up to 30 million won (US$21,666).

The United Kingdom has floated a proposal to ban the creation of sexually explicit images of adults without their consent, regardless of whether the creator intended to share the content. Perpetrators would have the offence added to their criminal record and face a fine, while those who shared their unauthorised creations could face jail time.

The lawyers also noted that authorities would need to lay out how a law banning the non-consensual creation or possession of sexually explicit images could be enforced. Wong, the barrister, said law enforcement and prosecutors should have adequate experience in establishing and implementing the charges, with reference to existing legislation targeting child pornography. In such cases, authorities can access an arrested person's electronic devices with their consent or with a court warrant.

Is a legal ban the right solution?

Hargreaves suggested Hong Kong consider establishing a broad legal framework to target unauthorised sexually explicit deepfake content with both civil and criminal law reforms. He added that criminal law alone was insufficient to tackle such cases, which he expected to become more common in the future. 'It is a complicated problem that has no single silver bullet solution,' Hargreaves said.

The professor suggested the introduction by statute of a new right of action based upon misappropriation of personality, covering the creation, distribution or public disclosure of visual or audiovisual material that depicted an identifiable person in a false sexual context without their consent. Its goal would be to cover harm to a victim's dignity and allow them to seek restitution and quicker remedies.

United States-based barrister Choy said the city could establish a tiered criminal framework with varying levels of penalties, while also allowing defences such as satire or news reports. He said the law should only apply when an identifiable person was depicted in the content.

But Hargreaves and Choy cautioned that any new law would have to strike a balance with freedom of expression, especially for cases in which content had not been published. The lawyers said it would be difficult to draw a legal boundary between what might be a private expression of sexual thought and protection for people whose information was appropriated without their consent.

Hargreaves said films would not be deemed illegal for portraying a historical or even a current figure engaging in sexual activity, adding that a large amount of legitimate art could be sexually explicit without infringing upon the law. But the same could not be said for pornographic deepfake content depicting a known person without their consent. 'Society should express disapproval of the practice of creating sexually explicit images of people without their consent, even if there is no intent that those images be shared,' Hargreaves said.

Choy said the city would need to consult extensively across society on acceptable moral standards to decide which legal steps would be appropriate and to avoid setting up a 'thought crime' for undisclosed private material. 'When we realise there is a moral issue, the use of the law is to set the lowest acceptable bar in society as guidelines for everyone. The discussion should include how the legal framework should be laid out and whether laws should be used,' he said.

But penalties might only provide limited satisfaction for victims, as they had potentially suffered emotional trauma that was not easily remedied under the law. The barrister added that the city should also consider whether the creator of the unauthorised content should be the sole bearer of legal consequences, or whether AI service providers should also share some responsibility. 'I think we sometimes overweaponise the law, thinking that it would solve all problems. With AI being such a novel technology which everyone is still figuring out its [moral] boundaries, education is also important,' Choy said.

Will laws hurt tech progress?

Lawmakers with ties to the technology sector have expressed concerns over possible legislation banning the creation of deepfake pornography without others' consent, saying that the law could not completely stamp out misuse and the city risked stifling a rapidly developing area.

Duncan Chiu, a legislator representing the technology sector, likened a potential ban on creating the content to prohibition on possession of pornography. He said a jurisdiction could block access to websites within its remit but was unable to prevent internet users from accessing the same material in another country that permitted the content. 'This situation is also found elsewhere on the internet. One jurisdiction enacts a law, but you can't stop people from accessing the said software in other countries or regions,' Chiu said.

The lawmaker said AI platforms had been working together to establish regulations on ethical concerns, such as labelling images with watermarks to identify them as digital creations. He also gave the example of 'hallucinations', which are incorrect or misleading results generated by AI models, saying the city did not need to legislate over the generation of false content. 'Many generative AI programmes are able to reveal sources behind their answers with a click of a button. Hallucinations can be adjusted away too. It would have been wrong to legislate against this back then, as technology would advance past this,' Chiu said.

Johnny Ng Kit-chong, a lawmaker who has incubated tech start-ups in the city, said a social consensus on the use of AI was needed before moving to the legislative stage. Ng said that banning the creation of sexually explicit deepfake content produced without consent would leave the burden on technology firms to establish rules on their platforms. 'This might not affect society much as most people would think regulations on sexual images are [inappropriate]. However, to start-ups, they would be limited in their development of artificial intelligence functions,' Ng said.

Ng said Hong Kong could reference the EU Artificial Intelligence Act, which classifies AI by the amount of risk, with corresponding levels of regulation. Some practices, such as using the technology for social scoring, were banned. But the use of AI to generate sexual content was permitted and not listed as a high-risk practice.

Yeung Kwong-chak, CEO of start-up DotAI, told the Post that businesses might welcome more clarity on legal boundaries. His firm uses AI to provide corporate services and also offers training courses for individuals and businesses to learn how to harness the technology. 'Large corporations are concerned about risks [associated with using new technologies], such as data leaks or if AI-generated content may offend internal policies. Having laws will give them a sense of certainty in knowing where the boundaries are, encouraging their uptake of the technology,' Yeung said.

The three students in the HKU case said they hoped the law would keep up with the times to offer protection to victims, provide a deterrent and outlaw the creation of pornographic deepfake images without consent. 'The risk of the creation of deepfake images does not lie only in 'possession', but more importantly in the acquisition of indecent images without consent, and the potential distribution of images [of which the] authenticity may become more and more difficult to distinguish as AI continues to develop,' they warned.

They added that 'permanent and substantial' punishment for creators of these non-consensual images would do the most to ease their concerns. 'We hope the punishment can be sufficient so that he can be constantly, impactfully reminded of his wrongdoing and not commit such an irritating act ever again,' the students said. - SOUTH CHINA MORNING POST
