Post-mortem finds baby's skull fractured, repeated abuse by foster mum linked to crying, says acting KL police chief
Acting Kuala Lumpur Police Chief Datuk Mohamed Usuf Jan Mohamad said a post-mortem revealed the infant had suffered a fractured skull.
He said the injuries and death were the result of ongoing abuse over the past month.
'Investigations revealed that the injuries were caused after the baby cried incessantly, leading the foster mother to allegedly abuse her,' he told Utusan Malaysia.
He added that the woman's husband works as a salesman, while she is a homemaker.
'Further investigations are ongoing to complete the case file,' he said.
Earlier, the couple was arrested after the baby was found unconscious, with bluish skin and bruises on her body, in the living room of their home.
A man lodged a police report at 9.34pm, claiming his adopted daughter had lost consciousness for unknown reasons.
Following the report, police arrested the 38-year-old woman and her 50-year-old husband under Section 31(1)(a) of the Child Act 2001 (Act 611).
The foster mother also tested positive for drugs.
Initial investigations found that the baby had been under the woman's care since 18 June, after her biological mother returned to Indonesia.
Related Articles


Malay Mail, 38 minutes ago
‘It sounded just like my brother': How deepfake voices are fuelling money scams
KUALA LUMPUR, Aug 4 — Imagine receiving a voice note on WhatsApp from someone who sounds exactly like your younger brother — his voice, tone, and even the way he says your name are all spot on. He says he's stuck at work, left his wallet behind, and needs RM1,500 urgently to sort something out before his boss finds out. There's even a familiar laugh at the end, and you don't think twice because you really believe it is him. But what if that voice was not real?

CyberSecurity Malaysia (CSM) chief executive officer Datuk Amirudin Abdul Wahab has warned of a rise in scams involving AI-generated voice cloning, where scammers use artificial intelligence to impersonate family members, friends or colleagues. In many cases, the goal is to trick victims into sending money by creating a false sense of urgency and trust.

'Scammers use AI-generated voices to mimic friends, family members, or colleagues, often via WhatsApp or phone calls, to request urgent transfers or loans.

'Since early 2024, the police have investigated over 454 such cases, with losses totalling approximately RM2.72 million,' he said when contacted by Malay Mail.

He added that in the first three months of 2025, the country recorded 12,110 online fraud cases, involving scams such as fake e-commerce deals, bogus loans, and non-existent investment schemes, with total losses amounting to RM573.7 million.

Citing Bukit Aman's Commercial Crime Investigation Department (CCID), he said generative AI tools, including deepfake videos, cloned voices, fake digital identities, and chatbots, are increasingly being used to carry out these scams.

'There has also been a rise in scams involving AI-generated cloned voices. In one case, scammers mimicked the voice of a family member to simulate an emergency via WhatsApp voice notes, urging the recipient to urgently transfer funds,' he said, noting that the voice was cloned from short public TikTok videos.
Amirudin added that deepfake scams have also involved national icons like Datuk Seri Siti Nurhaliza and Datuk Lee Chong Wei, whose altered images and voices were used in fake advertisements promoting cryptocurrency and investment platforms.

'As of March 2025, CCID Bukit Aman confirmed the discovery of at least five deepfake videos impersonating both national and international personalities. Among the names falsely used were Prime Minister Datuk Seri Anwar Ibrahim, Elon Musk, Donald Trump, Teresa Kok, and a senior Petronas executive.

'The manipulated clips were widely circulated online to promote fake investment platforms, many of which falsely promised returns of up to 100 times the original amount,' he added.

He said the scams relied heavily on the authority and familiarity of well-known figures to convince unsuspecting viewers, especially on social media platforms where verification is often overlooked.

Why it poses a serious threat

Amirudin explained that the rise of deepfake technology is alarming not just for its technical sophistication, but for the far-reaching impact it can have on society. At the individual level, he said, deepfakes are being used to exploit public emotions, especially in scams that mimic the voices of family members, government officials, or well-known personalities. These tactics create a false sense of urgency, pushing victims into making quick decisions, often involving money, before they have a chance to think critically.

'Beyond personal safety, there is growing concern over the effect deepfakes have on public trust in the media. As manipulated content becomes increasingly indistinguishable from real footage or audio, it blurs the line between fact and fiction,' Amirudin said.

He added that this erosion of trust can sow confusion, making it easier for false narratives, misinformation, and disinformation to spread, particularly on social media.
At a broader level, he highlighted that national security is also at stake, because content that convincingly imitates political leaders or high-ranking officials could be weaponised to stir panic, manipulate public sentiment, or create political instability.

How to verify and report suspicious AI-generated content

With deepfakes becoming more difficult to detect, CSM is urging the public to stay vigilant and take advantage of available resources to verify suspicious content. He said the agency's Cyber999 Incident Response Centre supports both individuals and organisations in identifying cyber threats that involve technical components such as phishing, malware, or manipulated digital content.

Members of the public can report suspicious activity through several channels:
- Online form and mobile application
- Email: cyber999[@]
- Hotline: 1-300-88-2999 (during office hours) or +60 19-266 5850 (24/7)

'Cyber999 also provides technical analysis of suspicious emails; users are encouraged to forward the full email header and content for expert review.

'In addition, the team shares regular security advisories and best practices, helping Malaysians keep up with the latest online threats and how to avoid them,' he said.

He explained that Cyber999 handles technical cyber threats like phishing and malware, while deepfake cases without clear technical elements are usually referred to law enforcement or regulators. For small businesses, Amirudin said CSM has developed the CyberSAFE SME Guidelines, which offer a simple checklist to help organisations detect, verify, and respond to suspicious online content.

Wrapping up in our final part: It's not just tech — it's trust. We look at why media literacy is your best line of defence in the age of deepfakes, and how you can help protect not just yourself — but your family too.
Recommended reading:
- Why seeing isn't believing anymore: What are deepfakes, and how to protect yourself from AI-generated scams
- AI scams are getting real: Here are the cases happening in Malaysia that you should know about


Malay Mail, 38 minutes ago
AI scams are getting real: Here are the cases happening in Malaysia that you should know about
KUALA LUMPUR, Aug 4 — Scams used to be easy to spot — all it took was some bad grammar, a weird link, or a dodgy phone call. But in today's digital era, fraudsters are using artificial intelligence (AI) to impersonate people we know and trust in order to steal money or personal data. Malay Mail has compiled some of the real-life cases behind the AI-powered fraud wave:

Voice-cloning scams via phone or WhatsApp

In May this year, a woman in Selangor lost RM5,000 after falling victim to a sophisticated voice-cloning scam that used AI to mimic her employer's voice, The Rakyat Post reported. The incident occurred during a routine workday at a local shop when the company phone rang repeatedly. On the line was someone who sounded exactly like her boss, and he requested several Touch 'n Go (TnG) PINs, claiming it was an urgent matter. It wasn't the first time he had made such requests, so she didn't hesitate.

The woman quickly went from one convenience store to another, purchasing RM5,000 worth of TnG top-up codes and sending them as instructed. Then the line went dead. When she eventually managed to contact her real boss through a different channel, he confirmed he had never made the call. His phone had been off the entire time. Police later confirmed it was an AI-driven scam.

As of 2024, The Star had reported at least three AI voice scam cases in which victims lost thousands of ringgit. In Kuala Terengganu, a travel agent lost RM49,800 after receiving a highly convincing phone call from someone who sounded exactly like her close friend. Believing her friend was in urgent trouble, she transferred the money without hesitation. In Kuala Lumpur, a 26-year-old interior designer was scammed out of RM3,000 in a similar incident, where the caller impersonated a trusted contact using AI-generated audio. In Penang, a 50-year-old housewife fell victim to the same tactic, losing RM4,800 after speaking with a familiar-sounding voice on the other end of the line.
Last year, the police investigated 454 fraud cases involving deepfake technology, with total reported losses amounting to RM2.72 million, according to Bukit Aman Commercial Crime Investigation Department (CCID) director Datuk Seri Ramli Mohamed Yoosuf. He said these scams frequently involve the use of AI-generated voices to impersonate family members, friends, or acquaintances, often via WhatsApp voice calls or messages. Scammers typically claim to be in urgent need of help and request money through bank transfers or prepaid top-up PINs.

Deepfake video investment scams featuring VIPs

Scammers are now leveraging AI to produce highly convincing videos of politicians, business leaders, and celebrities to trick victims into bogus investment schemes. These AI-generated deepfake videos commonly feature well-known figures including Prime Minister Datuk Seri Anwar Ibrahim, tycoon Tan Sri Robert Kuok, former chief justice Tun Tengku Maimun Tuan Mat, and Capital A Bhd CEO Tan Sri Tony Fernandes, appearing to endorse fake investment opportunities and quick-money schemes.

Even the monarchy wasn't spared — on July 10, the Johor Royal Press Office issued a public warning after detecting an AI-generated deepfake video of His Majesty Sultan Ismail, King of Malaysia, on Facebook, falsely promoting an investment scheme. The palace reminded the public that impersonating the King is a serious offence and urged people not to fall for these scams.

On Saturday (July 5), MCA Public Services and Complaints Department head Datuk Seri Michael Chong said Malaysians lost a staggering RM2.11 billion to such scams last year, with 13,956 cases reported. 'The AI-generated videos look so real that people can't tell the difference. Anyone watching would think it is the prime minister himself asking the public to invest, unaware that it's an AI-generated fake,' Chong was quoted as saying by the New Straits Times.
He also said 85 per cent of victims were convinced to invest after watching fake promotional videos featuring seemingly genuine endorsements from public figures.


Free Malaysia Today, 10 hours ago
Tussle over custody behind 6-year-old's murder, say police
Family members awaiting the results of a post-mortem examination in Rembau on July 29 after the body of M Tishant was found in Jempol. (Bernama pic)

PETALING JAYA: Family matters, including arguments over child custody, are believed to be the main motive behind the murder of six-year-old M Tishant, according to Negeri Sembilan police.

State police chief Ahmad Dzaffir Yussof said the police had not discovered any other motive so far. He said the boy's parents were in the process of separating and were battling for custody, according to a Kosmo report.

Dzaffir said Tishant's body had been buried in Jempol as the suspect was familiar with the area. The boy was reported missing at Taman Bukit Indah in Johor on July 24 but was found buried in Jempol four days later. The body was discovered after Tishant's 36-year-old father was arrested. An autopsy found that the boy had been strangled to death with a cable tie.