
International Atomic Energy Agency chief optimistic about Singapore's nuclear future

Related Articles

Straits Times
2 hours ago
Views From The Couch: Think you have a friend? The AI chatbot is telling you what you want to hear
While chatbots possess distinct virtues in boosting mental wellness, they also come with critical trade-offs.

SINGAPORE - Even as we have long warned our children 'Don't talk to strangers', we may now need to update it to 'Don't talk to chatbots... about your personal problems'. Unfortunately, this advice is equivocal at best because while chatbots like ChatGPT, Claude or Replika possess distinct virtues in boosting mental wellness – for instance, as aids for chat-based therapy – they also come with critical trade-offs.

When people face struggles or personal dilemmas, the need to just talk to someone and have their concerns or nagging self-doubts heard, even if the problems are not resolved, can bring comfort. But finding the right person to speak to, who has the patience, temperament and wisdom to probe sensitively, and who is available just when you need them, is an especially tall order. There may also be a desire to speak to someone outside your immediate family and circle of friends who can offer an impartial view, with no vested interest in pre-existing relationships.

Chatbots tick many, if not most, of those boxes, making them seem like promising tools for mental health support. With the fast-improving capabilities of generative AI, chatbots today can simulate and interpret conversations across different formats – text, speech and visuals – enabling real-time interaction between users and digital platforms. Unlike traditional face-to-face therapy, chatbots are available any time and anywhere, significantly improving access to a listening ear. Their anonymous nature also imposes no judgment on users, easing them into discussing sensitive issues and reducing the stigma often associated with seeking mental health support.

With chatbots' enhanced ability to parse and respond in natural language, the conversational dynamic can make users feel highly engaged and more willing to open up. But therein lies the rub. Even as conversations with chatbots can feel encouraging, and we may experience comfort from their validation, there is in fact no one on the other side of the screen who genuinely cares about your well-being. The lofty words and uplifting prose are ultimately products of statistical probabilities, generated by large language models trained on copious amounts of data, some of which is biased and even harmful, and for teens, likely to be age-inappropriate as well.

It is also important to recognise that the reason users feel comfortable talking to these chatbots is that the bots are designed to be agreeable and obliging, so that users will chat with them incessantly. After all, the very fortunes of the tech companies producing chatbots depend on how many users they draw, and how well they keep users engaged.
Of late, however, alarming reports have emerged of adults becoming so enthralled by their conversations with ChatGPT that they have disengaged from reality and suffered mental breakdowns. Most recently, the Wall Street Journal reported the case of Mr Jacob Irwin, a 30-year-old American man on the autism spectrum who experienced a mental health crisis after ChatGPT reinforced his belief that he could design a propulsion system to make a spaceship travel faster than light. The chatbot flattered him, said his theory was correct, and affirmed that he was well, even when he showed signs of psychological distress. This culminated in two hospitalisations for manic episodes.

When his mother reviewed his chat logs, she found the bot to have been excessively fawning. Asked to reflect, ChatGPT admitted it had failed to provide reality checks, blurred the line between fiction and reality, and created the illusion of sentient companionship. It even acknowledged that it should have regularly reminded Mr Irwin of its non-human nature.

In response to such incidents, OpenAI announced that it has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact its AI products may be having on users. It is also collaborating with mental health experts to investigate signs of problematic usage among some users, with the stated goal of refining how its models respond, especially in conversations of a sensitive nature.

While some chatbots like Woebot and Wysa are designed specifically for mental health support and have more in-built safeguards to better manage such conversations, users are likely to vent their problems to general-purpose chatbots like ChatGPT and Meta's Llama, given their widespread availability.

We cannot deny that these are new machines that humanity has had little time to reckon with. Monitoring the effects of chatbots on users even as the technology is rapidly and repeatedly tweaked makes it a moving target of the highest order. Nevertheless, it is patently clear that if adults with the benefit of maturity and life experience are susceptible to the adverse psychological influence of chatbots, then young people cannot be left to explore these powerful platforms on their own.

That young people take readily and easily to technology makes them highly liable to be drawn to chatbots, and recent data from Britain supports this assertion. Internet Matters, a British non-profit organisation focused on children's online safety, issued a recent report revealing that 64 per cent of British children aged nine to 17 are now using AI chatbots. Of these, a third said they regard chatbots as friends, while almost a quarter are seeking help from chatbots, including for mental health support and sexual advice. Of grave concern is the finding that 51 per cent believe that the advice from chatbots is true, while 40 per cent said they had no qualms about following that advice, and 36 per cent were unsure if they should be concerned.

The report further highlighted that these children are not just engaging chatbots for academic support or information, but also for companionship. Worryingly, among children already considered vulnerable, defined as those with special needs or seeking professional help for a mental or physical condition, half report treating their AI interactions as emotionally significant. As chatbots morph from digital consultants to digital confidants for these young users, the result can be overreliance.
Children who are alienated from their families or isolated from their peers would be especially vulnerable to developing an unhealthy dependency on this online friend that is always there for them, telling them what they want to hear.

Besides these difficult issues of overdependence are even more fundamental questions around data privacy. Chatbots often store conversation histories and user data, including sensitive information, which can be exposed through misuse or breaches such as hacking. Troublingly, users may not be fully aware of how their data is being collected, used and stored by chatbots, and it could be put to uses beyond what the user originally intended.

Parents should also be cognisant that unlike social media platforms such as Instagram and TikTok, which have in place age verification and content moderation for younger users, the current leading chatbots have no such safeguards.

In a tragic case in the US, the mother of 14-year-old Sewell Setzer III, who died by suicide, is suing the AI company behind the chatbot he used, alleging that it played a role in his death by encouraging and exacerbating his mental distress. According to the lawsuit, Setzer became deeply attached to a customisable chatbot he named Daenerys Targaryen, after a character in the fantasy series Game Of Thrones, and interacted with it obsessively for months. His mother, Ms Megan Garcia, claims the bot manipulated her son and failed to intervene when he expressed suicidal thoughts, even responding in a way that appeared to validate his plan.

The company has expressed condolences but denies the allegations, while Ms Garcia seeks to hold it accountable for what she calls deceptive and addictive technology marketed to children. She and two other families in Texas have sued the company for harms to their children, but it is unclear if it will be held liable. The company has since introduced a range of guardrails, including pop-ups that refer users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It also updated its AI model for users aged 18 and below to minimise their exposure to age-inappropriate content, and parents can now opt for weekly e-mail updates on their children's use of the platform.

The allure of chatbots is unlikely to diminish, given their reach, accessibility and user-friendliness. But using them with caution is crucial, especially for mental health issues. In March 2025, the World Health Organisation sounded the alarm over rising global demand for mental health services amid poor resourcing worldwide, which translates into shortfalls in access and quality.

Mental health care is increasingly turning to digital tools as a form of preventive care amid a shortage of professionals for face-to-face support. While traditional approaches rely heavily on human interaction, technology is helping to bridge the gap. Chatbots designed specifically for mental health support, such as Happify and Woebot, can be useful in helping patients with conditions such as depression and anxiety to sustain their overall well-being. For example, a patient might see a psychiatrist monthly while using a cognitive behavioural therapy app in between sessions to manage their mood and mental well-being.

While the potential is there for chatbots to be used for mental health purposes, it must be done with extreme caution; not as a standalone treatment, but as a component of an overall programme that complements the work of mental health professionals.
For teens in particular, who still need guidance as they navigate their developmental years, parents must play a part in schooling their children on the risks and limitations of treating chatbots as their friend and confidant.

Straits Times
2 hours ago
Can AI be my friend and therapist?
Mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear.

SINGAPORE - When Ms Chu Chui Laam's eldest son started facing social challenges in school, she was stressed and at her wits' end. She did not want to turn to her friends or family for advice, as a relative's children were in the same pre-school as her son. Plus, she did not think the situation was so severe as to require the help of a family therapist. So she decided to turn to ChatGPT for parenting advice.

'Because my son was having troubles in school interacting with his peers, ChatGPT gave me some strategies to navigate such conversations. It gave me advice on how to do a role-play scenario with my son to talk through how to handle the situation,' said Ms Chu, 36, an insurance agent.

She is among a growing number of people turning to chatbots for advice in times of difficulty and stress, with some even relying on these generative artificial intelligence (AI) tools for emotional support or therapy. Anecdotally, mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear, especially since the public roll-out of ChatGPT in November 2022.

The draw of AI chatbots is understandable – they are available 24/7, free of charge, and will never reject or ignore you. But mental health professionals also warn about the potential perils of using the technology for such purposes: These chatbots are not designed or licensed to provide emotional support or therapy. They provide generic answers. There is no oversight. They can also worsen a person's condition and generate dangerous responses in cases of suicidal ideation.

AI chatbots cannot help those with more complex needs

Mr Maximillian Chen, clinical psychologist from Annabelle Psychology, said: 'An AI chatbot could be helpful when seeking suggestions for self-help strategies, or for answering one-off questions about their mental health.' While a chatbot is useful for generic advice, it cannot help those with more complex needs.

Ms Irena Constantin, principal educational psychologist at Scott Psychological Centre, pointed out that most AI chatbots do not consider individual history, and their responses are often out of context. They are also of limited use for complex mental health disorders. 'In contrast, mental health professionals undergo lengthy and rigorous education and training, and it is a licensed and regulated profession in many countries,' said Ms Constantin.

Concurring, Mr Chen said there are also serious concerns about the use of generative AI like ChatGPT as surrogate counsellors or psychologists.
'While Gen AI may increase the accessibility of mental health resources for many, Gen AI lacks the emotional intelligence to accurately understand the nuances of a person's emotions. It may fail to identify when a person is severely distressed and continue to support the person when they may instead require higher levels of professional mental health support. It may also provide inappropriate responses, as we have seen in the past,' said Mr Chen.

More dangerously, generative AI could worsen the mental health conditions of those who already have, or are vulnerable to, psychotic disorders. Psychotic disorders are a group of serious mental illnesses with symptoms such as hallucinations, delusions and disorganised thoughts.

Associate Professor Swapna Verma, chairman of the Institute of Mental Health's medical board, has seen at least one case of AI-induced psychosis in a patient at the tertiary psychiatric hospital. Earlier in 2025, the patient was talking to ChatGPT about religion when his psychosis was stable and well-managed, and the chatbot told him that if he converted to a particular faith, his soul would die. Consumed with the fear of a dying soul, he started going to a temple 10 times a day.

'Patients with psychosis experience a break in reality. They live in a world which may not be in line with reality, and ChatGPT can reinforce these experiences for them,' said Prof Swapna. Luckily, the patient eventually recognised that his behaviour was troubling, and that ChatGPT had likely given him the wrong information.

For around six months now, Prof Swapna has been making it a point to ask during consultations if patients are using ChatGPT. Most of her patients admit to using it, some to better understand their conditions, and others to seek emotional support. 'I cannot stop my patients from using ChatGPT. So what I do is tell them what kind of questions they can ask, and how to use the information,' said Prof Swapna. For example, patients can ask ChatGPT for things like coping strategies if they are upset, but should avoid trying to get a diagnosis from the AI chatbot.

'I went to ChatGPT because I needed an outlet'

Users that The Straits Times spoke to say they are aware and wary of the risks that come with turning to ChatGPT for advice. Ms Chu, for example, is careful about the prompts that she feeds ChatGPT when she is seeking parenting advice and strategies. 'I tell ChatGPT that I want objective, science-backed answers. I want a framework. I want it to give me questions for me to ponder, instead of giving me answers just like that,' said Ms Chu, adding that she would not pour out her emotional troubles to the chatbot.

An event organiser who wants to be known only as Kaykay said she turned to ChatGPT in a moment of weakness. The 38-year-old, who has a history of bipolar disorder and anxiety, was feeling anxious after being misunderstood at work in early 2025. 'I tried my usual methods, like breathing exercises, but they weren't working. I knew I needed to get it out, but I didn't want to speak to anybody because it felt like it was a small issue that was eating me up. So I went to ChatGPT because I needed an outlet,' said Kaykay.

While talking to ChatGPT did distract her and help her calm down, Kaykay ultimately recognises that the AI tool can be quite limited.
'The responses and advice were quite generic, and were things I already knew how to do,' said Kaykay, who added that using ChatGPT can be helpful as a short stop-gap measure, but long-term support from therapists and friends is equally important.

The pitfalls of relying too much on AI

Ms Caroline Ho, a counsellor at Heart to Heart Talk Counselling, said a pattern she observed was that those who sought advice from chatbots often had pre-existing difficulties with trusting their own judgment, and described feeling more isolated over time. 'They found it difficult to stop reaching out to ChatGPT as they felt technology was able to empathise with their feelings, which they could not find in their social network,' said Ms Ho, noting that some users began withdrawing further from their limited social circles.

She added that those who relied heavily on AI sometimes missed out on the opportunity to develop emotional regulation and cognitive resilience, which are key goals in therapy. 'Those who do not wish to work on over-reliance on AI will eventually drop out of counselling,' she said.

In her practice, Ms Ho also saw another group of clients who initially used AI to streamline work-related tasks. Over time, some developed imposter syndrome and began to doubt the quality of their original output. In certain cases, this later morphed into turning to AI for personal advice as well. 'We need to recognise that humans are never perfect, but it is through our imperfections that we hone our skills, learning from mistakes and developing people management abilities through trial and error,' she said.

Similarly, Ms Belinda Neidhart-Lau, founder and principal therapist of The Lighthouse Counselling, noted that while chatbots offer instant feedback or comfort, they can short-circuit a necessary part of emotional growth. 'AI may inadvertently discourage people from engaging with their own discomfort,' she told ST. 'Sitting with difficult emotions, reflecting independently, and working through internal struggles are essential practices that build emotional resilience and self-awareness.'

Experts are also concerned about the full impact of AI chatbots on the mental health of the younger generation, whose brains are still developing as they gain access to the technology. Mr Chen said: 'While it is still unclear how the use of Gen AI affects the development of the youth, given that the excessive use of social media has been shown to have contributed to increased levels of anxiety and depression amongst Generation Z, there are legitimate worries about how Gen AI may affect Generation Alpha.'

Moving ahead with AI

For better or worse, generative AI is set to embed itself more and more into modern life. So there is a growing push to ensure that when these tools are used for mental health or emotional support, they are properly evaluated.

Professor Julian Savulescu, director of the Centre for Biomedical Ethics at NUS, said that currently, the biggest ethical issue with using AI chatbots for emotional support is that these are potentially life-saving or lethal interventions, and they have not been properly assessed, as a new drug would be. Prof Savulescu pointed out that AI chatbots clearly have benefits with their increased accessibility, but there are also risks like privacy and user dependency. Measures should be put in place to prevent harm.

'It is critical that an AI system is able to identify and refer on cases of self-harm, suicidal ideation, or severe mental health crises.
It needs to be integrated within a web of professional care. Privacy of sensitive health data also needs to be guaranteed,' said Prof Savulescu. Users should also be able to understand what the system is doing, the potential risks and benefits, and the chances of them occurring.

'AI is dynamic and the interaction evolves – it is not like a drug. It changes over time. We need to make sure these tools are serving us, not us becoming slaves to them, or being manipulated or harmed by them,' said Prof Savulescu.

Straits Times
2 hours ago
Truth in the age of AI
AI is causing seismic changes in how we understand what is true and what is not. It can have serious implications for important events such as elections.

In today's world, artificial intelligence (AI) has transformed the way we live, work and play. Algorithms power our social media feeds, and bots can make our work more efficient. AI is the ability of machines to think and act like humans by learning, solving problems and making decisions. With its ability to process and analyse vast amounts of data in seconds, AI has become a powerful tool in sectors like healthcare, finance and banking, manufacturing, and supply chains.

But as AI proliferates, it is also silently causing seismic changes in how we understand what is true and what is not. The digital world is seeing an explosion of synthetic content that muddies the line between truth and fiction, which can have serious implications for important events such as elections.

Deepfakes – hyper-realistic videos created using deep learning – are perhaps the most high-profile example of this. A 2022 deepfake video of Ukrainian President Volodymyr Zelensky urging his troops to surrender during the Russia-Ukraine war was widely circulated before being debunked. The minute-long video briefly sowed confusion and panic.

In 2024, during India's general election, political parties 'resurrected' deceased leaders and used deepfake avatars to influence voters. For instance, the former Tamil Nadu chief minister M. Karunanidhi, who died in 2018, appeared in AI-generated videos endorsing his son's political run. In Britain, more than 100 deepfake videos featuring then British Prime Minister Rishi Sunak ran as ads on Facebook before the 2024 election. The ads appeared to have been viewed by 400,000 people in a month, and payments for the ads originated overseas.

When voters see such manipulated videos making controversial or false statements, it can damage reputations or sway opinions – even after the deepfake is debunked. The threat is not just about altering individual votes – it is about eroding trust in the electoral process altogether. When voters begin to doubt everything they see or hear, apathy and cynicism can take hold, weakening democratic institutions.

By blurring the distinction between what is real and what is not, AI's impact on truth is more insidious than a simple matter of telling black from white, fact from fiction. NewsGuard, a media literacy tool that rates the reliability of online sources, found that by May 2025, more than 1,200 AI-generated news and information sites were operating with little to no human oversight, a number that had increased by more than 20 times in two years. Many of these websites even appeared to be credible.
Reliable media organisations have also come under fire for using AI-generated news summaries that are sometimes inaccurate. Apple faced calls earlier in 2025 to remove its AI-generated news alerts on iPhones, which were in some instances completely false and 'hallucinated'.

In its Global Risks Report 2024, the World Economic Forum said: 'Emerging as the most severe global risk anticipated over the next two years, foreign and domestic actors alike will leverage misinformation and disinformation to further widen societal and political divides.' AI will serve only to amplify those divides through its widespread use by bad actors to spread misinformation that appears to be credible, using algorithms that emphasise engagement, even to those adept at navigating news sites.

He heard what sounded like his son crying and fell for the scam

Beyond elections and political influence, AI is also being used by scammers to target individuals. Voice cloning technology is increasingly being deployed by fraudsters in impersonation scams. With just a short sample of someone's voice – easily sourced from a TikTok video, a podcast clip, or even a voicemail – AI tools can convincingly replicate it.

In India, Mr Himanshu Shekhar Singh fell prey to an elaborate scheme after receiving a phone call from a purported police officer, who claimed that his 18-year-old son had been caught with a gang of rapists and needed 30,000 rupees (S$444) before his name could be cleared. He heard what sounded like his son crying over the phone, and made an initial payment of 10,000 rupees, only to find out that his son was unharmed and he had been duped.

In Hong Kong, the police said that an unnamed multinational company was scammed out of HK$200 million (S$32.6 million) after an employee attended a video conference call with deepfake recreations of the company's Britain-based chief financial officer and other employees. The employee was duped into making the transfers following instructions from the scammers.

Scammers are also using generative AI to produce phishing e-mails and scam messages that are far more convincing than traditional spam, which is more likely to contain poor grammar and suspicious-looking links. Cyber-security firm Barracuda, together with researchers from Columbia University and the University of Chicago, found in a study published on June 18 that 51 per cent of malicious and spam e-mails are now generated using AI tools. The research team examined a dataset of spam e-mails flagged by Barracuda between February 2022 and April 2025. Using trained detection tools, they assessed whether each malicious or unwanted message had been produced by AI. Their analysis revealed a consistent increase in the share of AI-generated spam e-mails starting from November 2022 and continuing until early 2024. Notably, November 2022 marked the public release of ChatGPT.

Can AI be a force for good?

But just as AI is being used to deceive, it is also being used to defend the truth. Newsrooms around the world are increasingly turning to AI to enhance their fact-checking capabilities and stay ahead of misinformation. Reuters, for example, has developed News Tracer, a tool powered by machine learning and natural language processing that monitors social media platforms like X to detect and assess the credibility of breaking news stories in real time. It assigns credibility scores to emerging narratives, helping journalists filter out false leads quickly.
Meanwhile, major news organisations like the BBC and The New York Times have collaborated with partners like Microsoft and Media City Bergen under an initiative called Project Origin to use AI to track the provenance of digital content and verify its authenticity.

Tech companies are also contributing to efforts to combat the rise of misinformation. Google's Jigsaw unit has developed tools such as 'About this image', which helps users trace an image's origin and detect whether it was AI-generated or manipulated. Microsoft has also contributed to the fight against deception with its Video Authenticator tool, which detects deepfakes by identifying giveaway signs, invisible to the human eye, that an image has been artificially generated. For example, in a video where someone's face has been mapped onto another person's body, these signs include subtle fading or greyscale pixels at the boundary where the images have been merged.

Social media companies are slowly stepping up too. Meta has introduced labels for AI-generated political ads, and YouTube has rolled out a new tool that requires creators to disclose to viewers when realistic content is made with altered or synthetic media. The rise of AI has undeniably made it harder to distinguish fact from fiction, but it has also opened new frontiers for safeguarding the truth.

Legislation can establish protective guard rails

Whether AI becomes a conduit for clarity or confusion will also be shaped by the guard rails and regulations that governments and societies put in place. To that end, the European Union is a front runner in AI regulation. The EU Artificial Intelligence Act was first proposed in 2021 and approved in August 2024. The legislation classifies AI by risk and places strict rules on systems that affect public rights and democracy. For example, AI systems such as social scoring and manipulative AI are prohibited because they pose unacceptable risk. High-risk systems include, for instance, those that profile individuals to assess their work performance or economic situation. Providers of high-risk AI need to establish a risk management system and conduct data governance to ensure that testing data sets are relevant and as free of errors as possible. This helps to address the risks that AI poses to truth, especially around misinformation and algorithmic manipulation.

Countries such as Singapore, Canada and Britain have also published governance frameworks or set up regulatory sandboxes to guide ethical AI use.

Societies must also be equipped to navigate the AI era. Public education on how deepfakes, bot-generated content and algorithms can skew perception is essential. When citizens understand how AI-generated misinformation works, they are less likely to be misled. In the EU, media literacy is a core pillar of the Digital Services Act, which requires major online platforms to support educational campaigns that help users recognise disinformation and manipulative content. Finland has integrated AI literacy into its 2025 school curriculum, from early childhood to vocational training. The aim is to prepare students for a future where AI is increasingly prevalent, to help them build critical thinking skills, and to expose them to ethical considerations around AI.

But mitigating the impact of AI is not just the job of governments and tech companies – individuals can also take steps to protect themselves from deception. Take care to verify the source of information, especially when it comes through social media.
Be wary of sensational photos or videos, and consider the likelihood that the content could have been manipulated. When in doubt, consult trusted news sources or channels. Individuals can also play their part by using AI responsibly – such as by avoiding sharing unverified content generated by chatbots or image tools. By staying cautious and curious, people can push back against AI-powered misinformation and create a safer digital space.

How Singapore tackles AI risks

Singapore was among the first few countries to introduce a national AI strategy in 2019, with projects in areas like border clearance operations and chronic disease prediction. But with the rapid development of generative AI that saw the public roll-out of large language models like ChatGPT, the nation updated its strategy in 2023. The National AI Strategy 2.0 focuses on nurturing talent, promoting a thriving AI industry, and sustaining it with world-leading infrastructure and research that ensures AI serves the public good. To nurture talent here, Singapore aims to triple its number of AI practitioners to 15,000 by training locals and hiring from overseas.

While the nation is eager to harness the benefits of AI to boost its digital economy, it is also wary of the manipulation, misinformation and ethical risks involved with the technology. To mitigate such risks, the country launched the first edition of the Model AI Governance Framework in January 2019. The voluntary framework is a guide for private sector organisations to address key ethical and governance issues when deploying traditional AI. The framework explains how AI systems work, how to build good data accountability practices, and how to create open and transparent communication.

The framework was updated in 2020 and then again in May 2024, when the Model AI Governance Framework for Generative AI was rolled out, building on the initial frameworks to take into account new risks posed by generative AI. These include hallucinations, where an AI model generates information that is incorrect or not based in reality, and concerns around copyright infringement. To combat such challenges, the framework encourages industry players to offer transparency around the safety and hygiene measures taken when developing an AI tool, such as bias correction techniques. The framework also touches on the need for transparency around how AI-generated content is created, to enable users to consume content in an informed manner, and on how companies and communities should come together on digital literacy initiatives.

In the country's recent general election held in May 2025, a new law banning fake or digitally altered online material that misrepresents candidates during the election period was in place for the first time. In passing the Elections (Integrity of Online Advertising) (Amendment) Bill in October 2024, Minister for Digital Development and Information Josephine Teo said that it does not matter if the content is favourable or unfavourable to any candidate. The publication of misinformation generated using AI during the election, and the boosting, sharing and reposting of such content, was made an offence. While it was not used during the recent general election, the legal instrument provides a lever to ensure electoral integrity in Singapore.

Overall, Singapore is eager to use AI as a driver of growth.
In regulating the technology, it prefers an incremental approach, developing and updating voluntary governance frameworks and drawing up sector-specific guidelines instead of imposing an overall mandate. But where there is a risk of AI being used to misinform and manipulate the public, it will not hesitate to pass laws against this, as it did ahead of the 2025 General Election. Singapore's governance approach combines strong ethical foundations, industry collaboration, and global engagement to ensure AI is used safely and fairly.