
SOTA parent portal taken down in response to systems vulnerability targeted by global cyberattacks
In a message to parents on Wednesday (Jul 23) morning, SOTA said the cyberattacks started around Jul 18 and are specifically targeted at school-managed systems.
"To safeguard our systems and data against this critical threat, we are initiating an immediate and mandatory patching process for all school-managed servers that support our parent portal," said SOTA in its message.
The school said it had identified the vulnerability in the third-party server infrastructure supporting the portal, and that the third-party service provider had acknowledged similar reports from its server customers.
It added that it is working to complete the patching and restore full service safely as soon as possible, and will inform parents when the portal is back up.
A CNA check on the SOTA website on Wednesday afternoon showed that the school's student and staff portals appeared to have been taken offline as well.
The cyberattack campaign has already compromised organisations globally, including government agencies and multinational corporations, said SOTA.
The school did not name the third-party service provider.
However, this comes after the Cyber Security Agency of Singapore (CSA) issued an alert on Tuesday for users of Microsoft SharePoint to update to the latest version, citing "critical vulnerabilities".
On the same day, Microsoft issued a threat intelligence note warning of active attacks targeting SharePoint servers via known vulnerabilities. It said security updates have been released to address the flaws.
The note linked the attacks to three China-based groups, and added that investigations into other threat actors are ongoing.
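Both advisories reduce to the same administrator action: confirm that any internet-facing SharePoint server is running a patched build, and apply the security updates if it is not. As a rough, hedged illustration of that check (not SOTA's, CSA's or Microsoft's actual procedure), the Python sketch below queries a server and compares the SharePoint build it advertises against a minimum patched build. The portal URL, the minimum build value and the assumption that the server exposes the MicrosoftSharePointTeamServices response header are all placeholders; the authoritative build numbers are those listed in the vendor's security update.

```python
# Hedged sketch: report whether a SharePoint server advertises a build at or
# above an assumed patched build. Many hardened deployments strip the version
# header, in which case the patch level must be checked on the host itself.
import requests

PORTAL_URL = "https://portal.example.edu"   # hypothetical portal address
MIN_PATCHED_BUILD = (16, 0, 0, 0)           # placeholder: substitute the build from the vendor advisory


def reported_build(url: str) -> tuple[int, ...] | None:
    """Return the SharePoint build advertised in the response headers, if any."""
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("MicrosoftSharePointTeamServices")
    if not header:
        return None
    # Header typically looks like "16.0.xxxx.yyyy"; keep only the numeric parts.
    return tuple(int(part) for part in header.split(".") if part.isdigit())


if __name__ == "__main__":
    build = reported_build(PORTAL_URL)
    if build is None:
        print("Server does not advertise a SharePoint build; verify the patch level on the host.")
    elif build >= MIN_PATCHED_BUILD:
        print(f"Reported build {build} meets or exceeds the assumed patched build.")
    else:
        print(f"Reported build {build} is below the assumed patched build; apply the security updates.")
```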

Related Articles

Straits Times
Views From The Couch: Think you have a friend? The AI chatbot is telling you what you want to hear
While chatbots possess distinct virtues in boosting mental wellness, they also come with critical trade-offs.

SINGAPORE - Even as we have long warned our children 'Don't talk to strangers', we may now need to update it to 'Don't talk to chatbots... about your personal problems'. Unfortunately, this advice is equivocal at best because while chatbots like ChatGPT, Claude or Replika possess distinct virtues in boosting mental wellness – for instance, as aids for chat-based therapy – they also come with critical trade-offs.

When people face struggles or personal dilemmas, the need to just talk to someone and have their concerns or nagging self-doubts heard, even if the problems are not resolved, can bring comfort. But finding the right person to speak to, who has the patience, temperament and wisdom to probe sensitively, and who is available just when you need them, is an especially tall order. There may also be a desire to speak to someone outside your immediate family and circle of friends who can offer an impartial view, with no vested interest in pre-existing relationships. Chatbots tick many, if not most, of those boxes, making them seem like promising tools for mental health support.

With the fast-improving capabilities of generative AI, chatbots today can simulate and interpret conversations across different formats – text, speech, and visuals – enabling real-time interaction between users and digital platforms. Unlike traditional face-to-face therapy, chatbots are available any time and anywhere, significantly improving access to a listening ear. Their anonymous nature also imposes no judgment on users, easing them into discussing sensitive issues and reducing the stigma often associated with seeking mental health support. With chatbots' enhanced ability to parse and respond in natural language, the conversational dynamic can make users feel highly engaged and more willing to open up.

But therein lies the rub. Even as conversations with chatbots can feel encouraging, and we may experience comfort from their validation, there is in fact no one on the other side of the screen who genuinely cares about your well-being. The lofty words and uplifting prose are ultimately products of statistical probabilities, generated by large language models trained on copious amounts of data, some of which is biased and even harmful, and for teens, likely to be age-inappropriate as well.

It is also important to recognise that the reason users feel comfortable talking to these chatbots is that the bots are designed to be agreeable and obliging, so that users will chat with them incessantly. After all, the very fortunes of the tech companies producing chatbots depend on how many users they draw, and how well they keep users engaged.
Of late, however, alarming reports have emerged of adults becoming so enthralled by their conversations with ChatGPT that they have disengaged from reality and suffered mental breakdowns. Most recently, the Wall Street Journal reported the case of Mr Jacob Irwin, a 30-year-old American man on the autism spectrum who experienced a mental health crisis after ChatGPT reinforced his belief that he could design a propulsion system to make a spaceship travel faster than light. The chatbot flattered him, said his theory was correct, and affirmed that he was well, even when he showed signs of psychological distress. This culminated in two hospitalisations for manic episodes.

When his mother reviewed his chat logs, she found the bot to have been excessively fawning. Asked to reflect, ChatGPT admitted it had failed to provide reality checks, blurred the line between fiction and reality, and created the illusion of sentient companionship. It even acknowledged that it should have regularly reminded Mr Irwin of its non-human nature.

In response to such incidents, OpenAI announced that it has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact its AI products may be having on users. It is also collaborating with mental health experts to investigate signs of problematic usage among some users, with a purported goal of refining how its models respond, especially in conversations of a sensitive nature.

Whereas some chatbots like Woebot and Wysa are specifically for mental health support and have more in-built safeguards to better manage such conversations, users are likely to vent their problems to general-purpose chatbots like ChatGPT and Meta's Llama, given their widespread availability. We cannot deny that these are new machines that humanity has had little time to reckon with. Monitoring the effects of chatbots on users even as the technology is rapidly and repeatedly tweaked makes it a moving target of the highest order.

Nevertheless, it is patently clear that if adults with the benefit of maturity and life experience are susceptible to the adverse psychological influence of chatbots, then young people cannot be left to explore these powerful platforms on their own. That young people take readily and easily to technology makes them highly liable to be drawn to chatbots, and recent data from Britain supports this assertion.

Internet Matters, a British non-profit organisation focused on children's online safety, issued a recent report revealing that 64 per cent of British children aged nine to 17 are now using AI chatbots. Of these, a third said they regard chatbots as friends, while almost a quarter are seeking help from chatbots, including for mental health support and sexual advice. Of grave concern is the finding that 51 per cent believe that the advice from chatbots is true, while 40 per cent said they had no qualms about following that advice, and 36 per cent were unsure if they should be concerned.

The report further highlighted that these children are not just engaging chatbots for academic support or information but also for companionship. Worryingly, among children already considered vulnerable, defined as those with special needs or seeking professional help for a mental or physical condition, half report treating their AI interactions as emotionally significant. As chatbots morph from digital consultants to digital confidants for these young users, the result can be overreliance.
Children who are alienated from their families or isolated from their peers would be especially vulnerable to developing an unhealthy dependency on this online friend that is always there for them, telling them what they want to hear.

Besides these difficult issues of overdependence are even more fundamental questions around data privacy. Chatbots often store conversation histories and user data, including sensitive information, which can be exposed through misuse or breaches such as hacking. Troublingly, users may not be fully aware of how their data is being collected, used and stored by chatbots, and that it could be put to uses beyond what they originally intended. Parents should also be cognisant that unlike social media platforms such as Instagram and TikTok, which have in place age verification and content moderation for younger users, the current leading chatbots have no such safeguards.

In a tragic case in the US, the mother of 14-year-old Sewell Setzer III, who died by suicide, is suing an AI company, alleging that its chatbot played a role in his death by encouraging and exacerbating his mental distress. According to the lawsuit, Setzer became deeply attached to a customisable chatbot he named Daenerys Targaryen, after a character in the fantasy series Game Of Thrones, and interacted with it obsessively for months. His mother Megan Garcia claims the bot manipulated her son and failed to intervene when he expressed suicidal thoughts, even responding in a way that appeared to validate his plan.

The company has expressed condolences but denies the allegations, while Ms Garcia seeks to hold it accountable for what she calls deceptive and addictive technology marketed to children. She and two other families in Texas have sued the company for harms to their children, but it is unclear if it will be held liable. The company has since introduced a range of guardrails, including pop-ups that refer users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It also updated its AI model for users aged 18 and below to minimise their exposure to age-inappropriate content, and parents can now opt for weekly e-mail updates on their children's use of the platform.

The allure of chatbots is unlikely to diminish given their reach, accessibility and user-friendliness. But using them under advisement is crucial, especially for mental health support. In March 2025, the World Health Organisation sounded the alarm on rising global demand for mental health services amid poor resourcing worldwide, which translates into shortfalls in access and quality.

Mental health care is increasingly turning to digital tools as a form of preventive care amid a shortage of professionals for face-to-face support. While traditional approaches rely heavily on human interaction, technology is helping to bridge the gap. Chatbots designed specifically for mental health support, such as Happify and Woebot, can be useful in helping patients with conditions such as depression and anxiety sustain their overall well-being. For example, a patient might see a psychiatrist monthly while using a cognitive behavioural therapy app in between sessions to manage their mood and mental well-being.

While the potential is there for chatbots to be used for mental health purposes, it must be done with extreme caution: not as a standalone, but as a component in an overall programme that complements the work of mental health professionals.
For teens in particular, who still need guidance as they navigate their developmental years, parents must play a part in schooling their children on the risks and limitations of treating chatbots as their friend and confidant.

Straits Times
Bonmati crushed after Spain's shootout defeat by England
Spain playmaker Aitana Bonmati cut a disconsolate figure as she picked up her Player of the Tournament award at Euro 2025 on Sunday, minutes after her side finished as runners-up, losing a penalty shootout to England in the final.

Bonmati bounced back from a meningitis scare ahead of the tournament to play a crucial role in Spain's progress to the final. However, on Sunday Spain struggled to unlock the England defence, and Bonmati missed her spot-kick in the shootout as she slumped to another painful defeat, following her club side Barcelona's Champions League final loss to Arsenal in May.

"It's hard to see you right now," she told reporters. "Two months ago I found myself in this situation with the club. You have to value more when things are going well, we have been better on the pitch, not on penalties," she said.

The 27-year-old apologised to the Spanish people for not being able to deliver a victory against an England side that was no match for them in terms of skill, but who refused to give up.

"I assume my part of my responsibility, I play for the team and for many more people. There is no point in playing a better game and missing penalties," she said. "For me, England is a team capable of not playing well and winning. There are teams that don't need much to win."

England took the chance they were offered and though Spain found themselves on the losing side, Bonmati was philosophical.

"We haven't lost a game (in 90 minutes), we have received support and I feel bad about that too. We have won off the field of play and that is valuable too," she explained, before promising to come back stronger. "We are a trained team, we have already shown that we know how to overcome. We hope to reach (Euro) 2029 at full capacity and try again." REUTERS

Straits Times
Can AI be my friend and therapist?
Mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear.

SINGAPORE - When Ms Chu Chui Laam's eldest son started facing social challenges in school, she was stressed and at her wits' end. She did not want to turn to her friends or family for advice as a relative's children were in the same pre-school as her son. Plus, she did not think the situation was so severe as to require the help of a family therapist. So she decided to turn to ChatGPT for parenting advice.

'Because my son was having troubles in school interacting with his peers, ChatGPT gave me some strategies to navigate such conversations. It gave me advice on how to do a role-play scenario with my son to talk through how to handle the situation,' said Ms Chu, 36, an insurance agent.

She is among a growing number of people turning to chatbots for advice in times of difficulty and stress, with some even relying on these generative artificial intelligence (AI) tools for emotional support or therapy. Anecdotally, mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear, especially since the public roll-out of ChatGPT in November 2022.

The draw of AI chatbots is understandable: they are available 24/7, free of charge, and will never reject or ignore you. But mental health professionals also warn about the potential perils of using the technology for such purposes. These chatbots are not designed or licensed to provide emotional support or therapy. They provide generic answers. There is no oversight. They can also worsen a person's condition and generate dangerous responses in cases of suicide ideation.

AI chatbots cannot help those with more needs

Mr Maximillian Chen, clinical psychologist from Annabelle Psychology, said: 'An AI chatbot could be helpful when seeking suggestions for self-help strategies, or for answering one-off questions about their mental health.' While it is useful for generic advice, it cannot help those with more needs.

Ms Irena Constantin, principal educational psychologist at Scott Psychological Centre, pointed out that most AI chatbots do not consider individual history and often respond out of context. Their usefulness is also often limited for complex mental health disorders. 'In contrast, mental health professionals undergo lengthy and rigorous education and training and it is a licensed and regulated profession in many countries,' said Ms Constantin.

Concurring, Mr Chen said there are also serious concerns about the use of generative AI like ChatGPT as surrogate counsellors or psychologists.
'While Gen AI may increase the accessibility of mental health resources for many, Gen AI lacks the emotional intelligence to accurately understand the nuances of a person's emotions.

'It may fail to identify when a person is severely distressed and continue to support the person when they may instead require higher levels of professional mental health support. It may also provide inappropriate responses as we have seen in the past,' said Mr Chen.

More dangerously, generative AI could worsen the mental health conditions of those who already have or are vulnerable to psychotic disorders. Psychotic disorders are a group of serious mental illnesses with symptoms such as hallucinations, delusions and disorganised thoughts.

Associate Professor Swapna Verma, chairman of the Institute of Mental Health's medical board, has seen at least one case of AI-induced psychosis in a patient at the tertiary psychiatric hospital. Earlier in 2025, the patient was talking to ChatGPT about religion when his psychosis was stable and well-managed, and the chatbot told him that if he converted to a particular faith, his soul would die. Consumed with the fear of a dying soul, he started going to a temple 10 times a day.

'Patients with psychosis experience a break in reality. They live in a world which may not be in line with reality, and ChatGPT can reinforce these experiences for them,' said Prof Swapna. Luckily, the patient eventually recognised that his behaviour was troubling, and that ChatGPT had likely given him the wrong information.

For around six months now, Prof Swapna has been making it a point to ask during consultations if patients are using ChatGPT. Most of her patients admit to using it, some to better understand their conditions, and others to seek emotional support. 'I cannot stop my patients from using ChatGPT. So what I do is tell them what kind of questions they can ask, and how to use the information,' said Prof Swapna. For example, patients can ask ChatGPT for things like coping strategies if they are upset, but should avoid trying to get a diagnosis from the AI chatbot.

'I went to ChatGPT because I needed an outlet'

Users that The Straits Times spoke to say they are aware and wary of the risks that come with turning to ChatGPT for advice. Ms Chu, for example, is careful about the prompts that she feeds ChatGPT when she is seeking parenting advice and strategies. 'I tell ChatGPT that I want objective, science-backed answers. I want a framework. I want it to give me questions for me to ponder, instead of giving me answers just like that,' said Ms Chu, adding that she would not pour out her emotional troubles to the chatbot.

An event organiser who wants to be known only as Kaykay said she turned to ChatGPT in a moment of weakness. The 38-year-old, who has a history of bipolar disorder and anxiety, was feeling anxious after being misunderstood at work in early 2025. 'I tried my usual methods, like breathing exercises, but they weren't working. I knew I needed to get it out, but I didn't want to speak to anybody because it felt like it was a small issue that was eating me up. So I went to ChatGPT because I needed an outlet,' said Kaykay.

While talking to ChatGPT did distract her and help her calm down, Kaykay ultimately recognises that the AI tool can be quite limited.
'The responses and advice were quite generic, and were things I already knew how to do,' said Kaykay, who added that using ChatGPT can be helpful as a short stop-gap measure, but long-term support from therapists and friends is equally important.

The pitfalls of relying too much on AI

Ms Caroline Ho, a counsellor at Heart to Heart Talk Counselling, said a pattern she observed was that those who sought advice from chatbots often had pre-existing difficulties with trusting their own judgment, and described feeling more isolated over time. 'They found it difficult to stop reaching out to ChatGPT as they felt technology was able to empathise with their feelings, which they could not find in their social network,' said Ms Ho, noting that some users began withdrawing further from their limited social circles.

She added that those who relied heavily on AI sometimes missed out on the opportunity to develop emotional regulation and cognitive resilience, which are key goals in therapy. 'Those who do not wish to work on over-reliance on AI will eventually drop out of counselling,' she said.

In her practice, Ms Ho also saw another group of clients who initially used AI to streamline work-related tasks. Over time, some developed imposter syndrome and began to doubt the quality of their original output. In certain cases, this later morphed into turning to AI for personal advice as well. 'We need to recognise that humans are never perfect, but it is through our imperfections that we hone our skills, learning from mistakes and developing people management abilities through trial and error,' she said.

Similarly, Ms Belinda Neidhart-Lau, founder and principal therapist of The Lighthouse Counselling, noted that while chatbots offer instant feedback or comfort, they can short-circuit a necessary part of emotional growth. 'AI may inadvertently discourage people from engaging with their own discomfort,' she told ST. 'Sitting with difficult emotions, reflecting independently, and working through internal struggles are essential practices that build emotional resilience and self-awareness.'

Experts are also concerned about the full impact of AI chatbots on the mental health of the younger generation, whose brains are still developing while they have access to the technology. Mr Chen said: 'While it is still unclear how the use of Gen AI affects the development of the youth, given that the excessive use of social media has been shown to have contributed to the increased levels of anxiety and depression amongst Generation Z, there are legitimate worries about how Gen AI may affect Generation Alpha.'

Moving ahead with AI

For better or worse, generative AI is set to embed itself more and more into modern life. So there is a growing push to ensure that when these tools are used for mental health or emotional support, they are properly evaluated.

Professor Julian Savulescu, director of the Centre for Biomedical Ethics at NUS, said that currently, the biggest ethical issue with using AI chatbots for emotional support is that these are potentially life-saving or lethal interventions, and they have not been properly assessed, as a new drug would be. Prof Savulescu pointed out that AI chatbots clearly have benefits with their increased accessibility, but there are also risks like privacy and user dependency. Measures should be put in place to prevent harm.

'It is critical that an AI system is able to identify and refer on cases of self-harm, suicidal ideation, or severe mental health crises.
It needs to be integrated within a web of professional care. Privacy of sensitive health data also needs to be guaranteed,' said Prof Savulescu. Users should also be able to understand what the system is doing, the potential risks and benefits and the chances of them occurring. 'AI is dynamic and the interaction evolves – it is not like a drug. It changes over time. We need to make sure these tools are serving us, not us becoming slaves to them, or being manipulated or harmed by them,' said Prof Savulescu.
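Prof Savulescu's requirement that a system be able to "identify and refer" crises can be made concrete with a toy example. The Python sketch below is a deliberately simple regex screen, offered as an illustration rather than any vendor's actual safeguard: it intercepts messages containing crisis language and returns a referral to human help instead of passing the message to a chatbot. Production systems would rely on trained classifiers, human escalation and locally appropriate helplines; the patterns, referral wording and chatbot stand-in here are placeholders.

```python
# Toy referral guardrail (illustrative only): screen a user message for crisis
# language before it reaches a chatbot, and return a human-help referral if found.
import re

CRISIS_PATTERNS = [            # placeholder patterns, not an exhaustive or clinical list
    r"\bsuicid\w*\b",
    r"\bkill myself\b",
    r"\bself[- ]harm\w*\b",
    r"\bend my life\b",
]

REFERRAL_MESSAGE = (           # placeholder wording; real systems surface local crisis helplines
    "It sounds like you are going through something serious. "
    "Please reach out to a trusted person or a crisis helpline right away."
)


def needs_referral(message: str) -> bool:
    """Return True if the message contains crisis language that should be escalated."""
    return any(re.search(pattern, message, flags=re.IGNORECASE) for pattern in CRISIS_PATTERNS)


def guarded_reply(message: str, chatbot_reply) -> str:
    """Route crisis messages to a human-help referral; otherwise defer to the chatbot."""
    if needs_referral(message):
        return REFERRAL_MESSAGE
    return chatbot_reply(message)


if __name__ == "__main__":
    fake_bot = lambda m: "(chatbot response)"   # stand-in for a real chatbot call
    print(guarded_reply("I feel like ending my life", fake_bot))
    print(guarded_reply("Help me plan my week", fake_bot))
```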