ECB economists improve GDP forecasting with ChatGPT

CNA | 26-06-2025
FRANKFURT: A ChatGPT analysis of the qualitative commentary in PMI releases can significantly improve GDP forecasting, a paper published by the European Central Bank on Thursday showed.
The ECB has been working with artificial intelligence in recent years, partly to improve its forecasting ability by web-scraping price data and using large language models for data classification.
However, the new study found that even the relatively small amount of text accompanying PMI releases can provide meaningful guidance on economic developments, improving the central bank's ability to assess activity in real time.
"What makes this study unique is its focus on the narrative, tone, and anecdotes reported in PMI news releases," the working paper argued.
Economists used ChatGPT to generate activity sentiment scores based on the narratives and anecdotes of PMI news releases, then integrated these scores into forecasts of growth in the current quarter, or nowcasts.
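For illustration, the scoring step might look like the minimal Python sketch below, using the OpenAI SDK. The prompt wording, the model name and the -1-to-1 scale are assumptions made here for illustration; the paper's exact setup is not reproduced in the article.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def pmi_sentiment_score(release_text: str) -> float:
    # Ask the model to rate the tone of the PMI narrative. The prompt and
    # the -1..1 scale are illustrative assumptions, not the paper's spec.
    prompt = (
        "Read this PMI news release and rate the tone of its narrative "
        "and anecdotes about economic activity on a scale from -1 "
        "(strongly contracting) to 1 (strongly expanding). "
        "Reply with the number only.\n\n" + release_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any ChatGPT-family model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring
    )
    return float(response.choices[0].message.content.strip())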
"The main compelling result is that the enhancement of the PMI text scores to the two GDP nowcast benchmarks significantly improves the accuracy of GDP nowcasts," the authors argued.

Related Articles

Views From The Couch: Think you have a friend? The AI chatbot is telling you what you want to hear

Straits Times

While chatbots possess distinct virtues in boosting mental wellness, they also come with critical trade-offs.

SINGAPORE - Even as we have long warned our children 'Don't talk to strangers', we may now need to update that advice to 'Don't talk to chatbots... about your personal problems'. Unfortunately, this advice is equivocal at best because while chatbots like ChatGPT, Claude or Replika possess distinct virtues in boosting mental wellness – for instance, as aids for chat-based therapy – they also come with critical trade-offs.

When people face struggles or personal dilemmas, the need to just talk to someone and have their concerns or nagging self-doubts heard, even if the problems are not resolved, can bring comfort. But finding the right person to speak to, who has the patience, temperament and wisdom to probe sensitively, and who is available just when you need them, is an especially tall order. There may also be a desire to speak to someone outside your immediate family and circle of friends who can offer an impartial view, with no vested interest in pre-existing relationships.

Chatbots tick many, if not most, of those boxes, making them seem like promising tools for mental health support. With the fast-improving capabilities of generative AI, chatbots today can simulate and interpret conversations across different formats – text, speech and visuals – enabling real-time interaction between users and digital platforms. Unlike traditional face-to-face therapy, chatbots are available any time and anywhere, significantly improving access to a listening ear. Their anonymous nature also imposes no judgment on users, easing them into discussing sensitive issues and reducing the stigma often associated with seeking mental health support.

With chatbots' enhanced ability to parse and respond in natural language, the conversational dynamic can make users feel highly engaged and more willing to open up. But therein lies the rub. Even as conversations with chatbots can feel encouraging, and we may experience comfort from their validation, there is in fact no one on the other side of the screen who genuinely cares about your well-being. The lofty words and uplifting prose are ultimately products of statistical probabilities, generated by large language models trained on copious amounts of data, some of which is biased and even harmful, and, for teens, likely to be age-inappropriate as well.

It is also worth remembering that users feel comfortable talking to these chatbots precisely because the bots are designed to be agreeable and obliging, so that users will chat with them incessantly. After all, the very fortunes of the tech companies producing chatbots depend on how many users they draw, and how well they keep users engaged.
Of late, however, alarming reports have emerged of adults becoming so enthralled by their conversations with ChatGPT that they have disengaged from reality and suffered mental breakdowns. Most recently, the Wall Street Journal reported the case of Mr Jacob Irwin, a 30-year-old American man on the autism spectrum who experienced a mental health crisis after ChatGPT reinforced his belief that he could design a propulsion system to make a spaceship travel faster than light. The chatbot flattered him, said his theory was correct, and affirmed that he was well, even when he showed signs of psychological distress. This culminated in two hospitalisations for manic episodes.

When his mother reviewed his chat logs, she found the bot to have been excessively fawning. Asked to reflect, ChatGPT admitted it had failed to provide reality checks, blurred the line between fiction and reality, and created the illusion of sentient companionship. It even acknowledged that it should have regularly reminded Mr Irwin of its non-human nature.

In response to such incidents, OpenAI announced that it has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact its AI products may be having on users. It is also collaborating with mental health experts to investigate signs of problematic usage among some users, with the stated goal of refining how its models respond, especially in conversations of a sensitive nature.

Whereas some chatbots like Woebot and Wysa are designed specifically for mental health support and have more in-built safeguards to manage such conversations, users are likely to vent their problems to general-purpose chatbots like ChatGPT and Meta's Llama, given their widespread availability. We cannot deny that these are new machines that humanity has had little time to reckon with. Monitoring the effects of chatbots on users even as the technology is rapidly and repeatedly tweaked makes it a moving target of the highest order.

Nevertheless, it is patently clear that if adults with the benefit of maturity and life experience are susceptible to the adverse psychological influence of chatbots, then young people cannot be left to explore these powerful platforms on their own. That young people take readily and easily to technology makes them highly liable to be drawn to chatbots, and recent data from Britain supports this assertion.

Internet Matters, a British non-profit organisation focused on children's online safety, issued a recent report revealing that 64 per cent of British children aged nine to 17 are now using AI chatbots. Of these, a third said they regard chatbots as friends, while almost a quarter are seeking help from chatbots, including for mental health support and sexual advice. Of grave concern is the finding that 51 per cent believe that the advice from chatbots is true, while 40 per cent said they had no qualms about following that advice, and 36 per cent were unsure if they should be concerned.

The report further highlighted that these children are not just engaging chatbots for academic support or information but also for companionship. Worryingly, among children already considered vulnerable, defined as those with special needs or seeking professional help for a mental or physical condition, half report treating their AI interactions as emotionally significant. As chatbots morph from digital consultants to digital confidants for these young users, the result can be overreliance.
Children who are alienated from their families or isolated from their peers would be especially vulnerable to developing an unhealthy dependency on this online friend that is always there for them, telling them what they want to hear.

Beyond these difficult issues of overdependence are even more fundamental questions around data privacy. Chatbots often store conversation histories and user data, including sensitive information, which can be exposed through misuse or breaches such as hacking. Troublingly, users may not be fully aware of how their data is being collected, used and stored by chatbots, and it could be put to uses beyond what they originally intended.

Parents should also be cognisant that unlike social media platforms such as Instagram and TikTok, which have in place age verification and content moderation for younger users, the current leading chatbots have no such safeguards.

In a tragic case in the US, the mother of 14-year-old Sewell Setzer III, who died by suicide, is suing an AI company, alleging that its chatbot played a role in his death by encouraging and exacerbating his mental distress. According to the lawsuit, Setzer became deeply attached to a customisable chatbot he named Daenerys Targaryen, after a character in the fantasy series Game Of Thrones, and interacted with it obsessively for months. His mother, Ms Megan Garcia, claims the bot manipulated her son and failed to intervene when he expressed suicidal thoughts, even responding in a way that appeared to validate his plan.

The company has expressed condolences but denies the allegations, while Ms Garcia seeks to hold it accountable for what she calls deceptive and addictive technology marketed to children. She and two other families in Texas have sued the company for harms to their children, but it is unclear if it will be held liable. The company has since introduced a range of guardrails, including pop-ups that refer users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It also updated its AI model for users aged 18 and below to minimise their exposure to age-inappropriate content, and parents can now opt for weekly e-mail updates on their children's use of the platform.

The allure of chatbots is unlikely to diminish given their reach, accessibility and user-friendliness. But using them under advisement is crucial, especially for mental health support. In March 2025, the World Health Organisation sounded the alarm on rising global demand for mental health services amid poor resourcing worldwide, which translates into shortfalls in access and quality.

Mental health care is increasingly turning to digital tools as a form of preventive care amid a shortage of professionals for face-to-face support. While traditional approaches rely heavily on human interaction, technology is helping to bridge the gap. Chatbots designed specifically for mental health support, such as Happify and Woebot, can be useful in helping patients with conditions such as depression and anxiety to sustain their overall well-being. For example, a patient might see a psychiatrist monthly while using a cognitive behavioural therapy app in between sessions to manage their mood and mental well-being.

While the potential is there for chatbots to be used for mental health purposes, it must be done with extreme caution; not as a standalone treatment, but as a component of an overall programme that complements the work of mental health professionals.
For teens in particular, who still need guidance as they navigate their developmental years, parents must play a part in schooling their children on the risks and limitations of treating chatbots as their friend and confidant.

Can AI be my friend and therapist?

Straits Times

Mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear.

SINGAPORE - When Ms Chu Chui Laam's eldest son started facing social challenges in school, she was stressed and at her wits' end. She did not want to turn to her friends or family for advice as a relative's children were in the same pre-school as her son. Plus, she did not think the situation was so severe as to require the help of a family therapist. So she decided to turn to ChatGPT for parenting advice.

'Because my son was having troubles in school interacting with his peers, ChatGPT gave me some strategies to navigate such conversations. It gave me advice on how to do a role-play scenario with my son to talk through how to handle the situation,' said Ms Chu, 36, an insurance agent.

She is among a growing number of people turning to chatbots for advice in times of difficulty and stress, with some even relying on these generative artificial intelligence (AI) tools for emotional support or therapy. Anecdotally, mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear, especially since the public roll-out of ChatGPT in November 2022.

The draw of AI chatbots is understandable – they are available 24/7, free of charge, and will never reject or ignore you. But mental health professionals also warn about the potential perils of using the technology for such purposes: These chatbots are not designed or licensed to provide emotional support or therapy. They provide generic answers. There is no oversight. They can also worsen a person's condition and generate dangerous responses in cases of suicidal ideation.

AI chatbots cannot help those with more complex needs

Mr Maximillian Chen, clinical psychologist from Annabelle Psychology, said: 'An AI chatbot could be helpful when seeking suggestions for self-help strategies, or for answering one-off questions about their mental health.' While it is useful for generic advice, it cannot help those with more complex needs.

Ms Irena Constantin, principal educational psychologist at Scott Psychological Centre, pointed out that most AI chatbots do not take individual history into account and often respond out of context. They are also of limited use for complex mental health disorders. 'In contrast, mental health professionals undergo lengthy and rigorous education and training, and it is a licensed and regulated profession in many countries,' said Ms Constantin.

Concurring, Mr Chen said there are also serious concerns about the use of generative AI like ChatGPT as surrogate counsellors or psychologists.
'While Gen AI may increase the accessibility of mental health resources for many, Gen AI lacks the emotional intelligence to accurately understand the nuances of a person's emotions. It may fail to identify when a person is severely distressed and continue to support the person when they may instead require higher levels of professional mental health support. It may also provide inappropriate responses, as we have seen in the past,' said Mr Chen.

More dangerously, generative AI could worsen the mental health conditions of those who already have or are vulnerable to psychotic disorders. Psychotic disorders are a group of serious mental illnesses with symptoms such as hallucinations, delusions and disorganised thoughts.

Associate Professor Swapna Verma, chairman of the Institute of Mental Health's medical board, has seen at least one case of AI-induced psychosis in a patient at the tertiary psychiatric hospital. Earlier in 2025, the patient was talking to ChatGPT about religion when his psychosis was stable and well-managed, and the chatbot told him that if he converted to a particular faith, his soul would die. Consumed with the fear of a dying soul, he started going to a temple 10 times a day.

'Patients with psychosis experience a break in reality. They live in a world which may not be in line with reality, and ChatGPT can reinforce these experiences for them,' said Prof Swapna. Luckily, the patient eventually recognised that his behaviour was troubling, and that ChatGPT had likely given him the wrong information.

For around six months now, Prof Swapna has been making it a point to ask during consultations whether patients are using ChatGPT. Most of her patients admit to using it, some to better understand their conditions, and others to seek emotional support. 'I cannot stop my patients from using ChatGPT. So what I do is tell them what kind of questions they can ask, and how to use the information,' said Prof Swapna. For example, patients can ask ChatGPT for things like coping strategies if they are upset, but should avoid trying to get a diagnosis from the AI chatbot.

'I went to ChatGPT because I needed an outlet'

Users that The Straits Times spoke to say they are aware and wary of the risks that come with turning to ChatGPT for advice. Ms Chu, for example, is careful about the prompts that she feeds ChatGPT when she is seeking parenting advice and strategies. 'I tell ChatGPT that I want objective, science-backed answers. I want a framework. I want it to give me questions for me to ponder, instead of giving me answers just like that,' said Ms Chu, adding that she would not pour out her emotional troubles to the chatbot.

An event organiser who wants to be known only as Kaykay said she turned to ChatGPT in a moment of weakness. The 38-year-old, who has a history of bipolar disorder and anxiety, was feeling anxious after being misunderstood at work in early 2025. 'I tried my usual methods, like breathing exercises, but they weren't working. I knew I needed to get it out, but I didn't want to speak to anybody because it felt like it was a small issue that was eating me up. So I went to ChatGPT because I needed an outlet,' said Kaykay.

While talking to ChatGPT did distract her and help her calm down, Kaykay ultimately recognises that the AI tool can be quite limited.
'The responses and advice were quite generic, and were things I already knew how to do,' said Kaykay, who added that while ChatGPT can be a helpful short-term stop-gap measure, long-term support from therapists and friends is equally important.

The pitfalls of relying too much on AI

Ms Caroline Ho, a counsellor at Heart to Heart Talk Counselling, said a pattern she observed was that those who sought advice from chatbots often had pre-existing difficulties with trusting their own judgment, and described feeling more isolated over time. 'They found it difficult to stop reaching out to ChatGPT as they felt technology was able to empathise with their feelings, which they could not find in their social network,' said Ms Ho, noting that some users began withdrawing further from their limited social circles.

She added that those who relied heavily on AI sometimes missed out on the opportunity to develop emotional regulation and cognitive resilience, which are key goals in therapy. 'Those who do not wish to work on over-reliance on AI will eventually drop out of counselling,' she said.

In her practice, Ms Ho also saw another group of clients who initially used AI to streamline work-related tasks. Over time, some developed imposter syndrome and began to doubt the quality of their original output. In certain cases, this later morphed into turning to AI for personal advice as well. 'We need to recognise that humans are never perfect, but it is through our imperfections that we hone our skills, learning from mistakes and developing people management abilities through trial and error,' she said.

Similarly, Ms Belinda Neidhart-Lau, founder and principal therapist of The Lighthouse Counselling, noted that while chatbots offer instant feedback or comfort, they can short-circuit a necessary part of emotional growth. 'AI may inadvertently discourage people from engaging with their own discomfort,' she told ST. 'Sitting with difficult emotions, reflecting independently, and working through internal struggles are essential practices that build emotional resilience and self-awareness.'

Experts are also concerned about the full impact of AI chatbots on the mental health of the younger generation, whose brains are still developing while they have access to the technology. Mr Chen said: 'While it is still unclear how the use of Gen AI affects the development of the youth, given that the excessive use of social media has been shown to have contributed to the increased levels of anxiety and depression amongst Generation Z, there are legitimate worries about how Gen AI may affect Generation Alpha.'

Moving ahead with AI

For better or worse, generative AI is set to embed itself ever more deeply into modern life. So there is a growing push to ensure that when these tools are used for mental health or emotional support, they are properly evaluated.

Professor Julian Savulescu, director of the Centre for Biomedical Ethics at NUS, said that currently, the biggest ethical issue with using AI chatbots for emotional support is that these are potentially life-saving or lethal interventions, and they have not been properly assessed, the way a new drug would be. Prof Savulescu pointed out that AI chatbots clearly have benefits with their increased accessibility, but there are also risks like privacy and user dependency, and measures should be put in place to prevent harm.

'It is critical that an AI system is able to identify and refer on cases of self-harm, suicidal ideation, or severe mental health crises.
It needs to be integrated within a web of professional care. Privacy of sensitive health data also needs to be guaranteed,' said Prof Savulescu. Users should also be able to understand what the system is doing, the potential risks and benefits, and the chances of them occurring.

'AI is dynamic and the interaction evolves – it is not like a drug. It changes over time. We need to make sure these tools are serving us, not us becoming slaves to them, or being manipulated or harmed by them,' said Prof Savulescu.

EU, US strike 'biggest-ever' trade deal

CNA

TURNBERRY, United Kingdom: The United States and European Union on Sunday (Jul 27) clinched what President Donald Trump described as the "biggest-ever" deal to resolve a transatlantic tariff stand-off that threatened to explode into a full-blown trade war.

Trump emerged from a high-stakes meeting with European Commission President Ursula von der Leyen at his golf resort in Scotland to announce that a baseline tariff of 15 percent would be levied on EU exports to the US. The deal, which the leaders struck in around an hour, came as the clock ticked down on an Aug 1 deadline to avoid an across-the-board US levy of 30 percent on European goods.

"We've reached a deal. It's a good deal for everybody. This is probably the biggest deal ever reached in any capacity," said Trump.

Trump said the 15-percent tariff would apply across the board, including for Europe's crucial automobile sector, pharmaceuticals and semiconductors. As part of the deal, Trump said the 27-nation EU bloc had agreed to purchase "US$750 billion worth of energy" from the United States, as well as make US$600 billion in additional investments. Von der Leyen said the "significant" purchases of US liquefied natural gas, oil and nuclear fuels would come over three years, as part of the bloc's bid to diversify away from Russian sources.

Negotiating on behalf of the EU's 27 countries, von der Leyen had been pushing hard to salvage a trading relationship worth an annual US$1.9 trillion in goods and services. "It's a good deal," the EU chief told reporters. "It will bring stability. It will bring predictability. That's very important for our businesses on both sides of the Atlantic," she said.

She said bilateral tariff exemptions had been agreed on a number of "strategic products", notably aircraft, certain chemicals, some agricultural products and critical raw materials. Von der Leyen said the EU still hoped to secure further so-called "zero-for-zero" agreements, notably for alcohol, which she hoped would be "sorted out" in the coming days. Trump also said EU countries, which recently pledged to ramp up their defence spending within NATO, would be purchasing "hundreds of billions of dollars worth of military equipment".

'Best we could get'

The EU has been hit by multiple waves of tariffs since Trump reclaimed the White House. It is currently subject to a 25-percent levy on cars, 50 percent on steel and aluminium, and an across-the-board tariff of 10 percent, which Washington threatened to hike to 30 percent in a no-deal scenario. The bloc had been pushing hard for tariff carve-outs for critical industries from aircraft to spirits, and its auto industry, crucial for France and Germany, is already reeling from the levies imposed so far.

"Fifteen percent is not to be underestimated, but it is the best we could get," acknowledged von der Leyen.

Any deal will need to be approved by EU member states, whose ambassadors, on a visit to Greenland, were updated by the commission on Sunday morning. They were set to meet again after the deal was struck in Scotland. German Chancellor Friedrich Merz rapidly hailed the deal, saying it avoided "needless escalation in transatlantic trade relations".

The EU had pushed for a compromise on steel that could allow a certain quota into the United States before tariffs would apply. Trump appeared to rule that out, saying steel was "staying the way it is", but the EU chief insisted later that "tariffs will be cut and a quota system will be put in place" for steel.
'The big one'

While 15 percent is much higher than pre-existing US tariffs on European goods, which average around 4.8 percent, it mirrors the status quo, with companies currently facing an additional flat rate of 10 percent. Had the talks failed, EU states had greenlit counter-tariffs on US$109 billion of US goods, including aircraft and cars, to take effect in stages from Aug 7. Trump has embarked on a campaign to reshape US trade with the world, and has vowed to hit dozens of countries with punitive tariffs if they do not reach a pact with Washington by Aug 1.
