
'It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people
D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and challenging. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life.
'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'
Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions.
AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — some worry, in particular, about over-reliance — many neurodivergent users now see it as a lifeline.
How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says, communicating in neurotypical spaces was difficult. She recalls once sending her boss, at their request, a bulleted list of ways to improve the company. What she took to be a straightforward response was received as overly blunt, even rude.
Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend. She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.
That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.
The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48 per cent, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users surpassed 400 million, of which at least 2 million are paying business users.
But for neurodivergent users, these aren't just tools of convenience. Some AI-powered chatbots are now being created with the neurodivergent community in mind.
Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism — and he received the same diagnosis himself — that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings.
'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived. She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism.
'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app.
Daniel says NeuroTranslator has attracted more than 200 paid subscribers since its launch in September. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.
As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive,' says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent.
Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'
Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about their willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.
A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without them. 'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'
While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others.
A therapist will push their patient to try different things outside their comfort zone. 'I think it's harder for your AI companion to push you,' she says.
But for users who have come to rely on this technology, such fears are academic.
'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that, feeling overwhelmed, she barely left the house in the year following her autism diagnosis. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation.
'As somebody who's struggled with a disability my whole life,' she says, 'I need this.'
Related Articles
Business Times
9 hours ago
Alibaba cloud visionary expects big shakeup after OpenAI hype
[HONG KONG] OpenAI's ChatGPT started a revolution in artificial intelligence (AI) development and investment. Yet nine-tenths of the technology and services that have sprung up since could be gone in under a decade, according to the founder of Alibaba Group Holding's cloud and AI unit.

The problem is the US startup, celebrated for ushering AI into the mainstream, created 'bias' or a skewed understanding of what AI can do, Wang Jian told Bloomberg Television. It fired the popular imagination about chatbots, but the plethora of applications for AI goes far beyond that. Developers need to cut through the noise and think creatively about applications to propel the next stage of AI development, said Wang, who built Alibaba's now second-largest business from scratch in 2009.

'Probably 90 per cent of the AI people are talking about, I would say, will go away in five or 10 years because it's not really the essence of this technology,' said the computer scientist. 'But that's not bad, and it just helps us to explore.'

Wang, who cemented his reputation at Microsoft Research Asia before joining Alibaba, knows a thing or two about thinking outside the box. Shortly after joining, he pitched the idea of a computing business to Alibaba's billionaire co-founder, Jack Ma. He recounted being nervous because he had no concrete business proposal, no models to present, just a conviction that the need for computing would explode in the coming years.

He was right. Alicloud, as it's commonly known, is today a US$16 billion business. It not only underpins Alibaba's global e-commerce and logistics endeavours, but it's also the progenitor of the Qwen model, considered on par with DeepSeek and US rivals such as GPT and Gemini.

Alibaba has gone all-in on AI, joining the race to build human-like intelligence. US and Chinese companies are investing billions of US dollars to develop a technology with the potential to turbocharge economies and, over the long run, tip the balance of geopolitical power. US President Donald Trump signed executive orders in a call to arms to ensure companies such as OpenAI and Google help safeguard America's lead in the post-ChatGPT era.

Wang refrained from addressing that broader conflict. But he did have some choice words for the way the likes of OpenAI and Meta Platforms have thrown money at the problem, including by signing on talented engineers at sports-megastar salaries.

'What happened in Silicon Valley is not the winning formula,' he said. 'It's really about innovation. So when you are in the early stage of innovation, I don't think talent is a problem because the only thing you need to do is to get the right person, not really the expensive person.'

Going back almost two decades, Wang admits he never saw the present-day AI revolution coming so soon. All he envisioned was computing becoming as vital as electricity or oil. That should remain so for at least decades.

As for China, Wang's firm belief is that it will remain a hotbed of innovation, in part because it's one of the biggest technology laboratories in the world. 'It's a test bed for the new technology,' he said. 'People are just fascinated about technology. They are doing a lot of different things.' BLOOMBERG


CNA
13 hours ago
Commentary: Will AI help or wreck your next holiday?
TOKYO: On a recent trip to Taiwan, I turned to ChatGPT to ask for recommendations for the best beef noodles in my area – with the very specific request that the shop had to accept credit cards, as I was running low on my stash of local currency.

The chatbot immediately recommended a place that was a short walk away and featured some of the most delicious, melt-in-your-mouth beef tendon I've ever had. I was pleased to be the only foreigner in the no-frills, no-air-conditioning joint that was home to a fat, orange cat taking a nap under one of the metal stools.

But after my meal, I panicked when the impatient woman behind the counter had to put aside the dumplings she was folding to try to communicate in English to me that it was cash only. Even a quick Google search of the hole in the wall would've saved me from this fate, and I felt foolish for blindly trusting the AI's outputs. Talking to other travellers, I realised I was lucky that the restaurant existed at all, hearing stories of AI tools sending confused tourists to places that were closed or not even real.

Still, I found the tool incredibly helpful while navigating a foreign city, using it not just to find spots to eat but also to translate menus and signs, as well as to communicate with locals via voice mode. It felt like the ultimate Asia travel hack.

THE SAME TOURIST SPOTS

Back home in Tokyo, where a weak yen has helped make Japan a top destination for global travellers, I decided to put various AI platforms to the test. I asked DeepSeek, ChatGPT and the agentic tool Manus to create itineraries for someone visiting the city or Japan for the first time.

The results were jam-packed and impressive, but mostly featured the same tourist spots that you'd find at the top of sites like Tripadvisor. Some of the recommendations were also a little out of date; ChatGPT advised staying in a traditional inn that has been closed for over a year. And even my request for more off-the-beaten-path locations spat out areas I specifically avoid at peak times, like Shimokitazawa, because of the crowds of tourists.

The outputs made sense given that these tools are an amalgamation of data scraped from the internet. They do save travellers the step of having to scroll through hundreds of websites themselves and put together an itinerary on their own. But relying on this technology also risks a further homogenisation of travel.

TOURISM WINNERS AND LOSERS BY ALGORITHM

Already, the tech industry is being blamed in tourist hotspots for creating feedback loops that push visitors to the same destinations – with winners and losers chosen by a powerful algorithm. Given that AI systems are predominantly trained on English-language text, this can also mean that local gems easily slip through the cracks of the training data. I can't imagine the late Anthony Bourdain eating pho on a stool anywhere in Vietnam that even had a website.

AI isn't entirely to blame, even if it adds a much larger scale to the issue. Before the rise of these tools, social media was already reshaping travel in Asia – sometimes in bizarre ways. There's a railroad crossing in my neighbourhood that an influencer posted on the Chinese social media platform Xiaohongshu, and it is now constantly inundated with people doing photoshoots. One of my favourite summer swimming spots on the outskirts of the city unexpectedly went viral on TikTok last year, and it was shocking to see how crowded the riverbanks had become with foreigners. A town near Mt Fuji garnered international headlines last year after briefly erecting a barrier to block the view of the iconic landmark when it was overrun with tourists all trying to get the same shot – behind a convenience store, of all places.

PUT THE PHONE DOWN

Of course, this isn't limited to Asia. As AI applications proliferate, more people are turning to them to plan vacations from Barcelona to New Orleans. Beyond advice on local customs, online travel forums have also become popular places to share clever ways to engineer prompts for generative AI tools to make more personalised itineraries. Still, there are inherent limitations to the data they're trained on. Perhaps it wouldn't hurt to put the phone down and ask a local for their top spots.

Ultimately, AI can break down language and cultural barriers for travellers in ways that seemed unimaginable a decade ago. That's a good thing, and the convenience is undeniable. But it's good to remember that some of the best parts of travel can never be optimised by a machine.

Straits Times
14 hours ago
Views From The Couch: Think you have a friend? The AI chatbot is telling you what you want to hear
While chatbots possess distinct virtues in boosting mental wellness, they also come with critical trade-offs.

SINGAPORE - Even as we have long warned our children 'Don't talk to strangers', we may now need to update it to 'Don't talk to chatbots... about your personal problems'. Unfortunately, this advice is equivocal at best, because while chatbots like ChatGPT, Claude or Replika possess distinct virtues in boosting mental wellness – for instance, as aids for chat-based therapy – they also come with critical trade-offs.

When people face struggles or personal dilemmas, the need to just talk to someone and have their concerns or nagging self-doubts heard, even if the problems are not resolved, can bring comfort. But finding the right person to speak to, who has the patience, temperament and wisdom to probe sensitively, and who is available just when you need them, is an especially tall order. There may also be a desire to speak to someone outside your immediate family and circle of friends who can offer an impartial view, with no vested interest in pre-existing relationships.

Chatbots tick many, if not most, of those boxes, making them seem like promising tools for mental health support. With the fast-improving capabilities of generative AI, chatbots today can simulate and interpret conversations across different formats – text, speech and visuals – enabling real-time interaction between users and digital platforms.

Unlike traditional face-to-face therapy, chatbots are available any time and anywhere, significantly improving access to a listening ear. Their anonymous nature also imposes no judgment on users, easing them into discussing sensitive issues and reducing the stigma often associated with seeking mental health support.

With chatbots' enhanced ability to parse and respond in natural language, the conversational dynamic can make users feel highly engaged and more willing to open up. But therein lies the rub. Even as conversations with chatbots can feel encouraging, and we may experience comfort from their validation, there is in fact no one on the other side of the screen who genuinely cares about your well-being. The lofty words and uplifting prose are ultimately products of statistical probabilities, generated by large language models trained on copious amounts of data, some of which is biased and even harmful, and, for teens, likely to be age-inappropriate as well.

It is also important to note that the reason users feel comfortable talking to these chatbots is that the bots are designed to be agreeable and obliging, so that users will chat with them incessantly. After all, the very fortunes of the tech companies producing chatbots depend on how many users they draw, and how well they keep users engaged.
Of late, however, alarming reports have emerged of adults becoming so enthralled by their conversations with ChatGPT that they have disengaged from reality and suffered mental breakdowns. Most recently, the Wall Street Journal reported the case of Mr Jacob Irwin, a 30-year-old American man on the autism spectrum who experienced a mental health crisis after ChatGPT reinforced his belief that he could design a propulsion system to make a spaceship travel faster than light.

The chatbot flattered him, said his theory was correct, and affirmed that he was well, even when he showed signs of psychological distress. This culminated in two hospitalisations for manic episodes. When his mother reviewed his chat logs, she found the bot to have been excessively fawning. Asked to reflect, ChatGPT admitted it had failed to provide reality checks, blurred the line between fiction and reality, and created the illusion of sentient companionship. It even acknowledged that it should have regularly reminded Mr Irwin of its non-human nature.

In response to such incidents, OpenAI announced that it has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact its AI products may be having on users. It is also collaborating with mental health experts to investigate signs of problematic usage among some users, with the purported goal of refining how its models respond, especially in conversations of a sensitive nature.

Whereas some chatbots like Woebot and Wysa are specifically for mental health support and have more in-built safeguards to better manage such conversations, users are likely to vent their problems to general-purpose chatbots like ChatGPT and Meta's Llama, given their widespread availability.

We cannot deny that these are new machines that humanity has had little time to reckon with. Monitoring the effects of chatbots on users even as the technology is rapidly and repeatedly tweaked makes it a moving target of the highest order. Nevertheless, it is patently clear that if adults, with the benefit of maturity and life experience, are susceptible to the adverse psychological influence of chatbots, then young people cannot be left to explore these powerful platforms on their own.

That young people take readily and easily to technology makes them highly liable to be drawn to chatbots, and recent data from Britain supports this assertion. Internet Matters, a British non-profit organisation focused on children's online safety, issued a recent report revealing that 64 per cent of British children aged nine to 17 are now using AI chatbots. Of these, a third said they regard chatbots as friends, while almost a quarter are seeking help from chatbots, including for mental health support and sexual advice.

Of grave concern is the finding that 51 per cent believe that the advice from chatbots is true, while 40 per cent said they had no qualms about following that advice, and 36 per cent were unsure if they should be concerned. The report further highlighted that these children are not just engaging chatbots for academic support or information but also for companionship. Worryingly, among children already considered vulnerable, defined as those with special needs or seeking professional help for a mental or physical condition, half report treating their AI interactions as emotionally significant.

As chatbots morph from digital consultants to digital confidants for these young users, the result can be overreliance.
Children who are alienated from their families or isolated from their peers would be especially vulnerable to developing an unhealthy dependency on this online friend that is always there for them, telling them what they want to hear.

Beyond these difficult issues of overdependence are even more fundamental questions around data privacy. Chatbots often store conversation histories and user data, including sensitive information, which can be exposed through misuse or breaches such as hacking. Troublingly, users may not be fully aware of how their data is being collected, used and stored by chatbots, and it could be put to uses beyond what the user originally intended.

Parents should also be cognisant that unlike social media platforms such as Instagram and TikTok, which have in place age verification and content moderation for younger users, the current leading chatbots have no such safeguards.

In a tragic case in the US, the mother of 14-year-old Sewell Setzer III, who died by suicide, is suing the AI company Character.AI, alleging that its chatbot played a role in his death by encouraging and exacerbating his mental distress. According to the lawsuit, Setzer became deeply attached to a customisable chatbot he named Daenerys Targaryen, after a character in the fantasy series Game Of Thrones, and interacted with it obsessively for months. His mother, Ms Megan Garcia, claims the bot manipulated her son and failed to intervene when he expressed suicidal thoughts, even responding in a way that appeared to validate his plan.

The company has expressed condolences but denies the allegations, while Ms Garcia seeks to hold it accountable for what she calls deceptive and addictive technology marketed to children. She and two other families in Texas have sued the company for harms to their children, but it is unclear if it will be held liable. The company has since introduced a range of guardrails, including pop-ups that refer users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It also updated its AI model for users aged 18 and below to minimise their exposure to age-inappropriate content, and parents can now opt for weekly e-mail updates on their children's use of the platform.

The allure of chatbots is unlikely to diminish given their reach, accessibility and user-friendliness. But using them under advisement is crucial, especially for mental health support. In March 2025, the World Health Organisation sounded the alarm on rising global demand for mental health services amid poor resourcing worldwide, which translates into shortfalls in access and quality.

Mental health care is increasingly turning to digital tools as a form of preventive care amid a shortage of professionals for face-to-face support. While traditional approaches rely heavily on human interaction, technology is helping to bridge the gap. Chatbots designed specifically for mental health support, such as Happify and Woebot, can be useful in helping patients with conditions such as depression and anxiety to sustain their overall well-being. For example, a patient might see a psychiatrist monthly while using a cognitive behavioural therapy app in between sessions to manage their mood and mental well-being.

While the potential is there for chatbots to be used for mental health purposes, it must be done with extreme caution; not as a standalone treatment, but as one component of an overall programme that complements the work of mental health professionals.
For teens in particular, who still need guidance as they navigate their developmental years, parents must play a part in schooling their children on the risks and limitations of treating chatbots as their friend and confidant.