‘It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people
For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says.
D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and challenging. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life.
'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'
Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions.
AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — in particular, some worry about over-reliance — many neurodivergent users now see it as a lifeline.
How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude.
Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend. She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.
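For readers wondering what a 'tone check' looks like mechanically, the same kind of request can be scripted against a chatbot API. The snippet below is a minimal sketch assuming the OpenAI Python SDK; the model name, coaching instructions and draft text are invented for illustration and are not a record of how D'hotman or any particular product phrases it.

```python
# Minimal sketch of a tone-check request, assuming the OpenAI Python SDK
# ("pip install openai") and an OPENAI_API_KEY set in the environment.
# The model name, coaching instructions and draft below are illustrative only.
from openai import OpenAI

client = OpenAI()

draft = "Here is a list of everything the company should fix before next quarter: ..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a communication coach. Flag anything in my draft that "
                "a reader might hear as blunt or rude, explain why, and "
                "suggest a warmer rewrite that keeps my meaning."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

Swapping the system prompt for a different persona, such as the therapist role D'hotman describes, changes the framing without changing the mechanics.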
That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.
The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users surpassed 400 million, of which at least 2 million are paying business users.
But for neurodivergent users, these aren't just tools of convenience. Some AI-powered chatbots are now being created with the neurodivergent community in mind.
Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism — and he received the same diagnosis himself — that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings.
'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived. She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism.
'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app.
Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.
As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive,' says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent.
Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'
Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.
A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without it. 'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'
While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others.
A therapist will push their patient to try different things outside of their comfort zone. 'I think it's harder for your AI companion to push you,' she says.
But for users who have come to rely on this technology, such fears are academic.
'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation.
'As somebody who's struggled with a disability my whole life,' she says, 'I need this.'
Related Articles
Forbes
OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten.
As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.
When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.
Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.
It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?
People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance 'really bad and dangerous.'
But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms. In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.
This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing.
We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.
The Legal System Isn't Ready For What ChatGPT Is Proposing
There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)
The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.
'I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy,' says technology attorney John Kheit. 'Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations.'
Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong.
And yet, underneath all of this is a deeper motive: monetization. But they won't be the only ones asking questions. Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access.
It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts 'Buy Coke' mid-paragraph. Not because it's relevant—but because it's profitable. Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.
'In experiential marketing, trust is everything,' says Jeff Boedges, Founder of Soho Experiential. 'You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation.'
Now consider what happens when advertisers gain access to psychographic modeling: 'Which users are most emotionally vulnerable to this type of message?' becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable. With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.
At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization.
The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.
We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.
Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent
As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot quietly assume the legal, ethical, or sovereign status of a person. And the humans generating the data that train these systems must retain explicit rights over their contributions.
What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use. That means that when a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. 'Right to be forgotten' must mean what it says. (A hypothetical sketch of what such a contract's terms might look like appears at the end of this piece.)
But to be credible, this system must work both ways. This isn't just about ethics; it's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.
ChatGPT Is Changing the Risk Surface. Here's How to Respond.
The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust. Whether you're building AI, integrating it into your workflows, or using it to interface with customers, there are steps you should be taking immediately.
ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.
This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.
At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.
The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box.
Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have. Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing.
Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty.
And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.
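To make the data-contracting idea above concrete, here is one hypothetical sketch of the terms such an agreement might encode: scope, duration, value exchange, termination and erasure. The class, field names and example values are invented for illustration; no such standard or API exists today.

```python
# Hypothetical sketch of the data-contract terms described above.
# Field names and structure are illustrative, not an existing standard or API.
from dataclasses import dataclass
from datetime import date


@dataclass
class DataContract:
    contributor_id: str        # the person whose data is licensed
    scope: list[str]           # e.g. ["chat logs", "voice samples"]
    purpose: str               # what the model may be trained or tuned to do
    starts: date
    ends: date                 # the limited, mutually agreed window of use
    value_exchange: str        # payment, service credits, or other consideration
    revocable: bool = True     # the contributor may terminate early

    def active(self, today: date) -> bool:
        return self.starts <= today <= self.ends

    def on_termination(self) -> str:
        # "Right to be forgotten": on expiry or breach, the contribution must be
        # erased from the model, its training set, and any derivative products.
        return (f"Erase all contributions from {self.contributor_id} "
                f"from the model, training data, and derivatives.")


contract = DataContract(
    contributor_id="user-482",
    scope=["chat logs"],
    purpose="fine-tuning a customer-support assistant",
    starts=date(2025, 1, 1),
    ends=date(2025, 12, 31),
    value_exchange="12 months of free premium access",
)
print(contract.active(date(2025, 6, 1)), contract.on_termination())
```

Whatever shape a real standard took, the enforceable core would be the erasure obligation, which is what the piece argues 'right to be forgotten' must mean.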
Yahoo
I Asked ChatGPT What Would Happen If Billionaires Paid Taxes at the Same Rate as the Upper Middle Class
There are many questions that don't have simple answers, either because they're too complex or because they're hypothetical. One such question is what it might mean for billionaires to pay taxes at the same rate as the upper middle class, whose income starts, on average, at around $168,000, depending on where you live.
ChatGPT may not be an oracle, but it can analyze information and offer trends and patterns, so I asked it what would happen if billionaires were required to pay anywhere near as much as the upper middle class. Here's what it said.
A Fatter Government Larder
For starters, ChatGPT said that if billionaires paid taxes like the upper middle class, the government would bring in a lot more money — potentially hundreds of billions of dollars more every year.
'That's because most billionaires don't make their money from salaries like upper-middle-class workers do. Instead, they grow their wealth through investments–stocks, real estate, and businesses–which are often taxed at much lower rates or not taxed at all until the assets are sold,' ChatGPT told me.
Billionaire income is largely derived from capital appreciation, not wages. In other words, they make money on their money. And as of yet, the U.S. tax code doesn't tax 'unrealized capital gains,' so until you sell your assets, you could amass millions in appreciation and not pay a dime on it, ChatGPT shared.
What Do Billionaires Pay in Taxes?
Right now, many billionaires pay an effective tax rate of around 8% or less, thanks to loopholes and tax strategies. Meanwhile, upper-middle-class households earning, say, $250,000 might pay around 20% to 24% of their income in taxes.
(Keep in mind that the government doesn't apply one tax bracket to all income. You pay tax in layers, according to the IRS. As your income goes up, the tax rate on each new layer of income is higher: for a single filer in 2024, that's 10% on roughly the first $11,600, 12% up to $47,150, 22% up to $100,525, and so on. A short worked sketch of this layered math appears at the end of this article.)
So, if billionaires were taxed at the same rate as those upper-middle-class wage earners, 'it would level the playing field–and raise a ton of revenue that could be used for things like infrastructure, education or healthcare,' ChatGPT said.
The Impact on Wealth Equality
I wondered if taxing billionaires could have any kind of impact on wealth equality as well. While it wouldn't put more money in other people's pockets, 'it could increase trust in the tax system, showing that the wealthiest aren't playing by a different set of rules,' ChatGPT said. It would also help curb 'the accumulation of dynastic wealth,' where the richest families essentially hoard wealth for generations without contributing proportionally to the system.
But it's not a magic bullet. 'Wealth inequality is rooted in more than just taxes–wages, education access, housing costs, and corporate ownership all play a role,' ChatGPT said. Billionaires paying taxes doesn't stop them from being billionaires, either, it pointed out.
Taxing Billionaires Is Not That Simple
While in theory billionaires paying higher taxes 'would shift a much bigger share of the tax burden onto the very wealthy,' ChatGPT wrote, billionaires are not as liquid as they may seem. 'A lot of billionaire wealth is tied up in things like stocks they don't sell, so taxing that would require big changes to how the tax code works.' Also, billionaires are good at finding loopholes and accounting strategies, so higher rates might be hard to enforce.
What's a Good Middle Ground?
We don't live in a black and white world, however. There's got to be a middle ground, so I asked ChatGPT if there is a way to tax billionaires more, even if it's not quite how the upper middle class are taxed.
A likely compromise would come from a policy decision, which isn't likely to be forthcoming anytime soon. President Donald Trump's One Big Beautiful Bill only offered more tax breaks to the wealthiest. However, policy proposals that have been floated include a minimum tax on billionaires of around 20% of their overall income; limiting deductions and closing tax loopholes that allow them to significantly reduce taxable income; and gradually taxing unrealized gains (assets that have appreciated but not yet been sold).
ChatGPT agreed that billionaires could pay more than they currently do, even if they don't pay exactly what upper-middle-class workers pay in percentage terms. 'The key is to design policies that are fair, enforceable, and politically feasible.'
I asked how realistic such policy proposals are, and ChatGPT told me what I already knew: They're 'moderately realistic' but only with the 'right political alignment.'
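To make the 'tax in layers' idea above concrete, the sketch below applies the 2024 single-filer brackets to a $250,000 income. It is an illustration of marginal rates only; it assumes the bracket thresholds cited above and ignores deductions, credits and the capital-gains rules that dominate billionaire taxation.

```python
# Illustrative progressive-tax math using 2024 single-filer brackets.
# Ignores deductions, credits, and capital-gains rules; illustration only.
BRACKETS = [  # (upper bound of bracket, marginal rate)
    (11_600, 0.10),
    (47_150, 0.12),
    (100_525, 0.22),
    (191_950, 0.24),
    (243_725, 0.32),
    (609_350, 0.35),
    (float("inf"), 0.37),
]


def tax_owed(taxable_income: float) -> float:
    """Apply each rate only to the slice of income that falls inside its bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return owed


income = 250_000
print(f"Tax on ${income:,}: ${tax_owed(income):,.0f} "
      f"(effective rate {tax_owed(income) / income:.1%})")
# Roughly $57,900, an effective rate near 23% -- in line with the 20% to 24%
# range quoted above for upper-middle-class households.
```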
Business Insider
Anthropic's $150 Billion Goal: How Amazon and Alphabet Could Benefit From the AI Surge
Artificial intelligence start-up Anthropic is in early talks to raise between $3 billion and $5 billion in a new funding round. The raise could push the company's valuation above $150 billion, according to the Financial Times. That would more than double its current $61.5 billion valuation, reached just a few months ago.
Anthropic Is Growing Fast
Anthropic is the company behind Claude, a large language model that competes with OpenAI's ChatGPT. It is backed by Amazon (AMZN) and Alphabet Inc. (GOOG), which have each committed billions in cloud credits and cash. Amazon has already invested up to $8 billion and is reportedly considering further investments to remain among Anthropic's largest shareholders.
This funding round comes as competition in artificial intelligence intensifies. OpenAI is preparing to launch GPT-5 and is working with SoftBank (SFTBY) on a separate raise that could bring in tens of billions of dollars. Meanwhile, Anthropic has quietly increased its annualized recurring revenue from $1 billion at the start of the year to over $4 billion, driven mainly by enterprise subscriptions.
Investors are closely watching the private AI space, but the real implications may lie with public companies that stand to benefit. Amazon and Alphabet have positioned themselves as infrastructure providers for leading model developers, such as Anthropic. A stronger Claude model, used widely in enterprise software and coding tools, could support growth in Amazon Web Services and Google Cloud revenue.
The Middle East Is Calling
The funding talks have also drawn interest from Middle Eastern sovereign wealth funds. Anthropic's leadership has expressed concerns internally about taking direct investment from the region, citing political risks. Even so, the company sold $500 million worth of shares to a fund linked to Abu Dhabi in 2023. A broader shift toward sovereign capital could influence how other private AI start-ups raise money.
Anthropic remains private for now, but its valuation jump and enterprise growth highlight how quickly the AI market is scaling. Investors seeking exposure are most likely to find it through public companies that enable and support that growth. Using TipRanks' Comparison Tool, we've brought Amazon and Google side by side and compared them to gain a broader look at Anthropic's two most notable backers.