The AI therapist will see you now: Can chatbots really improve mental health?


Japan Today | 6 days ago
By Pooja Shree Chettiar
Recently, I found myself pouring my heart out, not to a human, but to a chatbot named Wysa on my phone. It nodded – virtually – asked me how I was feeling and gently suggested trying breathing exercises.
As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?
Artificial intelligence-powered mental health tools are becoming increasingly popular – and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?
Of course, it's an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.
Stand-in meditation and therapy apps and bots
AI-based therapy is a relatively new player in the digital therapy field. But the U.S. mental health app market has been booming for the past few years, with offerings that range from free apps that text you back to premium versions that add guided prompts for breathing exercises.
Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.
Somewhere in the middle are chatbot therapists like Wysa and Woebot, using AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from US$10 to $100 per month for more comprehensive features or access to licensed professionals.
While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence.
Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.
Where AI comes in
Whether your brain is spiraling, sulking or just needs a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?
And how exactly does AI therapy work inside our brains?
Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your thought patterns, in the spirit of the Japanese tidying expert known for helping people keep only what 'sparks joy': you identify unhelpful thoughts like 'I'm a failure,' examine them and decide whether they serve you or just create anxiety.
But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.
These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.
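To make that mechanism concrete, here is a deliberately minimal sketch, in Python, of what a rule-based exchange in this style could look like: scan a message for words that often signal a cognitive distortion, answer with a Socratic reframing question, and otherwise fall back to a breathing prompt. Everything in it, from the keywords to the wording of the prompts, is invented for illustration; commercial apps like Wysa and Woebot rely on far more sophisticated dialogue systems whose internals are not public.

```python
# A toy, rule-based sketch of a CBT-style exchange. Purely illustrative:
# the keywords, prompts and logic below are invented for this example and
# are not taken from Wysa, Woebot or any other product.

DISTORTION_PROMPTS = {
    "always": "You said 'always.' Can you think of even one time it went differently?",
    "never": "'Never' is a strong word. Is there any evidence that doesn't fit it?",
    "failure": "Calling yourself a failure is a label, not a fact. What specifically "
               "went wrong, and how much of it was in your control?",
}

def reframe(user_message: str) -> str:
    """Return a Socratic reframing question if an unhelpful thought pattern
    is detected; otherwise fall back to a simple grounding exercise."""
    text = user_message.lower()
    for keyword, prompt in DISTORTION_PROMPTS.items():
        if keyword in text:
            return prompt
    return "Let's pause for a moment. Breathe in for four counts, out for six."

if __name__ == "__main__":
    # Example exchange
    print(reframe("I'm such a failure. I always mess things up."))
```

Even this toy version shows the basic loop such apps automate: notice the thought, question it, then steer attention back to the body.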
The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions.
The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.
A user's experience, and what it might mean for the brain
'I had a rough week,' a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end.
As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?
This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety – outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.
Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called 'Therabot' helped reduce depression and anxiety symptoms by nearly half – similar to what people experience with human therapists. Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.
While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.
Red flags and risks
Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomized clinical trials showing improved depression and anxiety symptoms in new moms and college students.
While many mental health apps boast labels like 'clinically validated' or 'FDA approved,' those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.
In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What happens if that data winds up in the hands of third parties such as advertisers, employers or hackers? That scenario has already played out with genetic data: in a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than $2 million for failing to protect user data.
Unlike clinicians, bots aren't bound by counseling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.
And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say 'I hear you' with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach.
So while bot-delivered cognitive behavioral therapy may offer short-term symptom relief in mild to moderate cases, it's important to be aware of its limitations. For the time being, pairing bots with human care – rather than replacing it – is the safest move.
Pooja Shree Chettiar is a Ph.D. candidate in Medical Sciences at Texas A&M University.
The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
External Link: https://theconversation.com/the-ai-therapist-will-see-you-now-can-chatbots-really-improve-mental-health-259360
© The Conversation