
Building Trust in Healthcare AI: India's Path from Potential to Practice
Recent discussions among healthcare leaders and policymakers have spotlighted how AI is no longer just a futuristic concept but a present-day ally for clinicians navigating complex healthcare realities.
The recent launch of the Future Health Index (FHI) 2025 India report was marked by an engaging discussion with some of India's leading healthcare experts and advocates. The event underscored that India is now poised to transition from viewing AI as a tool of potential to embracing it as a practical solution in real-world healthcare settings.
With a special address by H.E. Ms. Marisa Gerards, Ambassador of the Kingdom of the Netherlands to India, Nepal, and Bhutan, the event featured a thought-provoking panel discussion with leading healthcare experts, including Mr. Neeraj Jain, Director - Growth Operations, Asia, Middle East and Europe (AMEE), PATH; Dr. Ratna Devi, Board Member at IAPO and CEO of DakshamA Health; and Mr. Bharath Sesha, Managing Director, Philips Indian Subcontinent. The session was moderated by Ms. Prathiba Raju, Senior Assistant Editor at ETHealthWorld, The Economic Times Group.
According to the India-specific findings in the FHI report, 76% of healthcare professionals believe AI will help improve patient outcomes, while over 80% feel AI can save lives by enabling timely care. These figures point to a growing confidence among India's clinical community in the technology's ability to enhance, not replace, human decision-making.
'India stands at a pivotal moment in its healthcare transformation,' said Bharath Sesha, Managing Director, Philips Indian Subcontinent. 'There is growing confidence in AI, not just as a tool for efficiency, but as a catalyst for improved outcomes, broader access, and more empowered healthcare professionals. The Future Health Index 2025 findings reaffirm what we've long believed: when applied with purpose, technology can bridge the gap between capability and capacity. Trust in both the technology and the intent behind it is essential to scaling AI in a meaningful way. Cross-sector collaboration is equally critical. By bringing together clinicians, technologists, policymakers, and patients, we can co-create solutions that are clinically relevant, ethically sound, and scalable across India's diverse healthcare ecosystem.'
Global best practices suggest that collaboration is key. The development and deployment of AI in healthcare must involve a broad set of stakeholders. This inclusive approach is especially vital in countries like India, where the scale and diversity of the health system present both opportunities and risks.
'Healthcare innovation must be people-centric and globally responsible,' said H.E. Ms. Marisa Gerards, Ambassador of the Kingdom of the Netherlands to India, Nepal and Bhutan. 'During the launch of the 10th edition of the Future Health Index 2025 report commissioned by Philips, we had a meaningful discussion with Indian stakeholders, thinking not just about what technology can do, but how it can be applied ethically, equitably, and effectively.'
The findings from the report also reveal a broadening acceptance of AI in everyday practice: 72% of professionals say it supports accurate, real-time clinical decision-making, while 75% believe it is particularly valuable for training junior staff and expanding access in underserved areas.
'AI is no longer a choice; it's the only viable path to delivering care at scale for a nation of 1.5 billion people,' noted Mr. Neeraj Jain, Director - Growth Operations, Asia, Middle East and Europe (AMEE), PATH. 'But for it to work, our entire ecosystem must be prepared to adopt it responsibly. That means accelerating adoption while ensuring AI tools are developed in close consultation with clinicians, so they are fit for purpose and trusted at the point of care.'
While trust in AI is growing, it remains conditional: 67% of healthcare professionals voiced concern over data bias, highlighting the risk of inequities if AI systems are not trained on representative datasets. Questions around legal liability (44%) and defined guardrails for clinical use (45%) also persist.
'Building trust in AI is critical,' said Dr. Ratna Devi, Board Member at IAPO and CEO of DakshamA Health. 'People need clarity on how these tools work and assurance that they are safe and reliable. The doctor–patient ratio in India is unlikely to change dramatically, so AI must be seen as a tool to augment, not replace, doctors. It can enhance care delivery, improve efficiency, and help drive change, but it must always complement the human touch.'
The FHI 2025 findings also highlight the importance of sustained investment in education and digital capacity-building. Empowering healthcare workers to understand and trust AI will be essential to mainstream adoption and to ensuring long-term success.
As AI moves further into the clinical mainstream, India finds itself at a pivotal moment. With rising readiness among professionals, deepening digital infrastructure, and increasing cross-sectoral momentum, the time to build trust and act is now.
Download the full FHI 2025 India Report here
Disclaimer - The above content is non-editorial, and ET Healthworld hereby disclaims any and all warranties, expressed or implied, relating to it, and does not guarantee, vouch for or necessarily endorse any of the content.
Related Articles


Scroll.in, 10 hours ago
As young Indians turn to AI 'therapists', how confidential is their data?
This is the second of a two-part series. Read the first here.

Imagine a stranger getting hold of a mental health therapist's private notes – and then selling that information to deliver tailored advertisements to their clients. That's practically what many mental healthcare apps might be doing.

Young Indians are increasingly turning to apps and artificial intelligence-driven tools to address their mental health challenges – but have limited awareness about how these digital tools process user data.

In January, the Centre for Internet and Society published a study based on 45 mental health apps – 28 from India and 17 from abroad – and found that 80% gathered user health data that they used for advertising and shared with third-party service providers. An overwhelming number of these apps, 87%, shared the data with law enforcement and regulatory bodies.

The first article in this series had reported that some of these apps are especially popular with young Indian users, who rely on them for quick and easy access to therapy and mental healthcare support. Users had also told Scroll that they turned to AI-driven technology, such as ChatGPT, to discuss their feelings and get advice, however limited this may be compared to interacting with a human therapist. But they were not especially worried about data misuse. Keshav*, 21, reflected a common sentiment among those Scroll interviewed: 'Who cares? My personal data is already out there.'

The functioning of Large Language Models, such as ChatGPT, is already under scrutiny. LLMs are 'trained' on vast amounts of data, either from the internet or provided by their trainers, to simulate human learning, problem solving and decision making.

Sam Altman, CEO of OpenAI, which built ChatGPT, said on a podcast in July that though users talk about personal matters with the chatbot, there are no legal safeguards protecting that information. 'People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do?' he asked. 'And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

He added: 'So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up.'

Therapists and experts said the ease of access of AI-driven mental health tools should not sideline privacy concerns. Clinical psychologist Rhea Thimaiah, who works at Kaha Mind, a collective that provides mental health services, emphasised that confidentiality is an essential part of the process of therapy. 'The therapeutic relationship is built on trust and any compromise in data security can very possibly impact a client's sense of safety and willingness to engage,' she said. 'Clients have a right to know how their information is being stored, who has access, and what protections are in place.'

This is more than mere data – it is someone's memories, trauma and identity, Thimaiah said. 'If we're going to bring AI into this space, then privacy shouldn't be optional, it should be fundamental.'

Srishti Srivastava, founder of AI-driven mental health app Infiheal, said that her firm collects user data to train its AI bot, but users can access the app even without signing up and can also ask for their data to be deleted.
Dhruv Garg, a tech policy lawyer at the Indian Governance and Policy Project, said the risk lies not just in apps collecting data but in the potential downstream uses of that information. 'Even if it's not happening now, an AI platform in the future could start using your data to serve targeted ads or generate insights – commercial, political, or otherwise – based on your past queries,' said Garg. 'Current privacy protections, though adequate for now, may not be equipped to deal with each new future scenario.'

India's data protection law

For now, personal data processed by chatbots is governed by the Information Technology Act framework and the Sensitive Personal Data Rules, 2011. Section 5 of the sensitive data rules says that companies must obtain consent in writing before collecting or using sensitive information. According to the rules, information relating to health and mental health conditions is considered sensitive data. There are also specialised sectoral data protection rules that apply to regulated entities like hospitals.

The Digital Personal Data Protection Act, passed by Parliament in 2023, is expected to be notified soon. But it exempts publicly available personal data from its ambit if this information has voluntarily been disclosed by an individual. Given the black market of data intermediaries that publish large volumes of personal information, it is difficult to tell what personal data in the public domain has been made available 'voluntarily'.

The new data protection act does not have different regulatory standards for specific categories of personal data – financial, professional, or health-related, Garg said. This means that health data collected by AI tools in India will not be treated with special sensitivity under this framework. 'For instance, if you search for symptoms on Google or visit WebMD, Google isn't held to a higher standard of liability just because the content relates to health,' said Garg. (WebMD provides health and medical information.)

It might be different for AI tools explicitly designed for mental healthcare, unlike general-purpose models like ChatGPT. These, according to Garg, 'could be made subject to more specific sectoral regulations in the future'.

However, the very logic on which AI chatbots function – responding based on user data and inputs – could itself be a privacy risk. Nidhi Singh, a senior research analyst and programme manager at Carnegie India, an American think tank, said she has concerns about how tools like ChatGPT customise responses and remember user history – even though users may appreciate those functions.

Singh said India's new data protection law is quite clear that any data made publicly available by putting it on the internet is no longer considered personal data. 'It is unclear how this will apply to your conversations with ChatGPT,' she said.

Without specific legal protections, there's no telling how an AI-driven tool will use the data it has gathered. According to Singh, without a specific rule designating conversations with generative AI as an exception, it is likely that a user's interactions with these AI systems won't be treated as personal data and consequently will not fall under the purview of the act.

Who takes legal responsibility?

Technology firms have tried hard to evade legal liability for harm. In Florida, a lawsuit by a mother has alleged that her 14-year-old son died by suicide after becoming deeply entangled in an 'emotionally and sexually abusive relationship' with a chatbot.
In case of misdiagnosis or harmful advice from an AI tool, legal responsibility is likely to be analysed in court, said Garg. 'The developers may argue that the model is general-purpose, trained on large datasets, and not supervised by a human in real time,' said Garg. 'Some parallels may be drawn with search engines – if someone acts on bad advice from search results, the responsibility doesn't fall on the search engine, but on the user.'

Highlighting the urgent need for a conversation on sector-specific liability frameworks, Garg said that for now, the legal liability of AI developers will have to be assessed on a case-by-case basis. 'Courts may examine whether proper disclaimers and user agreements were in place,' he said.

In another case, Air Canada was ordered to pay compensation to a customer who was misled by its chatbot regarding bereavement fares. The airline had argued that the chatbot was a 'separate legal entity' and therefore responsible for its own actions.

Singh of Carnegie India said that transparency is important and that user consent should be meaningful. 'You don't need to explain the model's source code, but you do need to explain its limitations and what it aims to do,' she said. 'That way, people can genuinely understand it, even if they don't grasp every technical step.'

AI, meanwhile, is here for the long haul. Until India can expand its capacity to offer mental health services to everyone, Singh said, AI will inevitably fill that void. 'The use of AI will only increase as Indic language LLMs are being built, further expanding its potential to address the mental health therapy gap,' she said.


Time of India, 16 hours ago
Excessive use of AI may blunt creative thinking, caution experts
Bhopal: The tendency of Artificial Intelligence (AI) to fabricate or "hallucinate" content is raising fresh concerns among mental health professionals over its impact on cognition, originality and trust.

Speaking at the National Consultation on Adolescent Mental Health in Bhopal, Dr Pratima Murthy, director of the National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, cautioned that excessive use of generative tools may blunt creative thinking and reduce critical engagement. She reflected on AI-generated writing, noting, "Its ability to make up things on the go is very dangerous," emphasizing the need for fact-checking and careful scrutiny of outputs.

The ethical dimensions are equally pressing. "People are talking about ethics in this area," Dr Murthy said, urging the development of responsible frameworks for interacting with AI – ones that consider the moral complexity of digital systems.

Adding another layer, Dr Murthy highlighted the increasingly surreal quality of AI-generated visuals. She underscored growing unease over how artificial content blurs boundaries between reality and fabrication, prompting questions about its psychological and societal impact.

To illustrate the diminishing creative value, she cited her own experience: "You ask AI to write poems – it's great fun the first time, but then the second time you see it doing the same."

With generative AI rapidly weaving itself into the fabric of thought and communication, mental health experts are calling for deeper examination – before dependence overshadows discernment.

Peer-support training expanded for adolescent mental health: The Union ministry of health and family welfare, in collaboration with UNICEF and NIMHANS, launched "I Support My Friends", an add-on training module under the RKSK programme. Designed to help adolescents recognize emotional distress, offer support, and connect peers to help, the module uses the Look, Listen, Link framework and interactive tools to foster empathy and resilience.

At the national consultation, Dr Pratima Murthy, MP deputy CM and health minister Rajendra Shukla, and UNICEF experts emphasized the importance of youth-led mental health support and early intervention. Sessions highlighted rising challenges like anxiety, depression, digital addiction, and academic stress – and the urgent need to build safe spaces and reduce stigma. The launch marks a step toward a more inclusive, community-based mental health system focused on prevention and empowerment, officials said.

'Must know where use ends and addiction begins': Dr Pratima Murthy raised a clear warning about the blurred lines between technology use and dependency. Excessive screen exposure could reduce attention span, impair memory, and aggravate underlying mental health conditions. Addressing the challenges in regulating screen time, she noted, "We don't know where use ends and addiction begins."

Highlighting the cognitive risks associated with excessive exposure to digital tools, Dr Murthy pointed to growing cases of reduced attention span, memory problems, and diminished creative engagement. She cited experiences from NIMHANS' SHUT clinic, which treats adolescents facing challenges linked to compulsive screen use and behavioral addictions.
Key challenges and insights: Mental health experts present a clear picture of the issues facing young people in Madhya Pradesh and across India. The focus is on reaching diverse groups, using technology both as a tool and a challenge, and addressing deeper social and structural barriers. Their approach combines real-world data with on-the-ground experience for a well-rounded response.

The multi-stakeholder approach to adolescent mental health emphasizes community-based and school-based interventions, framing mental health as a social development concern rather than solely a health issue. State-level initiatives in Madhya Pradesh, such as TeleMANAS and the Umang Program, demonstrate government commitment to prioritizing mental health alongside education and employment, particularly for adolescents facing unique challenges.

The SHUT Clinic at the National Institute of Mental Health and Neurosciences serves as a specialized center addressing technology-related behavioral issues, highlighting the dual nature of technology as both beneficial and detrimental to mental health.


Time of India, a day ago
Norway's COVID vaccine chief Are Stuwitz Berg dies at 53 after long illness
Stuwitz Berg, the leader of Norway's COVID-19 vaccination drive, passed away at 53 after an illness. Berg was a key figure during the pandemic, explaining vaccine science on TV. He managed vaccine rollout, addressed public concerns, and collaborated with global health bodies. He also dedicated time to pediatric care in Oslo and Tanzania. Are Stuwitz Berg, the Norwegian physician who led the country's COVID-19 vaccine rollout, has died at 53 after a prolonged illness, according to the public health institute FHI Tired of too many ads? Remove Ads Leadership during crisis Tired of too many ads? Remove Ads Are Stuwitz Berg , the senior physician who led Norway 's national COVID-19 vaccination campaign and spent decades advancing public health, has died at the age of 53 after a prolonged illness, according to the Norwegian Institute of Public Health (FHI).Berg served as department director and chief physician at FHI, where he became a prominent figure during the pandemic. He was often seen on national television, calmly explaining the science behind vaccines and guiding Norway through one of its most challenging public health crises. He is survived by his wife and three officials have not released a specific cause of death, his colleagues confirmed that he had been battling a serious illness for several the pandemic, Berg oversaw the rollout of COVID-19 vaccines across Norway, including managing logistics, communicating with international health partners, and addressing public skepticism. He was widely recognized for his transparent approach and efforts to build trust in was known not only for his leadership during COVID-19 but also for his lifelong dedication to pediatric medicine. Earlier in his career, he worked in Oslo hospitals and spent time in Tanzania providing medical care to underserved had previously acknowledged that vaccines, like any medical intervention, can carry rare side effects, particularly in younger populations, but he consistently stood by their overall safety and his death, some online platforms have circulated unverified claims linking his passing to the COVID-19 vaccines he helped deploy. Norwegian authorities and mainstream media have not supported those claims. No medical or official sources have confirmed any link between Berg's death and vaccination.