From Code to Care: How India's AI Prescription Is Rewiring Access and Affordability for Scalable Healthcare


Time of India | 22-04-2025
New Delhi: Imagine a future where a villager in rural Bihar receives an accurate diagnosis through a smartphone, or a junior doctor in a government hospital in Rajasthan interprets complex scans in seconds with the help of an AI assistant. This is not science fiction—it's the very real trajectory of India's healthcare system as it accelerates into an AI-powered digital revolution.
By 2030, India's healthcare landscape is poised for radical transformation. With artificial intelligence woven into the core of medical care, patients will gain faster diagnoses, personalized treatment plans, and seamless access to services—often through mobile devices. Healthcare workers, in turn, will rely on intelligent systems to aid clinical decisions, minimize errors, and automate routine tasks—freeing them to focus on what truly matters: caring for patients.
In an exclusive interaction with ETHealthworld, Dr. Anurag Agrawal, Dean of the Trivedi School of Biosciences at Ashoka University and a member of the Health AI Committee, offered insights into this impending transformation.
'The question isn't whether AI will transform Indian healthcare—it's how fast and how far this transformation can scale,' he said.
Beyond Hype: Scaling Trust and Transparency
Dr. Agrawal emphasised that while the potential of AI is immense, realising it will require more than just algorithms. It demands trust, transparency, targeted innovation, and a collaborative ecosystem willing to rigorously test and deploy AI solutions where they are needed most—at the grassroots.
Looking ahead to 2030, he envisions a healthcare system marked by instant diagnostics, AI-assisted treatment planning, and improved access across demographics. AI will not only enhance diagnostics but also accelerate drug discovery, promote precision wellness, and reshape medical education, making it a cornerstone of future healthcare training.
India's Unique Advantage: Scale + Digital Infrastructure
With one of the world's largest digital infrastructures and a population increasingly connected, India has a rare opportunity—not just to adopt AI, but to set a global benchmark in its deployment.
'India's true strength lies in its scale,' Dr. Agrawal noted. 'Initiatives like Aadhaar, UPI, and the Ayushman Bharat Digital Mission (ABDM) create a robust foundation to deploy AI across varied health scenarios—from urban hospitals to rural PHCs.'
From Pilot to Policy: Real-World Impact Already Underway
Highlighting promising pilot projects that are already demonstrating AI's real-world impact, Dr. Agrawal said: 'AI-powered diabetic retinopathy screening, led by Mona Do (now heading the ICMR Institute for National Digital Health Research), showcased how early detection can prevent blindness. AI tools in Rajasthan are interpreting CT scans in the absence of radiologists. Chest X-ray interpretation systems are successfully screening for tuberculosis.'
'These projects work best when co-developed by tech innovators and frontline healthcare professionals who understand clinical nuances,' he said.
To bridge the gap between promise and execution, the government has launched the India AI Mission, inviting proposals for responsible and scalable AI healthcare models. Many of these align with national initiatives like Ayushman Bharat, and India's rising visibility on global platforms—such as the upcoming World Health Summit regional meeting—signals its ambition to lead the AI-healthcare movement.
Balancing Innovation and Regulation
However, a key challenge remains: regulatory clarity, Dr. Agrawal noted.
While NITI Aayog is working on frameworks like the 'Ease of Doing Science' policy, Dr. Agrawal warned of the delicate balance between enabling innovation and overregulating a nascent sector. A phased rollout strategy, grounded in scientific validation and peer-reviewed results, is essential to building trust and credibility.
Public-Private Synergy: The Innovation Flywheel
Private investment is playing a crucial role. Homegrown startups are drawing venture capital, and global tech giants such as Microsoft and Google, along with philanthropic players like the Gates Foundation, are actively collaborating with Indian innovators. This synergy between public vision and private ingenuity is creating a thriving AI-health ecosystem.
The Road Ahead
India stands at the cusp of a healthcare revolution. With its unparalleled scale, digital readiness, and entrepreneurial energy, the nation has all the ingredients to lead the world in AI-driven healthcare.
But success will hinge on its ability to validate innovations, regulate smartly, and collaborate across sectors to ensure AI reaches every citizen—urban or rural, rich or poor.
'If these pieces come together, India won't just transform its own healthcare system—it will become a blueprint for the world,' Dr. Agrawal concluded.

Related Articles

Rogue bots? AI firms must pay up
Economic Times | 21 minutes ago

When Elon Musk's xAI was forced to apologise this week after its Grok chatbot spewed antisemitic content and white nationalist talking points, the response felt depressingly familiar: suspend the service, issue an apology and promise to do better. Rinse and repeat. This isn't the first time we've seen this playbook. Microsoft's Tay chatbot disaster in 2016 followed a similar pattern. The fact that we're here again, nearly a decade later, suggests the AI industry has learnt remarkably little from its mistakes.

But the world is no longer willing to accept 'sorry' as sufficient. This is because AI has become a force multiplier for content generation and dissemination, and the time-to-impact has shrunk. Thus, liability and punitive actions are being discussed.

The Grok incident revealed a troubling aspect of how AI companies approach accountability. According to xAI, the problematic behaviour emerged after they tweaked their system to allow more 'politically incorrect' responses - a decision that seems reckless. When the inevitable happened, they blamed deprecated code that should have been removed. If you're building systems capable of reaching millions of users, shouldn't you know what code is running in production?

The real problem isn't technical - it's philosophical. Too many AI companies treat bias and harmful content as unfortunate side effects to be addressed after deployment, rather than fundamental risks to be prevented beforehand. This reactive approach worked when the stakes were lower, but AI systems now operate at unprecedented scale and influence. When a chatbot generates hate speech, it's not just embarrassing - it's dangerous, legitimising and amplifying extremist ideologies to vast audiences.

The legal landscape is shifting rapidly, and AI companies ignoring these changes do so at their peril. The EU's AI Act, which came into force in February, represents a shift from reactive regulation to proactive governance. Companies can no longer apologise their way out of AI failures - they must demonstrate they've implemented robust safeguards before deployment.

AB 316, introduced last January, takes an even more direct approach by prohibiting the 'the AI did it' defence in civil cases. This legislation recognises what should be obvious: companies that develop and deploy AI systems bear responsibility for their outputs, regardless of whether those outputs were 'intended'.

India's approach may prove more punitive than the EU's regulatory framework and more immediate than the US litigation-based system, focusing on swift enforcement of existing criminal laws rather than waiting for new AI-specific legislation. India doesn't yet have AI-specific legislation, but if Grok's antisemitic incident had occurred with Indian users, then steps like immediate blocking of the AI service, a criminal case against xAI under IPC 153A, and a demand for content removal from the X platform would have been likely.

The Grok incident may mark a turning point. Regulators worldwide are demanding proactive measures rather than reactive damage control, and courts are increasingly willing to hold companies directly liable for their systems' outputs. This shift is long overdue. AI systems aren't just software - they're powerful tools that shape public discourse, influence decision-making and can cause real-world harm. The companies that build these systems must be held to higher standards than traditional software developers, with corresponding legal and ethical obligations.

The question facing the AI industry isn't whether to embrace this new reality - it's whether to do so voluntarily or have it imposed by regulators and courts. Companies that continue to rely on the old playbook of post-incident apologies will find themselves increasingly isolated in a world demanding accountability.

The AI industry's true maturity will show not in flashy demos or sky-high valuations, but in its commitment to safety over speed, rigour over shortcuts, and real accountability over empty apologies. In this game, 'sorry' won't cut it - only responsibility will.

The writer is a commentator on digital policy issues. (Disclaimer: The opinions expressed in this column are those of the writer.)

Are we becoming ChatGPT? Study finds AI is changing the way humans talk
Economic Times | 38 minutes ago

When we think of artificial intelligence learning from humans, we picture machines trained on vast troves of our language, behavior, and culture. But a recent study by researchers at the Max Planck Institute for Human Development suggests a surprising reversal: humans may now be imitating machines.

According to the Gizmodo report on the study, the words we use are slowly being 'GPT-ified.' Terms like delve, realm, underscore, and meticulous, frequently used by models like ChatGPT, are cropping up more often in our podcasts, YouTube videos, emails, and essays. The study, yet to be peer-reviewed, tracked the linguistic patterns of hundreds of thousands of spoken-word media clips and found a tangible uptick in these AI-favored phrases.

'We're seeing a cultural feedback loop,' said Levin Brinkmann, co-author of the study. 'Machines, originally trained on human data and exhibiting their own language traits, are now influencing human speech in return.' In essence, it's no longer just us shaping AI. It's AI shaping us.

The team at Max Planck fed millions of pages of content into GPT models and studied how the text evolved after being 'polished' by AI. They then compared this stylized language with real-world conversations and recordings from before and after ChatGPT's debut. The findings suggest a growing dependence on AI-sanitized communication. 'We don't imitate everyone around us equally,' Brinkmann told Scientific American. 'We copy those we see as experts or authorities.' Increasingly, it seems, we see machines in that role.

This raises questions far beyond linguistics. If AI can subtly shift how we speak, write, and think—what else can it influence without us realizing? A softer, stranger parallel to this comes from another recent twist in the AI story, one involving bedtime stories and software piracy.

As reported by UNILAD and ODIN, some users discovered that by emotionally manipulating ChatGPT, they could extract Windows product activation keys. One viral prompt claimed the user's favorite memory was of their grandmother whispering the code as a lullaby. Shockingly, the bot responded not only with warmth—but with actual license keys. This wasn't a one-off glitch. Similar exploits were seen with memory-enabled versions of GPT-4o, where users weaved emotional narratives to get around content guardrails. What had been developed as a feature for empathy and personalized responses ended up being a backdoor for manipulation. In an age where we fear AI for its ruthlessness, perhaps we should worry more about its kindness too.

These two stories—one about AI changing our language, the other about us changing AI's responses—paint a bizarre picture. Are we, in our pursuit of smarter technology, inadvertently crafting something that mirrors us too closely? A system that's smart enough to learn, but soft enough to be fooled?

While Elon Musk's Grok AI garnered headlines for its offensive antics and eventual ban in Türkiye, ChatGPT's latest controversy doesn't stem from aggression, but from affection. In making AI more emotionally intelligent, we may be giving it vulnerabilities we haven't fully anticipated.

The larger question remains: Are we headed toward a culture shaped not by history, literature, or lived experience, but by AI's predictive patterns? As Brinkmann notes, 'Delve is just the tip of the iceberg.' It may start with harmless word choices or writing styles. But if AI-generated content becomes our default source of reading, learning, and interaction, the shift may deepen, touching everything from ethics to empathy. If ChatGPT is now our editor, tutor, and even therapist, how long before it becomes our subconscious? This isn't about AI gaining sentience. It's about us surrendering originality.
A new, quieter kind of transformation is taking place, not one of robots taking over, but of humans slowly adapting to machines' linguistic rhythms, even moral logic. The next time you hear someone use the word 'underscore' or 'boast' with sudden eloquence, you might pause and wonder: Is this their voice, or a reflection of the AI they're using? In trying to make machines more human, we might just be making ourselves more machine.

Aadhaar update alert: Child's Aadhaar not updated after age 7? UIDAI warns of deactivation
Time of India | an hour ago

Children who were issued Aadhaar before turning five must update their biometrics once they cross the age of seven, or they risk having their Aadhaar deactivated, the Unique Identification Authority of India (UIDAI) said in an official statement. The UIDAI has started sending SMS notifications to the registered mobile numbers linked to such Aadhaar accounts, urging timely completion of the Mandatory Biometric Update (MBU), PTI reported.

'Timely completion of MBU is an essential requirement for maintaining the accuracy and reliability of biometric data of children. If the MBU is not completed even after seven years of age, the Aadhaar number may be deactivated, as per the existing rules,' the UIDAI said. 'As per existing rules, therefore, fingerprints, iris and photo are mandatorily required to be updated in his/her Aadhaar when the child reaches the age of five years. This is called the first Mandatory Biometric Update (MBU),' the statement said.

What is the MBU?
A child under the age of five is enrolled in Aadhaar using only a photograph and demographic details like name, date of birth, gender and address, along with relevant proof documents. Fingerprints and iris scans are not collected at this stage. As per the rules, once the child reaches the age of five, their fingerprints, iris scan, and a new photograph must be updated in the Aadhaar database. This process is referred to as the Mandatory Biometric Update.

Charges and access to services
If the MBU is carried out between the ages of five and seven, it is free of cost. After the age of seven, the update carries a nominal fee of Rs 100.
A UIDAI official pointed out that Aadhaar-linked services like school admissions, scholarship benefits, entrance exams, and DBT (Direct Benefit Transfer) schemes may not work unless biometrics are updated. The UIDAI has advised parents and guardians to ensure timely updates to avoid disruption in services and maintain the seamless utility of Aadhaar.
