Latest news with #Physics-InformedNeuralNetworks


Time of India
a day ago
- Health
What if doctors could practise your surgery on a virtual YOU first? Welcome to the future of Indian healthcare
Imagine practising a surgery on a virtual version of your body before doctors operate on the real you. That's not science fiction anymore; it's happening in India. As quoted by TOI, senior heart transplant surgeon Dr K R Balakrishnan now makes a stop at IIT Madras before performing surgeries on complicated heart patients. At the biomedical engineering lab, he works on 3D virtual versions of his patients, also called digital twins. These twins help the doctor and his team analyse blood vessels, muscles and more before deciding the best course of treatment.

What Exactly Is a Digital Twin?

A digital twin is a computer-based copy of a real-world object or human. It receives real-time data from its original source, helping doctors make accurate medical decisions. The concept first appeared in aerospace engineering, but now it's being used in hospitals too. Using sensors and medical test results, doctors can create a virtual model of a patient and try out different surgeries or treatments before doing anything to the actual patient.

Digital Twins at IIT Madras

Professor R Krishnakumar, who used to design digital twins for tyre companies, now heads the biomedical engineering lab at IIT-M. Quoted by TOI, he said, 'Give us the medical records of a patient, and his digital twin will be ready in 45 minutes. An hour later, doctors can test treatment options on this synthetic patient.'

Sometimes doctors don't need a full 3D model; a simple graph can help them decide whether the patient needs a life-saving device like an intra-aortic balloon pump. According to Krishnakumar, 'Nine times out of ten, the system's decision has been right.'

How Surgeons Use Digital Twins

Surgeons at JIPMER (Jawaharlal Institute of Postgraduate Medical Education and Research) in Puducherry are also working with digital twins. They've created 3D models of the brain to plan surgeries for deep-seated tumours. Neurosurgeon Dr M S Gopalakrishnan, quoted by TOI, said, 'We rehearse surgeries virtually and choose the safest and most effective method before operating.'

These rehearsals are done using virtual reality (VR), which helps doctors practise every move and avoid risky areas. Once the plan is ready, it's loaded into a computer-guided system that assists during the real surgery by overlaying the virtual route onto the real-time view of the brain using augmented reality (AR).

What's Next in Digital Twin Tech?

According to Dr Gopalakrishnan, the next step is for digital twins to give feedback during live surgery. 'If I move a patient's brain lobe in the operating room, the virtual twin should tell me what could happen next,' he said. This level of smart interaction may soon be possible using Physics-Informed Neural Networks (PINNs), which train a neural network to respect the physical equations governing the tissue it models, making the twins more accurate even when data is limited or biological processes are complex.
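To make "physics-informed" concrete, here is a minimal, purely illustrative sketch of the core PINN training trick: the network is penalised not only for mismatching measured data but also for violating a governing physical equation. For simplicity it uses the 1-D heat equation rather than any cardiac or neural model, and every constant and variable name below is an assumption for illustration, not something from the TOI report.

```python
# Minimal PINN sketch (illustrative only): fit u(x, t) while enforcing
# the 1-D heat equation u_t = alpha * u_xx as a soft constraint.
import torch
import torch.nn as nn

torch.manual_seed(0)
ALPHA = 0.1  # assumed diffusivity; a real twin would calibrate this

net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def physics_residual(x, t):
    """How badly the network violates u_t - alpha * u_xx = 0."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - ALPHA * u_xx

# Toy "sensor" data: a few stand-in temperature readings.
x_data = torch.rand(16, 1)
t_data = torch.rand(16, 1)
u_data = torch.sin(3.14159 * x_data) * torch.exp(-t_data)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    data_loss = ((net(torch.cat([x_data, t_data], 1)) - u_data) ** 2).mean()
    x_c = torch.rand(64, 1)  # collocation points with no measurements
    t_c = torch.rand(64, 1)
    physics_loss = (physics_residual(x_c, t_c) ** 2).mean()
    loss = data_loss + physics_loss  # physics acts as a regulariser
    loss.backward()
    opt.step()
```

The physics term is what lets such a model keep behaving sensibly between sparse measurements, which is exactly the "limited data" problem the surgeons describe.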
Beyond Surgery: Managing Chronic Illnesses

Digital twins aren't just for surgery. In cancer care, doctors use them to test treatments and reduce side effects. In diabetes, they help track sugar levels and suggest lifestyle changes that can even reverse the disease. Dr Arjun Suresh, a general medicine expert, quoted by TOI, said, 'Right now, we treat sugar levels reactively. With digital twins and real-time data from glucose monitors, we can be proactive.' A team led by Dr Rajan Ravichandran is also working on using digital twins to predict kidney problems in diabetic patients.

A New Era of Drug Discovery

Digital twins are also helping in drug development. They make it possible to run virtual clinical trials and test drug reactions without using real humans, saving both time and money.

Some Challenges Still Remain

Though the technology is promising, doctors admit it's not perfect. Dr Balakrishnan said, 'There are still issues with data quality, how we use the models, and training people to use them well. Plus, there are ethical concerns about how much influence these tools should have on treatment decisions.'

Still, as digital twins grow smarter and more accessible, they may become a routine part of treatment: guiding doctors, saving lives, and making medicine more precise than ever before.

Inputs from TOI


Time Magazine
07-06-2025
- Business
AI Can't Replace Education—Unless We Let It
As commencement ceremonies celebrate the promise of a new generation of graduates, one question looms: will AI make their education pointless? Many CEOs think so. They describe a future where AI will replace engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted AI will replace mid-level engineers who write the company's computer code. NVIDIA's Jensen Huang has even declared coding itself obsolete. While Bill Gates admits the breakneck pace of AI development is 'profound and even a little bit scary,' he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.

Despite the hype, AI cannot 'think' for itself or act without humans—for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world? AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability.

Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. 'When they know the answer, they'll tell it to you, and when they don't know the answer they're really good at bullsh*tting,' he said at an event at the University of Pennsylvania. So unless a user already knows a lot about a given subject, Sokal argues, they might not catch a 'bullsh*tting' chatbot.

That, to me, perfectly captures AI's so-called 'knowledge.' It mimics understanding by predicting word sequences but lacks conceptual grounding. That's why 'creative' AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance. When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.

That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether they are laws of physics, biological systems, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.

I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything. Harvesting too early or late reduces essential oil potency, hurting quality and profits. A purely data-driven AI may waste time combing through irrelevant patterns. However, a MINN starts with plant biology. It uses equations linking heat, light, frost, and water to blooming to make timely and financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works. That knowledge comes from science, which humans develop.
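As a concrete, entirely hypothetical illustration of that idea, the sketch below constrains a bloom-time predictor with a standard growing-degree-day (GDD) accumulation rule from plant phenology: the network is only allowed to predict a biologically meaningful parameter, while bloom timing itself follows the mechanism. The base temperature, features, and the soft-argmin trick are all illustrative assumptions, not the author's actual model.

```python
# Hypothetical MINN-style sketch: a tiny network predicts only a
# growing-degree-day threshold; bloom timing follows the GDD mechanism.
import torch
import torch.nn as nn

T_BASE = 10.0  # assumed base temperature (deg C) below which growth stalls

param_net = nn.Sequential(  # site features -> GDD threshold for bloom
    nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1), nn.Softplus(),
)

def predicted_bloom_day(daily_temps, site_features):
    """Mechanism: bloom occurs when accumulated degree-days cross the
    network-predicted threshold. A soft argmin keeps it differentiable."""
    gdd = torch.clamp(daily_temps - T_BASE, min=0.0).cumsum(dim=0)
    threshold = param_net(site_features).squeeze()
    # Soft "first day the threshold is crossed": weight days by closeness.
    weights = torch.softmax(-10.0 * (gdd - threshold).abs(), dim=0)
    days = torch.arange(len(daily_temps), dtype=torch.float32)
    return (weights * days).sum()

# Toy data: one season of temperatures and one observed bloom day.
temps = 12.0 + 10.0 * torch.rand(180)    # stand-in daily mean temps
site = torch.tensor([0.3, 0.7, 0.1])     # stand-in soil/light/frost features
observed_bloom = torch.tensor(95.0)

opt = torch.optim.Adam(param_net.parameters(), lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = (predicted_bloom_day(temps, site) - observed_bloom) ** 2
    loss.backward()
    opt.step()
```

Because the network can only move a parameter the biology exposes, its predictions stay interpretable: a grower can read the learned threshold as degree-days to bloom, not as an opaque pattern.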
Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. However, a MINN, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model. That means, instead of guessing, it understands how heat moves through the body, allowing it to identify what's wrong, what's causing it, and precisely where it is, using the physics of heat flow through tissue. In one case, a MINN predicted a tumor's location and size within a few millimeters, grounded entirely in how cancer disrupts the body's heat signature. (A sketch of what such physics-embedded modelling can look like appears at the end of this piece.)

The takeaway is simple: humans are still essential. As AI becomes more sophisticated, our role is not disappearing. It is shifting. Humans need to 'call bullsh*t' when an algorithm produces something bizarre, biased, or wrong. That need isn't just a weakness of AI; it is humans' greatest strength. It means our knowledge also needs to grow so we can steer the technology, keep it in check, ensure it does what we think it does, and help people in the process.

The real threat isn't that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn't make sense.

Fortunately, the future doesn't have to play out like this. We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.

After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work 'under the hood' by learning the systems themselves, rather than using them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding. This approach is necessary today. We don't need more users clicking 'generate' on black-box models. We need people who can understand the AI's logic, its code and math, and catch its 'bullsh*t.'

AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently, and why science and deep understanding matter. The choice is not whether to reject or embrace AI. It's whether we'll stay educated and smart enough to guide it.
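For readers curious what 'embedding bioheat transfer laws directly into the model' can mean mechanically, here is the promised closing sketch. It penalises a temperature-field network for violating a simplified 1-D, steady-state Pennes bioheat equation while learning a tumor's depth and heat output from surface readings. The geometry, constants, and the equation's simplifications are all assumptions for illustration, not the RIT group's actual model.

```python
# Closing sketch (illustrative assumptions throughout): a physics-informed
# loss built on a simplified 1-D steady-state Pennes bioheat equation,
#   k*T'' + w_b*c_b*(T_a - T) + Q(x) = 0,
# where Q(x) models extra metabolic heat from a tumor at unknown depth.
import torch
import torch.nn as nn

K = 0.5          # tissue conductivity, W/(m*K)    -- assumed constant
WB_CB = 2000.0   # perfusion * blood heat capacity -- assumed constant
T_ART = 37.0     # arterial temperature, deg C

temp_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
tumor_depth = torch.tensor(0.02, requires_grad=True)    # metres, learned
tumor_power = torch.tensor(5000.0, requires_grad=True)  # W/m^3, learned

def bioheat_residual(x):
    """How badly the network violates the bioheat balance at depths x."""
    x = x.requires_grad_(True)
    T = temp_net(x)
    T_x = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    T_xx = torch.autograd.grad(T_x.sum(), x, create_graph=True)[0]
    # Gaussian bump: extra heat source centred at the learned tumor depth.
    q_tumor = tumor_power * torch.exp(-((x - tumor_depth) / 0.005) ** 2)
    return K * T_xx + WB_CB * (T_ART - T) + q_tumor

# Stand-in surface readings at x = 0 (a real model is 3-D thermography).
x_surf = torch.zeros(8, 1)
T_surf = torch.full((8, 1), 34.2)

opt = torch.optim.Adam(list(temp_net.parameters())
                       + [tumor_depth, tumor_power], lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    x_c = 0.05 * torch.rand(64, 1)  # collocation points in a 5 cm slab
    physics_loss = (bioheat_residual(x_c) ** 2).mean()
    data_loss = ((temp_net(x_surf) - T_surf) ** 2).mean()
    (physics_loss + data_loss).backward()
    opt.step()
# After training, tumor_depth and tumor_power are the physics-consistent
# explanation of the surface temperatures: the "where" and the "why".
```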