
JEE rankers choose MIT over IIT for research, global options
Devesh won't be alone. Ved Lahoti, last year's Rank 1 and the highest scorer in the entrance exam (352/360) in recent history, is also wrapping up his Powai chapter in exchange for a fully funded scholarship at MIT.
There's a murmur of a trend here. In 2020, it was Chirag Falor who chose MIT over IIT. Before that, it was Chitraang Murdia, who spent a year at IIT Bombay before transferring to MIT. He now holds a PhD from Berkeley.
"It seems like MIT trusts the rigour of JEE Advanced and the promise of our Olympiad stars," said Prof Vijay Singh, once at IIT Kanpur, who later superannuated from the Homi Bhabha Centre for Science Education.
Jalgaon's Devesh has a record that goes far beyond the JEE rank: three gold medals, two at the International Junior Science Olympiad (2021 and 2022) and one at the International Chemistry Olympiad (2024). In 2020, he was awarded the Bal Shakti Puraskar.
At 12, when most children were still tracing constellations in the sky, Devesh was mapping their vanishing — authoring a paper on light pollution.
He received admission to MIT in March but sat for JEE Advanced anyway — a "back-up," he calls it. Devesh is not alone. But the others chose to stay for a year, to feel the pulse on Indian campuses, before going overseas.
They too were accepted by American universities, whose doors — as Professor Vijay Singh notes — open wide for those with Olympiad medals and high JEE scores.
The students' reasons for the switch are apparent. "I'm fully satisfied with IIT Bombay. But it lags in research. Globally, it's not even in the top 100. So, I applied to MIT — and when it came through, I took it. A lot of students have taken transfer to MIT and when I asked them, they said the transfer was truly worth it," said Lahoti.
Nishank Abhangi, who spent 2019–2020 immersed in IIT Bombay's rigour before packing for MIT, and Mahit Gadhewala, All India Rank 9 in 2022, who also left after a year at IITB, are the other examples. Prof Singh said the first such student to do so was Raghu Mahajan, who spent a year at IIT Delhi, then took off to MIT, completed his PhD at Stanford, and is currently spending a year at the International Centre for Theoretical Sciences, Bengaluru.
"He was very committed to coming back to India," recalled Prof Singh. Lahoti too shares the same feelings. "I have no plans to settle in the USA."
Related Articles

Business Standard · a day ago
'Can't interfere': SC dismisses NEET-UG 2025 plea challenging answer key
The Supreme Court (SC) on Friday dismissed a petition seeking a revision of the NEET-UG 2025 results over an alleged error in the final answer key, ruling that it would not entertain individual grievances stemming from a national-level examination, according to a report by LiveLaw. A bench of Justices PS Narasimha and R Mahadevan refused to entertain the plea filed by candidate Shivam Gandhi Raina, who challenged the National Testing Agency's (NTA's) answer to question number 136 (code no 47). The petition had also sought a stay on the ongoing counselling process.

'We have dismissed identical matters earlier,' the bench said. 'We agree there may be multiple correct answers, but we cannot interfere in an exam taken by lakhs of candidates. This is not an individual case; thousands could be affected.'

Candidate argued for extra marks, cited NCERT

Senior advocate R Balasubramaniam, appearing for the petitioner, argued that even a single mark could materially impact a candidate's rank and career trajectory. He pointed to the apex court's intervention in NEET-UG 2024, where errors in the exam were rectified following review by an expert committee from IIT-Delhi.

However, Justice Narasimha clarified that the 2024 case involved systemic irregularities and broader procedural concerns. 'This is not the same context,' he said, declining the petitioner's request to convene an expert panel for review.

The disputed question

Cardiac activities of the heart are regulated by:
A - Nodal tissue
B - A special neural centre in the medulla oblongata
C - Adrenal medullary hormones
D - Adrenal cortical hormones

While the NTA recognised Option 2 (A, B and C) as the correct answer, Raina contended that, based on the NCERT Class XI Biology textbook, the answer should include all four options. He claimed that correcting the key would award him five additional marks, significantly improving his All India Rank of 6,783 and General Category Rank of 3,195.

Despite acknowledging the potential impact on individual students, the court reiterated that it would not intervene in the result declarations of a national-level examination unless systemic failings were involved. With the dismissal, the NEET-UG 2025 counselling process will proceed.


Scroll.in · 2 days ago
Can ChatGPT 'rot' your brain as MIT study claims?
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty? Most importantly, there has been concern that using AI will lead to a widespread 'dumbing down', or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.

Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to 'cognitive debt' and a 'likely decrease in learning skills'. So what did the study find?

Brain vs AI

Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains ('brain-only' group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.

The cognitive engagement of those who used AI was significantly lower than the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.

Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse and had engagement that was only slightly better than the other group's during their first session, far below the engagement of the brain-only group in their third session.

The authors claim this demonstrates how prolonged use of AI led to participants accumulating 'cognitive debt'. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups. Cautiously, the authors note that only 18 participants (six per condition) completed the fourth, final session. Therefore, the findings are preliminary and require further testing.

Does AI really make us stupider?

These results do not necessarily mean that students who used AI accumulated 'cognitive debt'. In our view, the findings are due to the particular design of the study.

The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.

When the AI group finally got to 'use their brains', they were only doing the task once. As a result, they were unable to match the other group's experience. They achieved only slightly better engagement than the brain-only group during the first session. To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.

Similarly, the fact the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics. As writing without AI required more substantial engagement, they had a far better recall of what they had written in the past. Hence, they primarily used AI to search for new information and refine what they had previously written.

What are the implications?
To understand the current situation with AI, we can look back to what happened when calculators first became available. Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks. Effectively, the bar was significantly raised, which made students work equally hard (if not harder) than before calculators were available.

The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago. In such situations, AI can indeed be detrimental. Students can for the most part offload critical engagement with learning to AI, which results in 'metacognitive laziness'.

However, just like calculators, AI can and should help us accomplish tasks that were previously impossible – and still require significant engagement. For example, we might ask trainee teachers to use AI to produce a detailed lesson plan, which will then be evaluated for quality and pedagogical soundness in an oral examination.

In the MIT study, participants who used AI were producing the 'same old' essays. They adjusted their engagement to deliver the standard of work expected of them. The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.

Learning how to use AI

Current and future generations need to be able to think critically and creatively and solve problems. However, AI is changing what these things mean. Producing essays with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy.

Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to an AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.

Vitomir Kovanovic is Associate Professor and Associate Director of the Centre for Change and Complexity in Learning (C3L), Education Futures, University of South Australia. Rebecca Marrone co-authored this article.


Mint · 2 days ago
Going nuclear will be the only way to keep the lights on as AI guzzles ever more electricity
Nishant Sahdev

Artificial intelligence consumes energy in such bulk that its rise has thrown the world into an infrastructure emergency. Thankfully, nuclear power is not just viable; its risks have been on the decline. It's the only way out now. Nuclear energy is the only scalable source of clean electricity in existence that runs 24/7.

Recently, I was in a conversation with MIT researchers on artificial intelligence (AI) and nuclear energy. While discussing the subject, we saw a video clip of a data centre that looked like a giant fridge but buzzed like a jet engine. Inside, thousands of AI chips were training a new language model—one that could write poems, analyse genomes or simulate the weather on Mars.

What struck me wasn't the intelligence of this machine. It was the sheer energy it was devouring. The engineer said, "This one building consumes as much power as a small town." That's when the magnitude of the challenge hit me: if AI is our future, how on earth will we power it?

Also Read: AI as infrastructure: India must develop the right tech

All that intelligence takes energy. A lot of it. More than most people realize. And as someone who's spent years studying the physics of energy systems, I believe we are about to hit a hard wall. To be blunt: AI is growing faster than our ability to power it. And unless we confront this, the very tools meant to build our future could destabilize our energy systems—or drag us backward on climate. One solution has been pinpointed by the AI industry: nuclear energy.

Most people don't associate AI with power plants. But every chatbot and image generator is backed by vast data centres full of servers, fans and GPUs running day and night. These machines don't sip power. They guzzle it. In 2022, data centres worldwide consumed around 460 terawatt-hours. But that's just the baseline. Goldman Sachs projects that by 2030, AI data centres will use 165% more electricity than they did in 2023.

And it's not just about scale. It's about reliability. AI workloads can't wait for the sun to shine or wind to blow. They need round-the-clock electricity, without fluctuations or outages. That rules out intermittent renewables for a large share of the load—at least for now.

Also Read: Rely on modern geothermal energy to power our AI ambitions

Can power grids handle it?

The short answer: not without big changes. In the US, energy planners are already bracing for strain. States like Virginia and Georgia are seeing huge surges in electricity demand from tech campuses. One recent report estimated that by 2028, America will need 56 gigawatts of new power generation capacity just for data centres. That's equivalent to building 40 new large power plants in less than four years.

The irony? AI is often promoted as a solution to climate change. But without clean and scalable energy, its growth could have the opposite effect. For example, Google's carbon emissions rose 51% from 2019 to 2024 by its own assessment, largely on account of AI's appetite for power. This is an infrastructure emergency.

Enter nuclear energy—long seen as a relic of the Cold War or a post-Chernobyl nightmare.
But in a world hungry for carbon-free baseload power, nuclear power is making a quiet comeback. Let's be clear: nuclear energy is the only scalable source of clean electricity in existence that runs 24/7. A single large reactor can power multiple data centres without emitting carbon or depending on weather conditions.

Also Read: India should keep all its nuclear power options in play

Tech companies are already acting

Microsoft signed a deal to reopen part of the Three Mile Island nuclear plant to power its AI operations. Google is investing in small modular reactors (SMRs), compact next-generation nuclear units designed to be safer, faster to build and well suited to campuses. These are early signs of a strategic shift: AI companies are realizing that if they want to build the future, they'll have to power it themselves.

As a physicist, I've always been fascinated by nuclear energy's elegance. A single uranium pellet—smaller than a fingertip—holds the same energy as a tonne of coal. The energy density is unmatched.

But it's not just about big reactors anymore. The excitement stems from advanced reactors. SMRs can be built in factories, shipped by truck and installed near tech campuses or even remote towns. Molten salt reactors and micro-reactors promise even greater safety and efficiency, with lower waste. New materials and AI-assisted monitoring make this technology far safer than past generations. For the first time in decades, nuclear power is both viable and vital.

But let's talk about the risks

I'm not naïve. Nuclear still carries a stigma—and poses real challenges. Take cost and time: building or reviving reactors takes years and billions of dollars. Even Microsoft's project will face regulatory hurdles. Or waste: we still need better systems for storing radioactive materials over the long term. Or consider control: if tech giants start building private nuclear plants, will public utilities fall behind? Who gets priority during shortages? And of course, we must be vigilant about safety and non-proliferation. The last thing we want is a tech-driven nuclear revival that ignores the hard lessons of history.

But here's the bigger risk: doing nothing. Letting power demand explode while we rely on fossil fuels to catch up would be a disaster.

We live in strange times. Our brightest engineers are teaching machines to think. But they still haven't solved how to power those machines sustainably. As a physicist, I believe we must act quickly—not just to make AI smarter, but to make its foundation stronger. Nuclear energy may not be perfect. But in the race to power our most powerful technology yet, it may be the smartest bet we've got.

The AI revolution can't run on good intentions. It will run on electricity. But where will it come from?

The author is a theoretical physicist at the University of North Carolina at Chapel Hill, United States. He posts on X @NishantSahdev
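The two headline figures in the op-ed above can be sanity-checked with a short back-of-envelope sketch. The parameter values below (a typical large plant rated at roughly 1.4 GW, a reactor burnup of about 45 GWd per tonne of uranium, a ~7 g fuel pellet, and ~27 GJ of thermal energy per tonne of coal) are assumptions chosen for illustration, not figures taken from the article.

```python
# Back-of-envelope check of two figures quoted in the op-ed.
# All parameter values are illustrative assumptions, not data from the article.

GW = 1e9      # watts per gigawatt
DAY = 86_400  # seconds per day

# 1) "56 GW of new capacity ... equivalent to 40 new large power plants"
new_capacity_gw = 56
typical_plant_gw = 1.4  # assumed rating of one large plant
plants_needed = new_capacity_gw / typical_plant_gw
print(f"Equivalent large plants: {plants_needed:.0f}")  # ~40

# 2) "A single uranium pellet ... holds the same energy as a tonne of coal"
burnup_gwd_per_tonne_u = 45  # assumed typical light-water-reactor burnup
pellet_mass_g = 7            # assumed mass of one UO2 fuel pellet
energy_per_gram_j = burnup_gwd_per_tonne_u * GW * DAY / 1e6  # joules per gram of fuel
pellet_energy_gj = pellet_mass_g * energy_per_gram_j / 1e9   # ~27 GJ

coal_energy_gj_per_tonne = 27  # assumed thermal energy of one tonne of coal
print(f"Pellet energy: {pellet_energy_gj:.0f} GJ "
      f"(~{pellet_energy_gj / coal_energy_gj_per_tonne:.1f} tonnes of coal)")
```

Under these assumed values, both claims come out roughly right: 56 GW divided by 1.4 GW per plant gives about 40 plants, and one pellet's burnup energy lands near the thermal energy of a tonne of coal.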