Scored 93.4% marks in high school, failed in JEE and NEET, then got admission in world's top institute, he is…
India.com | 30-04-2025
Success Story: 'Failing and learning from it makes a person stronger than someone who never takes risks.' This quote perfectly describes Hritwik Haldar, whose story is inspiring aspirants who might otherwise give up after one or two failures. Haldar studied in a government school. He was not a brilliant student and faced difficulties in his studies, but rather than giving up, he kept moving forward. Here is Hritwik Haldar's story.

Who is Hritwik Haldar?
Hritwik Haldar hails from West Bengal and received his early education in a Bengali-medium government school. Like many students, he found studies a burden and got through exams by rote learning. That changed when he reached Class 10: Haldar started focusing on understanding his subjects instead of memorising them, and he soon developed a genuine interest in his studies.
With the new study method, he got positive results and scored a brilliant 93.4 percent in high school.
After clearing Class 12, Hritwik took several competitive exams, including JEE, JEE Advanced, NEET and KVPY, but did not succeed. Still, he did not give up.

A New Path
After completing his secondary education, he continued his studies at the Ramakrishna Mission school in Belur. Despite not clearing the KVPY SB exam outright on his second attempt, he achieved a top-ten ranking in the SC category, securing admission to the Indian Institute of Science Education and Research (IISER) Pune, a highly ranked institution.

Studying at the World's Top Institute
Hritwik, a former student of a government school, achieved a 9.1 GPA at IISER Pune before gaining admission to the Massachusetts Institute of Technology (MIT), a prestigious institution ranked 13th globally by the QS World University Rankings. His success at MIT followed strong academic performance throughout his studies at IISER Pune.

Related Articles

Going nuclear will be the only way to keep the lights on as AI guzzles ever more electricity
Mint | 10 minutes ago

Nishant Sahdev

Artificial intelligence consumes energy in such bulk that its rise has thrown the world into an infrastructure emergency. Thankfully, nuclear power is not just viable; its risks have been on the decline. It's the only way out now. Nuclear energy is the only scalable source of clean electricity in existence that runs 24/7.

Recently, I was in a conversation with MIT researchers on artificial intelligence (AI) and nuclear energy. While discussing the subject, we saw a video clip of a data centre that looked like a giant fridge but buzzed like a jet engine. Inside, thousands of AI chips were training a new language model—one that could write poems, analyse genomes or simulate the weather on Mars.

What struck me wasn't the intelligence of this machine. It was the sheer energy it was devouring. The engineer said, 'This one building consumes as much power as a small town.' That's when the magnitude of the challenge hit me: if AI is our future, how on earth will we power it?

Also Read: AI as infrastructure: India must develop the right tech

All that intelligence takes energy. A lot of it. More than most people realize. And as someone who's spent years studying the physics of energy systems, I believe we are about to hit a hard wall. To be blunt: AI is growing faster than our ability to power it. And unless we confront this, the very tools meant to build our future could destabilize our energy systems—or drag us backward on climate. One solution has been pinpointed by the AI industry: nuclear energy.

Most people don't associate AI with power plants.
But every chatbot and image generator is backed by vast data centres full of servers, fans and GPUs running day and night. These machines don't sip power. They guzzle it. In 2022, data centres worldwide consumed around 460 terawatt-hours. But that's just the baseline. Goldman Sachs projects that by 2030, AI data centres will use 165% more electricity than they did in 2023.

And it's not just about scale. It's about reliability. AI workloads can't wait for the sun to shine or the wind to blow. They need round-the-clock electricity, without fluctuations or outages. That rules out intermittent renewables for a large share of the load—at least for now.

Also Read: Rely on modern geothermal energy to power our AI ambitions

Can power grids handle it? The short answer: not without big changes. In the US, energy planners are already bracing for strain. States like Virginia and Georgia are seeing huge surges in electricity demand from tech campuses. One recent report estimated that by 2028, America will need 56 gigawatts of new power generation capacity just for data centres. That's equivalent to building 40 new large power plants in less than four years.

The irony? AI is often promoted as a solution to climate change. But without clean and scalable energy, its growth could have the opposite effect. For example, by Google's own assessment, its carbon emissions rose 51% from 2019 to 2024, largely on account of AI's appetite for power. This is an infrastructure emergency.

Enter nuclear energy—long seen as a relic of the Cold War or a post-Chernobyl nightmare. But in a world hungry for carbon-free baseload power, nuclear power is making a quiet comeback. Let's be clear: nuclear energy is the only scalable source of clean electricity in existence that runs 24/7. A single large reactor can power multiple data centres without emitting carbon or depending on weather conditions.
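As a rough sanity check on that comparison (assuming "large power plant" here means a gigawatt-scale unit, an assumption the report is not quoted as stating):

```latex
\frac{56\ \text{GW of new capacity}}{40\ \text{plants}} = 1.4\ \text{GW per plant}
```

which is in line with the electrical output of a single large nuclear reactor, so the two figures are mutually consistent.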
Also Read: India should keep all its nuclear power options in play

Tech companies are already acting. Microsoft signed a deal to reopen part of the Three Mile Island nuclear plant to power its AI operations. Google is investing in small modular reactors (SMRs): compact next-generation nuclear units designed to be safer, faster to build and considered ideal for campuses. They're early signs of a strategic shift: AI companies are realizing that if they want to build the future, they'll have to power it themselves.

As a physicist, I've always been fascinated by nuclear energy's elegance. A single uranium pellet—smaller than a fingertip—holds the same energy as a tonne of coal. The energy density is unmatched. But it's not just about big reactors anymore. The excitement stems from advanced reactors. SMRs can be built in factories, shipped by truck and installed near tech campuses or even remote towns. Molten salt reactors and micro-reactors promise even greater safety and efficiency, with lower waste. New materials and AI-assisted monitoring make this technology far safer than past generations. For the first time in decades, nuclear power is both viable and vital.

But let's talk about the risks. I'm not naïve. Nuclear still carries a stigma—and poses real challenges. Take cost and time: building or reviving reactors takes years and billions of dollars. Even Microsoft's project will face regulatory hurdles. Or waste: we still need better systems for storing radioactive materials over the long term. Or consider control: if tech giants start building private nuclear plants, will public utilities fall behind? Who gets priority during shortages? And of course, we must be vigilant about safety and non-proliferation. The last thing we want is a tech-driven nuclear revival that ignores the hard lessons of history.

But here's the bigger risk: doing nothing. Letting power demand explode while we rely on fossil fuels to catch up would be a disaster.
We live in strange times. Our brightest engineers are teaching machines to think. But they still haven't solved how to power those machines sustainably. As a physicist, I believe we must act quickly—not just to make AI smarter, but to make its foundation stronger. Nuclear energy may not be perfect. But in the race to power our most powerful technology yet, it may be the smartest bet we've got. The AI revolution can't run on good intentions. It will run on electricity. But where will it come from?

The author is a theoretical physicist at the University of North Carolina at Chapel Hill, United States. He posts on X @NishantSahdev

Scientifically Speaking: Does ChatGPT really make us stupid?
Hindustan Times | a day ago

A few months ago, in a column for HT, I took apart articles that mischaracterised a study of AI tools as showing that people lose critical thinking skills. Now viral headlines are at it again. A number of headlines have been screaming that using artificial intelligence (AI) rots our brains. Some said it outright; others preferred the more polite 'cognitive decline', but they meant almost the same thing. Add this to the growing list of things AI is blamed for: making us lazy, dumber, and incapable of independent thought.

If you read only those headlines, you couldn't be blamed for thinking that most of humanity was capable of exalted reasoning before large language models turned our brains to pulp. If we're worried about AI making us stupid, what's our excuse for the pre-AI era?

Large language models like ChatGPT and Claude are now pervasive. But even a few years into their use, a lot of the talk about them remains black and white. In one camp, the techno-optimists tell us that AI superintelligence, which can do everything better than all of us, is just around the corner. In the other, there's a group that blames just about everything that goes wrong anywhere on AI. If only the truth were that simple.

The Massachusetts Institute of Technology (MIT) study that sparked this panic deserves a sober hearing, not hysteria. Researchers at the Media Lab asked a worthwhile question: how does using AI to write affect our brain activity? But answering that is harder than it looks.

The researchers recruited 54 Boston-area university students, divided them into three groups, and had each write 20-minute essays on philosophical topics while their brain activity was monitored with EEG sensors. One group used ChatGPT, another used Google Search, and a third used only their brains. Over four months, participants tackled questions like 'Does true loyalty require unconditional support?'
What the researchers claim is that ChatGPT users showed less brain activity during the task, struggled to recall what they'd written, and felt less ownership of their work. They call this 'cognitive debt'. The conclusion sounds familiar: outsourcing thinking weakens engagement.

Writing is hard. Writing philosophical essays on abstract topics is harder. It's valuable work, and going through the nearly 200 pages of the paper takes time. The authors note that their findings aren't peer-reviewed and come with significant limitations. I wonder how many headline writers bothered to read past the summary.

Because the limitations are notable. The study involved 54 students from elite universities, writing brief philosophy essays. Brain activity was measured using EEG, which is less sensitive and more ambiguous than brain scans using fMRI (functional magnetic resonance imaging). If AI really damages how we think, then what participants did between sessions matters. Over four months, were the 'brain-only' participants really avoiding ChatGPT for all their coursework? With hundreds of millions using ChatGPT weekly, that seems unlikely. You'd want to compare people who never used AI to those who used it regularly before drawing strong conclusions about brain rot.

Also Read: ChatGPT now does what these 3 apps do, but faster and smarter

And here's the problem with stretching a small study on writing philosophical college-level essays too far. While journalists were busy writing sensational headlines about 'brain rot', they missed the bigger picture. Most of us are using ChatGPT to avoid thinking about things we'd rather not think about anyway. Later this month, I'm travelling to Vietnam. I could spend hours sorting out my travel documents, emailing hotels about pickups and tours, and coordinating logistics. Instead, I'll use AI to draft those communications, check them, and move on.
One day maybe my AI agent will talk to their AI agent and spare us both, but we're not there yet. In this case, using AI doesn't make me stupid. It makes me efficient. It frees up mental energy and time for things I actually want to focus on, like writing this column.

This is the key point, and the one I think got lost. Learning can't be outsourced to AI. It still has to be done the hard way. But collectively and individually, we do get to choose what's worth learning.

Also Read: Ministries brief House panel on AI readiness

When I use GPS instead of memorizing routes, maybe my spatial memory dulls a bit, but I still get where I'm going. When I use a calculator, my arithmetic gets rusty, but that doesn't mean I don't understand math. If anyone wants to train their brain like a London cabbie or Shakuntala Devi, they can. But most of us prefer to save the effort. Our goal isn't to use our brains for everything. It's to use them for the things that matter to us.

I write my own columns because I enjoy the process and feel I have something to say. When I stop feeling that way, I'll stop. But I'm happy to let AI handle my travel logistics, routine correspondence, and other mental busywork. Rather than fearing this transition, we might ask: what uniquely human activities will we choose to pursue with the time and mental energy AI frees up?

We're still in the early days of understanding AI's cognitive impacts. Some promise AI will make us all geniuses; others warn it will turn our brains to mush. The verdict isn't in, despite what absolutists on both sides claim.

Anirban Mahapatra is a scientist and author, most recently of the popular science book When The Drugs Don't Work: The Hidden Pandemic That Could End Medicine. The views expressed are personal.

Students using ChatGPT show less critical thinking: Study
Hindustan Times | a day ago

When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories.

"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago. "They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.

Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester -- including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling". So she was not surprised by recent research suggesting that students who use ChatGPT to write essays engage in less critical thinking.

The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators. The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.

'Soulless' AI essays

For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains. The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.

The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often.
And more than 80 percent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 percent of the other two groups. By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting. The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.

However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid. She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity. Kosmyna emphasised it was too early to draw conclusions from the study's small sample size, but called for more research into how AI tools could be used more carefully to help learning.

Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint. "This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.

Thinking outside the bot

Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common. Sometimes students do not even change the font when they copy and paste from ChatGPT, she said. But Leitzinger called for empathy for students, saying they can get confused when the use of AI is encouraged by universities in some classes but banned in others.

The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways.
But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.

A student at a British university in his early 20s, who wanted to remain anonymous, told AFP he found ChatGPT a useful tool for compiling lecture notes, searching the internet and generating ideas. "I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.

The problem goes beyond high school and university students. Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.

"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.
