Interview with Jagadish Shukla, author of A Billion Butterflies: A Life in Climate and Chaos Theory


The Hindu, 24-06-2025
Eminent climate scientist Dr. Jagadish Shukla has devoted a lifetime to improving seasonal weather predictions, especially monsoon predictions for India. Growing up in rural Uttar Pradesh, he saw how people's lives depended on the monsoon and on information about it, and made it his mission to forecast seasonal weather events. In doing so, he has changed the course of modern weather prediction. He tells the story in his new book, A Billion Butterflies: A Life in Climate and Chaos Theory, a personal memoir as well as a log of the course weather and climate science has taken. Edited excerpts from an interview.
One of the things that makes your book fascinating is that it deals with a topic people talk about daily but have a limited understanding of. A fascinating line says, 'Climate is what you expect, weather is what you get.' What does that mean?
All that it means is that long-term average weather is climate. Typically, a 30-year average of values is considered as climate. So what you expect to happen on a certain date is based on this, and what actually happens -- weather -- is over and above that. The reason it is important to understand this is that we tend to think climate is fixed, but it is not. It is changing every day, changing in a well-defined manner, and it is also different over different places.
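The expect/get distinction can be made concrete with a toy calculation (the temperature values below are invented purely for illustration, not real observations):

```python
# Climate vs. weather, in miniature. "Climate" is the long-term average
# (conventionally 30 years of values for a given calendar date); "weather"
# is what actually happens on the day. All numbers below are made up.

# Imagine 30 years of recorded temperatures (deg C) for the same date.
observed = [28.1, 29.4, 27.8, 30.2, 28.9, 29.0, 27.5, 30.6, 28.4, 29.8,
            28.0, 29.1, 30.0, 27.9, 28.6, 29.3, 28.8, 30.1, 27.7, 29.5,
            28.3, 29.9, 28.7, 30.4, 27.6, 29.2, 28.5, 30.3, 29.6, 28.2]

climate = sum(observed) / len(observed)   # what you *expect*
todays_weather = 31.0                     # what you *get*
anomaly = todays_weather - climate        # the departure from expectation

print(f"climate (30-year mean): {climate:.1f} C")
print(f"today's weather:        {todays_weather:.1f} C")
print(f"anomaly:                {anomaly:+.1f} C")
```

The anomaly, not the raw reading, is what tells you whether a day was unusually hot or cold for that place and date.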
The title of the book and your area of study refer to chaos theory and thereby the butterfly effect. When applied to climate science, does it really mean that we are looking at the variables that go into the forecast models?
First of all, the equations that define weather and climate are the same; it is just that weather does not consider some big factors like chemistry, aerosols and so on. The butterfly effect is all about weather. Predictions are based on what happens today and the equations chosen. However, these predictions hold good only for a few days.
Even with improvements in computing and satellite observations, accuracy begins to get tricky after 10 days. This is because the equations which do the prediction are non-linear, and small errors on the first day can lead to very large variations a few days ahead. And that's the origin of the term 'butterfly effect' as defined by one of my advisers, Professor Edward Lorenz from MIT. What is even more interesting is that when he first spoke of this effect on forecasts, he used the analogy of a seagull flapping its wings over an ocean. The butterfly terminology came much later, because the actual graphical result of his paper resembles a flapping butterfly!
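Lorenz's point can be sketched numerically with the three-variable system from his 1963 paper, started from two points that differ by one part in a billion. The parameter values (sigma = 10, rho = 28, beta = 8/3) are Lorenz's classic choices; the step size, starting point, and perturbation size here are illustrative assumptions:

```python
# A minimal sketch of the butterfly effect using the Lorenz (1963) system.
# Parameters sigma=10, rho=28, beta=8/3 are Lorenz's standard choices; the
# step size and initial conditions are illustrative assumptions.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# Two trajectories that differ by one part in a billion -- one "butterfly".
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)

max_gap = 0.0
for step in range(5000):  # about 50 model time units
    a = lorenz_step(a)
    b = lorenz_step(b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

# The tiny initial difference is amplified by many orders of magnitude,
# until the two "forecasts" bear no resemblance to each other.
print(f"initial gap: 1e-09, largest gap seen: {max_gap:.3f}")
```

The separation grows exponentially until the two runs are completely decorrelated, which is why a weather forecast's useful horizon is measured in days regardless of how good the model is.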
My motivation when studying the monsoon was to find exceptions to the butterfly effect and I found it eventually -- it was the ocean temperatures. Science is not just about experiments and ideas; it is also about communicating those ideas. My work showed that once ocean temperatures are included as a factor, even a billion butterflies flapping their wings could not affect it significantly.
It is evident from your work that meteorology and forecasting has improved dramatically, including in India. How are we placed in terms of how we look at climate change?
The very first supercomputer that came to India, in 1989, was for weather. While we have kept pace since and our weather forecasts are comparable to what is happening globally, our monsoon forecasts still need work.
In terms of climate, it is disappointing that developed countries like the U.S. have shown great reluctance to accept the reality of climate change. India requires a national effort towards climate assessment and adaptation to secure buy-in and action from policymakers and effective governance.
You were the lead author of the IPCC assessment report that shared the Nobel Peace Prize along with Al Gore in 2007. Do you think it was a kind of a global turning point in terms of climate change discourse?
I think so. And it had one good effect as well as a very bad one.
The good part was that this was the first time scientists could conclusively state and prove that human activities are negatively affecting global climate. Eight years later at the Paris climate change conference (COP21), nearly 200 countries agreed to a legally binding international treaty to make efforts to limit global warming and temperature rise.
The bad news came from the U.S. and perhaps elsewhere. This was the point where the fossil fuel industry stepped up its attacks, actively trying to disprove climate science through both overt and covert means. It really is the worst combination of politics and profit motives undermining action on one of society's greatest challenges.
It almost seems as if your life is driven forward by destiny. And you keep referring to the monsoon. How much of a critical part was it in your early life and in shaping your career?
As far as my personal life was concerned, especially early on, it just felt like things were happening on their own; with many things being beyond my control. It was much later that I started making my own decisions.
So far as the monsoon is concerned, that certainly has been the central part of my journey. In my village Mirdha, the monsoon, or its failure, had a profound effect on life, including the food on your plate. And so, I went to MIT with a very clear aim -- to be able to predict the monsoon. Because that was the way I felt I could help my village, my country, the agricultural community. Twice in my life I was very close to shifting to other spheres of work, but my interest and efforts remained focused on the monsoon.
What does a life dedicated to scientific rigour mean? Does it take a toll on your personal life?
Oh certainly, it does. When you are excited about what you are doing and you think you are making progress, you tend to ignore some aspects of your personal life. I often feel that perhaps my children did not have enough time to be with me and know me better. There was a point where my daughter asked what her dad looked like. That said, I am indebted to the complete support and trust of my wife.
You have gone back to your village and helped set up a women's college, and contributed nationally in other ways as well. So would you say that your life has sort of come full circle?
I wouldn't call it a full circle; rather life has been like that all along. I was always involved with family, Mirdha, India and science – to the extent that some people believed that I was doing all of this to eventually run for a political office!
We have seen that climate change affects certain strata of society more than others. How well do you think we are prepared to adapt to these changes?
People say that climate change is the biggest problem facing us. For me, it is only one of the two biggest problems; the other is inequality and lack of social justice. In India, for example, we go to international forums and say that our per capita income is relatively small and so we should be exempt from taking serious climate action. But when you look closely, it is less than 10% of the population that is responsible for most of the actual emissions, while it is the remaining 90% that will bear the brunt of the impacts of climate change.
As far as I am concerned, climate action in the end is a sort of fight against the injustices that exist in this world.
What really stands out from the book is how you are driven by a great belief in your own understanding of life, even when this has meant standing contrary to existing viewpoints.
Yes, I have conviction. But I have also been open to being proven wrong.
In modern society, especially democracies like the U.S., there is always a lot of talk about liberty and freedom; but not so much about happiness. Thanks to my mother, right from my childhood, I have understood that giving to others and society is one of the best ways to attain this.
A Billion Butterflies: A Life in Climate & Chaos Theory; Jagadish Shukla; Macmillan; ₹699
The interviewer is a birder and writer based in Chennai.

Related Articles

Scientifically Speaking: Does ChatGPT really make us stupid?

Hindustan Times, 17 hours ago

A few months after I took apart articles that mischaracterised a study of AI tools as causing people to lose critical thinking skills in a column for HT, viral headlines are at it again. A number of headlines have been screaming that using artificial intelligence (AI) rots our brains. Some said it outright, others preferred the more polite 'cognitive decline', but they meant almost the same thing. Add this to the growing list of things AI is blamed for: making us lazy, dumber, and incapable of independent thought. If you read only those headlines, you couldn't be blamed for thinking that most of humanity was capable of exalted reasoning before large language models turned our brains to pulp. If we're worried about AI making us stupid, what's our excuse for the pre-AI era?

Large language models like ChatGPT and Claude are now pervasive. But even a few years into their use, a lot of the talk about them remains black and white. In one camp, the techno-optimists tell us that AI superintelligence, which can do everything better than all of us, is just around the corner. In the other, there's a group that blames just about everything that goes wrong anywhere on AI. If only the truth were that simple.

The Massachusetts Institute of Technology (MIT) study that sparked this panic deserves a sober hearing, not hysteria. Researchers at the Media Lab asked a worthwhile question: how does using AI to write affect our brain activity? But answering that is harder than it looks.

The researchers recruited 54 Boston-area university students and divided them into three groups; each participant wrote 20-minute essays on philosophical topics while EEG sensors monitored their brain activity. One group used ChatGPT, another used Google Search, and a third used only their brains. Over four months, participants tackled questions like 'Does true loyalty require unconditional support?'

What the researchers claim is that ChatGPT users showed less brain activity during the task, struggled to recall what they'd written, and felt less ownership of their work. They call this 'cognitive debt.' The conclusion sounds familiar: outsourcing thinking weakens engagement.

Writing is hard. Writing philosophical essays on abstract topics is harder. It's valuable work, and going through the nearly 200 pages of the paper takes time. The authors note that their findings aren't peer-reviewed and come with significant limitations. I wonder how many headline writers bothered to read past the summary.

Because the limitations are notable. The study involved 54 students from elite universities, writing brief philosophy essays. Brain activity was measured using EEGs, which is less sensitive and more ambiguous than brain scans using fMRI (functional magnetic resonance imaging). If AI really damages how we think, then what participants did between sessions matters. Over four months, were the 'brain-only' participants really avoiding ChatGPT for all their coursework? With hundreds of millions using ChatGPT weekly, that seems unlikely. You'd want to compare people who never used AI to those who used it regularly before drawing strong conclusions about brain rot.

And here's the problem with stretching a small study on writing philosophical college-level essays too far. While journalists were busy writing sensational headlines about 'brain rot,' they missed the bigger picture. Most of us are using ChatGPT to avoid thinking about things we'd rather not think about anyway.

Later this month, I'm travelling to Vietnam. I could spend hours sorting out my travel documents, emailing hotels about pickups and tours, and coordinating logistics. Instead, I'll use AI to draft those communications, check them, and move on. One day maybe my AI agent will talk to their AI agent and spare us both, but we're not there yet. In this case, using AI doesn't make me stupid. It makes me efficient. It frees up mental energy and time for things I actually want to focus on, like writing this column.

This is the key point, and one I think got lost. Learning can't be outsourced to AI. It still has to be done the hard way. But collectively and individually we do get to choose what's worth learning.

When I use GPS instead of memorizing routes, maybe my spatial memory dulls a bit, but I still get where I'm going. When I use a calculator, my arithmetic gets rusty, but that doesn't mean I don't understand math. If anyone wants to train their brain like a London cabbie or Shakuntala Devi, they can. But most of us prefer to save the effort. Our goal isn't to use our brains for everything. It's to use them for the things that matter to us.

I write my own columns because I enjoy the process and feel I have something to say. When I stop feeling that way, I'll stop. But I'm happy to let AI handle my travel logistics, routine correspondence, and other mental busywork. Rather than fearing this transition, we might ask: what uniquely human activities will we choose to pursue with the time and mental energy AI frees up?

We're still in the early days of understanding AI's cognitive impacts. Some promise AI will make us all geniuses; others warn it will turn our brains to mush. The verdict isn't in, despite what absolutists on both sides claim.

Anirban Mahapatra is a scientist and author, most recently of the popular science book When The Drugs Don't Work: The Hidden Pandemic That Could End Medicine. The views expressed are personal.

Students using ChatGPT show less critical thinking: Study

Hindustan Times, 18 hours ago

When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories.

"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago. "They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.

Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester -- including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling". So she was not surprised by recent research which suggested that students who use ChatGPT to write essays engage in less critical thinking. The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators. The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.

'Soulless' AI essays

For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains. The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.

The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often. And more than 80 percent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 percent of the other two groups. By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting. The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.

However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid. She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity. Kosmyna emphasised it was too early to draw conclusions from the study's small sample size, but called for more research into how AI tools could be used more carefully to help learning.

Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint. "This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.

Thinking outside the bot

Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common. Sometimes students do not even change the font when they copy and paste from ChatGPT, she said. But Leitzinger called for empathy for students, saying they can get confused when the use of AI is encouraged by universities in some classes but banned in others.

The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways. But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.

A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT a useful tool for compiling lecture notes, searching the internet and generating ideas. "I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.

The problem goes beyond high school and university students. Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.

"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.

Brains on autopilot: MIT study warns AI is eroding human thought. Here's how to stay intellectually alive

Time of India, 2 days ago

Inventions have redefined the very existence of humankind, challenging us to alter the way we think, learn, and live. The printing press etched history in bold letters. Calculators reshaped arithmetic. Now, artificial intelligence has entered the scene, permeating every niche of human life and painting it with a palette of new possibilities. Yet, like every groundbreaking invention, this too carries its fair share of repercussions.

But what happens when the very tools built to extend the human mind begin to replace it? The answer is unsettling: it produces a generation with crippled thinking abilities. A profound transition is already underway, one that, like an asymptomatic disease, may erupt into a full-blown cognitive pandemic in the years ahead. Generative AI systems like ChatGPT promise instant answers, elegant prose, and streamlined tasks. But we now stand on the precipice of bidding adieu to creativity. Beneath the sheen of this alluring technology lies a deeper question: are we keeping our thinking abilities on the shelf, and completely forgetting how to think? A striking study by the Massachusetts Institute of Technology (MIT) has surfaced some troubling trends. And no, it's not good news for the next generation.

Inside the MIT study: The brain on ChatGPT

Computer scientist Nataliya Kosmyna and her team at MIT's Media Lab set out to investigate whether heavy reliance on AI tools like ChatGPT alters the way our brains function. The experiment involved 60 college students aged 18 to 39, who were assigned to write short essays using one of three methods: ChatGPT, Google Search, or no external tools at all. Equipped with EEG headsets to monitor their neural activity, participants crafted essays in response to prompts like 'Should we always think before we speak?'

The results? Students who wrote without any assistance demonstrated the highest levels of cognitive engagement, showing strong neural connectivity across brain regions responsible for memory, reasoning, and decision-making. They thought harder and more deeply. By contrast, ChatGPT users showed the lowest neural activity. Their thinking was fragmented, their recall impaired, and their essays often lacked originality. Many participants could not even remember what they had written -- clear evidence that the information had not been internalised. AI hadn't just helped them write. It had done the writing for them. Their brains had taken a backseat.

The risk of outsourcing thought

Cognitive offloading, as the researchers have named it, is not about convenience; it is about control. The more we allow machines to handle the hard segments of thinking, the less frequently we exercise our brain muscles for critical thinking, creativity, and memory formation. Over time, these muscles can weaken. When participants who initially used ChatGPT were later asked to write without it, their brain activity increased, but it never met the levels of those who had worked independently from the start. The inference is clear: the capacity for deep thinking is at risk of erosion.

Tools reflect intent, not intelligence

It is usually the invention that is treated as a scapegoat, but more than that, it depends on the way we use it. The problem is not the tool, but how we decide to put it to use. As one teacher once said, 'Every tool carries with it a story, not of what it is, but how it is used.' AI, like a pair of scissors, is brilliant in design, but only when built with everyone in mind. For decades, scissors excluded left-handed children, not because the tool was faulty, but because its design lacked inclusivity. AI's story is no different. There are two roads: it can either democratise education or further deepen inequality. It can hone creativity or dull it. Our actions will decide which road we push our next generation to traverse.

According to the World Bank, students from disadvantaged backgrounds are 50% less likely to access AI-powered learning tools compared to their peers (World Bank, 2024). And as UNESCO's 2024 Global Education Monitoring Report reveals, nearly 40% of teachers worldwide fear being replaced by machines. But those outcomes are not the fault of AI. They're the result of how we've chosen to implement it.

Used well, AI can elevate learning

When utilised cautiously, AI can still elevate the quality of education. A study by the McKinsey Global Institute has shown that personalised learning with the help of AI tools can bolster a student's performance by 30%. An Organisation for Economic Co-operation and Development (OECD, 2022) study shared similar findings, adding that AI can mitigate teacher workloads -- critical, given that educators spend 50% of their time on administrative duties. In rural India, digital initiatives like the National Digital Education Architecture (NDEAR) aim to use AI to reach over 250 million school-age children who lack access to quality teachers.

However, even in a world driven and dominated by artificial intelligence, the human element in learning cannot be substituted. The struggle for reflection and the delight of discovery dwell at the heart of human learning. As it is said, we must begin with the end in mind. Are we cultivating a cohort of students who merely complete tasks, or ones who can think beyond limits and add meaning?

'AI is already born. We must learn to co-exist.'

In a conversation with The Times of India, Siddharth Rajgarhia, Chief Learner and Director of DPS, said it emphatically: 'AI is already born; we cannot keep it back in the womb. It is important to learn to co-exist with the guest and keep our human element alive.' That co-existence begins by redefining the role of AI, not as a shortcut, but as a companion in the learning journey.

Here's how educators and students can stay intellectually alive in the age of automation:

Think before you prompt: Encourage students to brainstorm ideas independently before turning to AI.
Reclaim authorship: Every AI-assisted draft should be critically revised and fully owned by the student.
Foster metacognition: Teach learners to reflect on how they think, not just what they produce.
Centre equity in design: Ensure tools are accessible to all learners, not just the digitally privileged few.
Use AI to deepen, not replace, curiosity: Let it challenge assumptions, not hand out ready answers.

Final thought: Let AI assist, but let humans lead

The human brain was never meant to idle. It was designed to wrestle with complexity, to stumble and reframe, to wonder and imagine. When we surrender that process to machines and allow AI to become the default setting for thought, we risk losing more than creativity. We risk losing cognitive ownership, our voices, and our opinions. When we forget to think, we let go of the very power of being human.

AI is not the villain or a bane here. We need to understand that it amplifies our intentions, good or bad, lazy or inspired. The future of learning and the workplace does not depend on the fastest prompt or smartest algorithm. It stands on the shoulders of the brightest minds who have kept their curiosity intact and who resist easy answers. At the core of learning lie educators who remind us that the goal of education is not just knowledge; it is wisdom. We so wish that prompts could generate wisdom and a human element. Alas, they cannot. These must be developed by the vanguards of imagination.
