How the 'productive struggle' strengthens learning


The Hindu | a day ago
You wince at the set of differential equations you need to solve. You barely understand the topic, and you have to plough through a whole page of them. You are tempted to turn to ChatGPT to get through this assignment. It's not graded, so you needn't feel guilty for using the bot.
But how will you learn to solve them unless you grapple with them on your own? Though it's going to be a long evening, you decide to wrestle with the equations, knowing that it is the only way to get a firmer handle on them.
Origins
The term 'productive struggle' was coined by James Hiebert and Douglas Grouws, in the context of Maths instruction, to describe the effort students must make to work through complex problems slightly beyond their current level. In a paper in the Journal of Mathematics Teacher Education, Hiroko Warshauer avers that perseverance is a key element of productive struggle: only when students persist with such challenging tasks can they gain mastery of a concept.
Further, a student's environment plays a significant role in promoting perseverance. Teachers may foster active engagement by 'questioning, clarifying, interpreting, confirming students' thinking' and coaxing them to discuss problems with their peers, says Warshauer. When teachers communicate that struggle is a part of the learning process, students know that it's okay to labour over sums. Because many students experience Maths anxiety and tend to give up when problems become demanding, it's important to reassure them that contending with problems is an integral aspect of learning. Letting students know that confusion, doubt, and mistakes are essential elements of the learning process can mitigate their anxiety.
Asking students to explain their reasoning helps them become more accepting of productive struggle. Instead of focusing on the final answer, teachers may coax students to articulate the steps involved in finding the answer. They may also urge them to approach and solve problems in different ways. These exercises need to be done in a non-judgmental space where students are not afraid of taking risks and making mistakes. The whole point is for students to appreciate the process of thinking. Warshauer also recommends that teachers anticipate points of likely struggle and provide leading questions to propel students' thinking forward.
Across subjects
Of course, productive struggle is not limited to Mathematics but is applicable to all disciplines. A post on progresslearning.com titled 'What is productive struggle in education?' describes this phenomenon in the context of reading. When students are given a text that is just above their current level of proficiency, they have to actively engage with it to understand its contents. To comprehend a challenging text, students need to deploy an array of critical thinking skills, like making connections, questioning, drawing inferences, summarising, and identifying key points and supporting details. As they engage with the material, students are likely to feel befuddled and frustrated. But sticking with it and trying to understand it is what leads to deeper learning.
While some students may sail through the primary years of schooling, everyone, including those considered bright or brilliant, struggles with learning as the content gets more complex. The ability to persist with productive struggle is what differentiates proficient students from their mediocre peers. Don't imagine that toppers don't wrestle with confusing sums and dense texts. Just as everyone's muscles grow stronger when they do the hard work of lifting weights, our neuronal connections also grow more robust and refined when we engage in mental workouts.
The only caveat is that you need to find the optimal level of challenge without burning yourself out. While mild to moderate frustration is expected, if a subject is causing you deep anguish, you may seek help from your professor, peers or a tutor. If none of the strategies work, consider shifting to another course.
The writer is visiting faculty at the School of Education, Azim Premji University, Bengaluru, and the co-author of Bee-Witched.

Related Articles

'Lightbulb moment': Reddit post goes viral after ChatGPT solves 10-year-old medical mystery; netizens share similar shocking stories

Time of India | 2 hours ago

In a surprising incident, a Reddit user claims that ChatGPT, the artificial intelligence chatbot developed by OpenAI, helped them solve a decade-long health mystery that doctors couldn't crack. The post is now going viral on social media.

The viral thread, titled 'ChatGPT solved a 10+ year problem no doctors could figure out,' was shared by user @Adventurous-Gold6935 and grabbed massive attention in no time. As per the post, the user dealt with unexplained symptoms for over ten years, despite multiple MRIs, CT scans, blood tests, and even screening for Lyme disease.

How did ChatGPT help the user crack the decade-long medical mystery? The user stated in the post that they consulted several specialists, including neurologists, and were treated at one of the US's top hospital networks, but nothing worked. However, when the user ran all their symptoms and lab results through ChatGPT, the AI flagged a possible genetic mutation called homozygous A1298C MTHFR, which is known to affect B12 processing in the human body.

'A lightbulb moment'

The suggestion was a lightbulb moment for the user, who took it to their doctor; his reaction was 'super shocked,' the user noted. According to the post, the diagnosis made sense, but it had never come up before. 'Not sure how they didn't think to test me for MTHFR mutation.'

Redditors react to the viral post

As soon as the post surfaced, it quickly went viral, garnering over 9,000 upvotes, with reactions mixing amazement and frustration. One user wrote, 'It did the same for me. I've been vomiting for 15+ years. I've done every single gastric exam and allergy test there is, and lately got diagnosed with anxiety and the meds actually helped, but it never stopped. After prompting it my exams it suggested me to check an otorhinolaryngologist for dizziness. After a brain scan, it turns out I've been living with a massive labyrinthitis caused by a nerve pinch in my brain. Fully treatable.'

Another added, 'Wow, I had a really similar experience. After a seizure-like episode sent me to the ER, the first neurologist I saw told me that I was making it all up for attention. Thankfully, the second neurologist actually took me seriously. He did a spinal tap, which no one had thought to do before, and ruled out meningitis. Turns out I had occipital neuralgia, caused by a pinched nerve in my brain. The symptoms were brutal: sudden electric shock-like pain from the back of my head to my forehead and behind my eyes, light sensitivity, numbness and tingling. So glad you got your answer and a treatable diagnosis. Rooting for your recovery!'

'The first thing I wanted to do on this post was say "living with this must have been a real MTHFR". I think that makes me a bad person. But I'm happy for OP,' another added.

One said, 'Doc here, internal medicine. I love what is happening, and I might need to get a new job at some point, lol. But be careful, it does make silly mistakes from time to time.'

(Disclaimer: The information is based on a viral Reddit post and is not verified by medical professionals. Always consult an expert regarding a medical condition.)

Reddit user claims ChatGPT uncovered medical condition doctors overlooked for a decade

Mint | 4 hours ago

A user on Reddit has claimed that ChatGPT helped crack a medical case that stumped doctors for over a decade. The post, titled 'ChatGPT solved a 10+ year problem no doctors could figure out,' was shared by user @Adventurous-Gold6935.

For more than ten years, they said, they dealt with unexplained symptoms, despite multiple MRIs, CT scans, blood tests, and even screening for Lyme disease. They consulted specialists, including neurologists, and were treated at what they describe as one of the US's top hospital networks. Still, no diagnosis.

The user ran all their symptoms and lab results through ChatGPT. The AI flagged a possible genetic mutation, homozygous A1298C MTHFR, known to affect B12 processing in the body even when blood levels seem normal. That was the lightbulb moment.

They took the suggestion to their doctor. His reaction? 'Super shocked,' the user wrote. The diagnosis made sense, but it had never come up before. 'Not sure how they didn't think to test me for MTHFR mutation,' the post said. After starting supplements and making adjustments, the user says their condition has improved significantly.

Internet reacts

The post now has more than 9,000 upvotes, with reactions ranging from amazement to frustration. A user shared, 'Wow, I had a really similar experience. After a seizure-like episode sent me to the ER, the first neurologist I saw told me that I was making it all up for attention. Thankfully, the second neurologist actually took me seriously. He did a spinal tap, which no one had thought to do before, and ruled out meningitis. Turns out I had occipital neuralgia, caused by a pinched nerve in my brain.'

Another user wrote, 'The first thing I wanted to do on this post was say "living with this must have been a real MTHFR". I think that makes me a bad person. But I'm happy for OP.'

A third wrote, 'Well, insurance companies are using AI to deny coverage, so I think people should do their best to check their work on a similar platform. It's kinda neat IMO.'

AI may now match humans in spotting emotion, sarcasm in online chats

Business Standard | 6 hours ago

When we write something to another person, over email or perhaps on social media, we may not state things directly; our words may instead convey a latent meaning, an underlying subtext. We often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are, or whether they are being sarcastic, can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples; we can imagine benefits in other areas of life, like social science research, policy-making and business.

Given how important these tasks are, and how quickly conversational AI is improving, it is essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study, focused on differences in sarcasm detection between large language models (LLMs), the technology behind AI chatbots such as ChatGPT, showed that some are better than others. Finally, a study showed that LLMs can guess the emotional 'valence' of words, the inherent positive or negative 'feeling' associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, including GPT-4, a relatively recent version of ChatGPT, can read between the lines of human-written texts.
The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm, thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B, against 33 human subjects on 100 curated items of text. We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science or public health, where inconsistent judgement can skew findings or miss patterns.

GPT-4 also proved capable of picking up on emotional intensity, and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell, although someone still had to confirm whether the AI's assessment was correct, because AI tends to downplay emotions. Sarcasm remained a stumbling block for both humans and machines: the study found no clear winner there, so using human raters does not help much with sarcasm detection.

Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4 opens the door to faster, more responsive research, which is especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit: tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start. There are still concerns, though. Transparency, fairness and political leanings in AI remain issues.
However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast, and may soon be valuable teammates rather than mere tools. Although this work does not claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

Our study's findings do raise follow-up questions. If a user asks the same question of an AI in multiple ways, perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided, will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
