Latest news with #MichaelGerlich


Mint
8 hours ago
- Science
Does AI make you stupid?
AS ANYBODY WHO has ever taken a standardised test will know, racing to answer an expansive essay question in 20 minutes or less takes serious brain power. Having unfettered access to artificial intelligence (AI) would certainly lighten the mental load. But as a recent study by researchers at the Massachusetts Institute of Technology (MIT) suggests, that help may come at a cost. Over the course of a series of essay-writing sessions, students working both with and without ChatGPT were hooked up to electroencephalograms (EEGs) to measure their brain activity as they toiled. Across the board, the AI users exhibited markedly lower neural activity in parts of the brain associated with creative functions and attention. Students who wrote with the chatbot's help also found it much harder to provide an accurate quote from the paper that they had just produced.

The findings are part of a growing body of work on the potentially detrimental effects of AI use on creativity and learning. This work points to important questions about whether the impressive short-term gains afforded by generative AI may incur a hidden long-term debt.

The MIT study augments the findings of two other high-profile studies on the relationship between AI use and critical thinking. The first, by researchers at Microsoft Research, surveyed 319 knowledge workers who used generative AI at least once a week. The respondents described undertaking more than 900 tasks, from summarising lengthy documents to designing a marketing campaign, with the help of AI. According to participants' self-assessments, only 555 of these tasks required critical thinking, such as having to review an AI output closely before passing it to a client, or revising a prompt after the AI generated an inadequate result on the first go. The rest of the tasks were deemed essentially mindless.
Overall, a majority of workers reported needing either less or much less cognitive effort to complete tasks with generative-AI tools such as ChatGPT, Google Gemini or Microsoft's own Copilot AI assistant, compared with doing those tasks without AI.

Another study, by Michael Gerlich, a professor at SBS Swiss Business School, asked 666 individuals in Britain how often they used AI and how much they trusted it, before posing them questions based on a widely used critical-thinking assessment. Participants who made more use of AI scored lower across the board. Dr Gerlich says that after the study was published he was contacted by hundreds of high-school and university teachers dealing with growing AI adoption among their students, who "felt that it addresses exactly what they currently experience".

Whether AI will leave people's brains flabby and weak in the long term remains an open question. Researchers for all three studies have stressed that further work is needed to establish a definitive causal link between elevated AI use and weakened brains. In Dr Gerlich's study, for example, it is possible that people with greater critical-thinking prowess are simply less likely to lean on AI. The MIT study, meanwhile, had a tiny sample size (54 participants in all) and focused on a single narrow task. Moreover, generative-AI tools explicitly seek to lighten people's mental loads, as many other technologies do. As long ago as the 5th century BC, Socrates was quoted as grumbling that writing is not "a potion for remembering, but for reminding". Calculators spare cashiers from computing a bill. Navigation apps remove the need for map-reading. And yet few would argue that people are less capable as a result.
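The core of a correlational finding like Dr Gerlich's can be illustrated with a short sketch. The data below are invented for illustration only; the real study used validated survey instruments and more sophisticated statistics.

```python
# Illustrative sketch only: the data are invented, standing in for survey
# responses. Each pair is (self-reported AI use, critical-thinking score),
# both on arbitrary scales.
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

ai_use = [1, 2, 2, 3, 4, 4, 5, 5, 6, 7]           # hypothetical usage ratings
scores = [88, 85, 90, 80, 76, 79, 70, 68, 66, 60]  # hypothetical test scores

r = pearson_r(ai_use, scores)
print(f"r = {r:.2f}")  # strongly negative for this invented sample
```

A negative r of this kind shows association only; as the researchers themselves stress, it cannot distinguish "AI use weakens thinking" from "strong thinkers use less AI".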
There is little evidence to suggest that allowing machines to do users' mental bidding alters the brain's inherent capacity for thinking, says Evan Risko, a professor of psychology at the University of Waterloo who, along with a colleague, Sam Gilbert, coined the term "cognitive offloading" to describe how people shrug off difficult or tedious mental tasks to external aids. The worry is that, as Dr Risko puts it, generative AI allows one to "offload a much more complex set of processes". Offloading some mental arithmetic, which has only a narrow set of applications, is not the same as offloading a thought process like writing or problem-solving.

And once the brain has developed a taste for offloading, it can be a hard habit to kick. The tendency to seek the least effortful way to solve a problem, known as "cognitive miserliness", could create what Dr Gerlich describes as a feedback loop. As AI-reliant individuals find it harder to think critically, their brains may become more miserly, which will lead to further offloading. One participant in Dr Gerlich's study, a heavy user of generative AI, lamented: "I rely so much on AI that I don't think I'd know how to solve certain problems without it."

Many companies are looking forward to the possible productivity gains from greater adoption of AI. But there could be a sting in the tail. "Long-term critical-thinking decay would likely result in reduced competitiveness," says Barbara Larson, a professor of management at Northeastern University. Prolonged AI use could also make employees less creative. In a study at the University of Toronto, 460 participants were instructed to propose imaginative uses for a series of everyday objects, such as a car tyre or a pair of trousers. Those who had been exposed to ideas generated by AI tended to produce answers deemed less creative and diverse than those of a control group who worked unaided.
When it came to the trousers, for instance, the chatbot proposed stuffing a pair with hay to make half of a scarecrow—in effect suggesting trousers be reused as trousers. An unaided participant, by contrast, proposed sticking nuts in the pockets to make a novelty bird feeder.

There are ways to keep the brain fit. Dr Larson suggests that the smartest way to get ahead with AI is to limit its role to that of "an enthusiastic but somewhat naive assistant". Dr Gerlich recommends that, rather than asking a chatbot to generate the final desired output, one should prompt it at each step on the path to the solution. Instead of asking it "Where should I go for a sunny holiday?", for instance, one could start by asking where it rains the least, and proceed from there. Members of the Microsoft team have also been testing AI assistants that interrupt users with "provocations" to prompt deeper thought. In a similar vein, a team from Emory and Stanford Universities has proposed rewiring chatbots to serve as "thinking assistants" that ask users probing questions, rather than simply providing answers. One imagines that Socrates might heartily approve.

Get with the program

Such strategies might not be all that useful in practice, even in the unlikely event that model-builders tweaked their interfaces to make chatbots clunkier, or slower. They could even come at a cost. A study by Abilene Christian University in Texas found that AI assistants which repeatedly jumped in with provocations degraded the performance of weaker coders on a simple programming task. Other potential measures to keep people's brains active are more straightforward, if also rather more bossy. Overeager users of generative AI could be required to come up with their own answer to a query, or simply wait a few minutes, before they are allowed to access the AI.
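The "try first, or wait" idea can be made concrete with a small sketch: before a query reaches the model, the user must either supply a draft answer of their own or sit out a short delay. Everything here (the function name, the thresholds) is hypothetical, invented for illustration rather than taken from any of the cited studies.

```python
# Hypothetical cognitive-forcing gate. The names and thresholds below are
# invented for illustration; they do not come from the cited research.
MIN_DRAFT_WORDS = 10    # user must attempt an answer of at least this length...
MIN_WAIT_SECONDS = 120  # ...or wait this long before the AI will respond

def ai_access_allowed(draft: str, seconds_since_query: float) -> bool:
    """Allow the AI to respond only once the user has tried, or waited."""
    attempted = len(draft.split()) >= MIN_DRAFT_WORDS
    waited = seconds_since_query >= MIN_WAIT_SECONDS
    return attempted or waited
```

As Dr Buçinca's research suggests, such friction tends to improve performance but annoy users, so any real deployment would have to balance the two.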
Such "cognitive forcing" may lead users to perform better, according to Zana Buçinca, a researcher at Microsoft who studies these techniques, but it will be less popular. "People do not like to be pushed to engage," she says. Demand for workarounds would therefore probably be high. In a demographically representative survey conducted in 16 countries by Oliver Wyman, a consultancy, 47% of respondents said they would use generative-AI tools even if their employer forbade it.

The technology is so young that, for many tasks, the human brain remains the sharpest tool in the toolkit. But in time both the consumers of generative AI and its regulators will have to assess whether its wider benefits outweigh any cognitive costs. If stronger evidence emerges that AI makes people stupid, will they care?


Globe and Mail
4 days ago
AI's cost to critical thinking
This is the weekly Work Life newsletter. If you are interested in more careers-related content, sign up to receive it in your inbox.

School may be out for the summer, but the conversation around artificial intelligence in education is heating up. Joe Castaldo, a Globe and Mail reporter covering AI and technology, recently joined The Decibel podcast to discuss how AI may be dulling students' critical thinking skills and answer the question, 'What are we losing when we rely too much on AI?'

To set the scene, Mr. Castaldo retold a story from Swiss business professor Michael Gerlich: Mr. Gerlich was sitting in a university auditorium behind a student who was using ChatGPT during a lecture to generate questions that the student would go on to ask the guest speaker. The problem was that they were questions the speaker had already extensively answered. 'The student wasn't even paying attention,' Mr. Castaldo says.

It's one small example of a growing trend that sparked a study by Mr. Gerlich, who surveyed more than 600 students to explore the connection between AI usage and critical thinking. 'He found the higher somebody's AI use, the lower their critical thinking skills,' Mr. Castaldo says. 'And it was most pronounced for younger people, like under 25.' While the study didn't prove causation, it raises flags among educators. Some professors reported seeing students who couldn't make even basic academic decisions without consulting AI.

However, it's not just students leaning on AI; knowledge workers are too. According to a survey by workplace technology platform Owl Labs and Pulse, nearly 67 per cent of companies are using AI and 46 per cent of employees report they're either heavily using AI at work or somewhat reliant on it. This surge brings a cost: a study by Microsoft Research and Carnegie Mellon surveyed 319 knowledge workers and found that the more confident someone was in AI's abilities, the less critical thinking they reported.
The survey revealed a few motivational barriers that cause workers to opt out of critical thinking. These barriers contribute to a broader pattern: even well-intentioned or capable knowledge workers may opt out of critical engagement when organizational structures or task demands don't support it. Microsoft's research suggests that without motivating workers to critique outputs, AI tools tend to shift cognition from production to oversight — and that can be a slippery slope.

'It's not that the tools themselves are bad, it's how we use them. We can use them in good, effective ways, but a lot of that comes down to the individual's motivation,' Mr. Castaldo says. From boardrooms to classrooms, the real test will be how leaders cultivate environments where AI challenges us, not just does things for us.

76 per cent: that's how many employers are already using some kind of personality and skills tests in assessing job candidates, according to a recent report from TestGorilla.

Many hiring managers have been faced with the same challenge: when a new role pops up that demands new skills, do they recruit new talent or retain and retrain the people already on the team? This article says that the classic '50-per-cent rule,' popularized by Robert Townsend, still holds weight – especially in today's fast-moving skills landscape. The rule advises giving proven internal candidates a shot, even if they only meet half the job's requirements. The missing piece? Support. With mentorship and a strong learning culture, employees can grow into roles while boosting retention and engagement.

'This stage of life has largely been ignored. That's an injustice that we need to change, particularly when you think about not just the impact to one's own personal health, but the impact to the economy and to society over all. This is an issue that demands urgent attention and action,' says Janet Ko, co-founder of the Menopause Foundation of Canada.
In this article, The Globe explores how the lack of menopause awareness and support doesn't only affect women at work, but the broader economy. It also covers some of the positive changes we've seen at Canadian workplaces and how we can create more inclusive, productive workplaces.

Canada is witnessing a growing trend of high-earning individuals leaving the country. Tax advisors report a sharp uptick in wealthy Canadians exploring or finalizing non-resident status – a significant jump compared to a decade ago.


NDTV
30-05-2025
AI Announcer At Graduation Ceremony Sparks Debate: "Innovation Or Impersonal?"
Students at New York City's Pace University were left shocked when college authorities used artificial intelligence (AI) to read aloud their names during the graduation ceremony. While professors or human announcers usually read the names, calling the students to the stage to receive their degrees, the ceremony at the US university featured a voice entirely created using AI. A viral video posted by @therundownai on Instagram showed the graduating students standing in a line with their phones out. After reaching one of the faculty members on the stage, they showed a QR code, which was promptly scanned. Soon, a raspy, synthetic voice, generated by AI, uttered the student's name over the sound system. According to a report in the New York Post, prior to the ceremony, students were directed to a website where they could phonetically spell their names and confirm the pronunciation.

Social media reacts

While a section of social media users appreciated the effort to ensure correct pronunciations, others felt the approach lacked the personal touch of a human announcer. "Imagine a school that would expel you for using AI to write a paper, but will use AI to read graduate names for them," said one user, while another added: "This is just lazy." A third commented: "I would appreciate having my name said correctly."

AI's impact on students

While colleges are using AI to read the names without any mistakes, studies have shown that the technology is having a negative impact on students. Earlier this year, a study published in the journal Societies showed that AI tools were diminishing the critical thinking abilities of students.
Analysis from more than 650 people aged 17 and over in the UK showed evidence of lower critical thinking skills among the young people who extensively delegated their memory and problem-solving tasks to AI through a phenomenon known as cognitive offloading. "Younger participants who exhibited higher dependence on AI tools scored lower in critical thinking compared to their older counterparts," wrote lead author Michael Gerlich of SBS Swiss Business School. The participants acknowledged that their reliance on AI for decision-making and memory tasks had them concerned about losing critical thinking skills. Some even expressed concerns that AI was altering their decisions through its own, inherent bias.


Yahoo
27-04-2025
- Science
Experts Concerned That AI Is Making Us Stupider
Artificial intelligence might be creeping its way into every facet of our lives — but that doesn't mean it's making us smarter. Quite the reverse.

A new analysis of recent research by The Guardian looked at a potential irony: whether we're giving up more than we gain by shoehorning AI into our day-to-day work, offloading so many intellectual tasks that it erodes our own cognitive abilities. The analysis points to a number of studies that suggest a link between cognitive decline and AI tools, especially in critical thinking. One research article, published in the journal Frontiers in Psychology — and itself run through ChatGPT to make "corrections," according to a disclaimer that we couldn't help but notice — suggests that regular use of AI may cause our actual cognitive chops and memory capacity to atrophy.

Another study, by Michael Gerlich of SBS Swiss Business School in the journal Societies, points to a link between "frequent AI tool usage and critical thinking abilities," highlighting what Gerlich calls the "cognitive costs of AI tool reliance." The researcher uses an example of AI in healthcare, where automated systems make a hospital more efficient at the cost of full-time professionals whose job is "to engage in independent critical analysis" — to make human decisions, in other words.

None of that is as far-fetched as it sounds. A broad body of research has found that brain power is a "use it or lose it" asset, so it makes sense that turning to ChatGPT for everyday challenges like writing tricky emails, doing research, or solving problems would have negative results. As humans offload increasingly complex problems onto various AI models, we also become prone to treating AI like a "magic box," a catch-all capable of doing all our hard thinking for us. This attitude is heavily pushed by the AI industry, which uses a blend of buzzy technical terms and marketing hype to sell us on ideas like "deep learning," "reasoning," and "artificial general intelligence."
Case in point, another recent study found that a quarter of Gen Zers believe AI is "already conscious." By scraping thousands of publicly available datapoints in seconds, AI chatbots can spit out seemingly thoughtful prose, which certainly gives the appearance of human-like sentience. But it's that exact attitude that experts warn is leading us down a dark path. "To be critical of AI is difficult — you have to be disciplined," says Gerlich. "It is very challenging not to offload your critical thinking to these machines."

The Guardian's analysis also cautions against painting with too broad a brush and blaming AI, exclusively, for the decline in basic measures of intelligence. That phenomenon has plagued Western nations since the 1980s, coinciding with the rise of neoliberal economic policies that led governments in the US and UK to roll back funding for public schools, disempower teachers, and end childhood food programs. Still, it's hard to deny stories from teachers that AI cheating is nearing crisis levels. AI might not have started the trend, but it may well be pushing it to grim new extremes.