
Students using ChatGPT show less critical thinking: Study
"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago.
"They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.
Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester -- including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling".
So she was not surprised by recent research, which suggested that students who use ChatGPT to write essays engage in less critical thinking.
The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators.
The team of MIT researchers behind the paper has received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.

'Soulless' AI essays
For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains.
The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.
The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often.
And more than 80 percent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 percent of the other two groups.
By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting.
The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.
However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid.
She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity.
Kosmyna emphasised it was too early to draw conclusions from the study's small sample size, but called for more research into how AI tools could be used more carefully to help learning.
Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint.
"This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.

Thinking outside the bot
Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common.
Sometimes students do not even change the font when they copy and paste from ChatGPT, she said.
But Leitzinger called for empathy for students, saying they can get confused when the use of AI is being encouraged by universities in some classes but is banned in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways.
But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.
A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet and generating ideas.
"I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.
The problem goes beyond high school and university students.
Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.
"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.

Related Articles


Hindustan Times
15 minutes ago
Your deleted ChatGPT chats might still be stored and reviewed, thanks to a new court order
Anonymity may be a myth in the modern age, when tech companies and digital platforms can access whatever they want, often without us even realising it. The latest twist comes from the world of artificial intelligence, where a court order now requires OpenAI to keep records of every ChatGPT conversation, including those users thought they had deleted. For anyone who values privacy, it is a wake-up call about just how little control we have once our words are out there.

[Image: A court order forces OpenAI to keep all ChatGPT chats, raising questions about privacy and the future of digital conversations. (Unsplash)]

Deleted doesn't always mean gone

Most of us assume that when we hit delete on a chat or clear our history, our digital footprint disappears. That is no longer the case for ChatGPT users. The New York Times, along with other media outlets and authors, is suing OpenAI and Microsoft, claiming they used copyrighted content to train their AI models without permission, according to Ars Technica. As part of the lawsuit, a New York magistrate judge has ordered OpenAI to preserve all user conversations, even the ones users have tried to erase.

Normally, deleted chats vanish from OpenAI's servers after 30 days, but that process is now on hold for the duration of the case. The order affects everyone using ChatGPT's Free, Plus, Pro and Team plans, though it does not apply to Enterprise or Education accounts with special data agreements.

OpenAI is not happy about this. The company argues that the order goes against user privacy and against what people expect when they delete their chats. CEO Sam Altman has called the request troubling and says OpenAI will keep fighting it, but for now the court's decision stands: all logs must be kept indefinitely unless the order is changed.

What this means for everyday users

So what does this mean for the average person chatting with ChatGPT? In theory, your old and deleted messages could be accessed by legal teams involved in the lawsuit. OpenAI insists that only a small, audited group of legal and security staff will have access, and that these conversations will not be made public. Even so, the possibility of private chats being reviewed by strangers is unsettling for many.

In reality, most of these logs will probably never be looked at; the sheer volume of data makes it unlikely that more than a tiny fraction will be reviewed. Still, the case raises bigger questions about privacy, data security and who really owns your digital conversations once they are out in the world. The lines between personal privacy, copyright law and corporate interests are getting blurrier as AI advances and spreads, and regular users now find their private conversations caught in the middle of a legal battle between tech giants and media companies. While OpenAI promises to keep user data safe and private, the outcome of this case could set a precedent for how all AI platforms handle our information in the future.


Hindustan Times
16 minutes ago
ChatGPT is obsessed with the number '27'! Experts reveal shocking details
If you've ever asked ChatGPT to 'pick a number between 1 and 50,' chances are it replied with 27. It's not just a one-off: this oddly specific number keeps showing up across AI models, from ChatGPT to Gemini and Claude. (I tried it too.) What started as a harmless prompt has snowballed into a curious internet mystery: why is AI so obsessed with 27?

[Image: Multiple AI models show the same behaviour, and experts believe it's revealing something surprisingly human hiding in the data. (Pexels)]

[Image: I asked ChatGPT to choose a number between 1 and 50, and it chose 27. (ChatGPT)]

The explanation, it turns out, goes deeper than randomness, and it might reveal more about us than about the machines.

The pattern no one expected

Several users and researchers recently began experimenting with basic numerical prompts across leading AI chatbots, and the results were strangely consistent. While a few models veer elsewhere, LeChat occasionally towards 37 and Claude towards 42, 27 remained a recurring favourite, especially in ChatGPT. Medium writer Kartikey Sengar was one of the first to point this out, in an article analysing cross-platform outputs. He called the pattern 'too frequent to be a coincidence,' adding: 'It's not that 27 is programmed into the model—it's that it somehow keeps surfacing across AI behavior.'

Is it really the AI, or is it us?

Before diving into conspiracy theories, it's important to remember one thing: AI doesn't dream up numbers on its own. Models like ChatGPT are trained on massive datasets sourced from books, websites, forums and human conversations, so their patterns are rooted in our own behaviour. Could this preference for 27 be a mirror reflecting our own subconscious? Experts think so. 'AI is not magical. It's predictive. If humans tend to lean toward 27 in casual number selections, the model learns to do the same,' says an AI researcher who has worked on large language model behaviour. 'The output is a reflection of the input—us.'
The cultural significance of the number 27

Why 27, though? It's not a round number, nor the midpoint of the range 1 to 50. But 27 does hold surprising significance in both science and pop culture. The Moon takes about 27 days to orbit the Earth. Human skin cells regenerate roughly every 27 days, and in numerology 27 is considered a 'spiritually charged' number. Then there's the infamous '27 Club,' the group of iconic musicians, including Amy Winehouse, Kurt Cobain and Jimi Hendrix, who all died at age 27. All these layers of meaning could explain why the number has organically found its way into more content and, in turn, into AI training datasets.

What AI's favourite number says about us

More than a quirk, this numerical fixation shows how artificial intelligence can surface hidden patterns in human behaviour. The next time an AI spits out 27, it's not because it's haunted; it's because our cultural consciousness has nudged it in that direction. As models grow more sophisticated, even these small, strange patterns offer a peek into the evolving relationship between human minds and machine learning. So yes, ChatGPT might be obsessed with 27. But maybe, just maybe, we were too, all along.


Time of India
35 minutes ago
Amid Xbox job cuts and Microsoft's AI push, Matt Turnbull promotes AI tools to help manage layoffs
[Image via Getty Images]

Microsoft continues its aggressive shift towards AI investment, and the tech giant has now announced another layoff wave, this time affecting the Xbox division too. Amid the turmoil, Matt Turnbull, an executive at Xbox Game Studios Publishing, has sparked discussion by promoting AI tools as a way to help employees navigate job losses.

Matt Turnbull suggests AI as a layoff coping tool

Turnbull, a long-serving Executive Producer at Xbox Game Studios Publishing, recently ignited debate by publicly endorsing AI tools for navigating job loss. Facing the logistical and emotional chaos of layoffs, he suggested using platforms like Copilot and ChatGPT to 'help reduce the emotional and cognitive load,' framing AI as a practical assistant during a period of scarce mental energy.

[Image: Matt Turnbull suggests using AI as a layoff coping tool]

His specific recommendations included crafting detailed prompts: generating tailored 30-day career recovery plans, asking AI for help combating post-layoff imposter syndrome, and drafting networking messages to former colleagues. While he acknowledged that the tools are no replacement for human support, Turnbull positioned them as efficiency boosters for anyone feeling overwhelmed. Soon afterwards, the post was deleted.

Microsoft pursues dual strategies with job cuts and AI ambitions

Turnbull's advice lands against a stark corporate backdrop. Microsoft recently confirmed the elimination of approximately 9,000 positions globally, about 4 percent of its workforce.
Reports have cited significant cuts in the Xbox division, impacting studios such as The Initiative, Turn 10 and ZeniMax Online, alongside anticipated project cancellations. This follows earlier rounds of layoffs in 2023 and 2024, when thousands more jobs were lost.

Concurrently, Microsoft has been aggressively channelling resources into AI. The company is investing approximately $80 billion to build out data centre infrastructure, crucial for training advanced AI models. The push includes a dedicated Microsoft AI division and heavy backing of OpenAI. The strategy signals a clear corporate pivot, prioritising AI development as the future growth engine even while trimming headcount elsewhere.

The situation prompts difficult questions. Is AI primarily an efficiency tool that displaces workers, or can it genuinely support workers in transition, as Turnbull suggested? For now, Microsoft's actions, cutting jobs while funding AI massively, suggest a belief in the former, making Turnbull's proposition feel to many like cold comfort. The swiftly deleted LinkedIn post, meanwhile, speaks volumes about the sensitivity surrounding this crossroads between humans and technology.