'Writing is thinking': do students who use ChatGPT learn less?

The Star, a day ago
PARIS: When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories.
"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago.
"They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.
Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester – including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling".
So she was not surprised by recent research which suggested that students who use ChatGPT to write essays engage in less critical thinking.
The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators.
The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.
'Soulless' AI essays
For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains.
The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.
The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often.
And more than 80% of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10% of the other two groups.
By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting.
The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.
However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid.
She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity.
Kosmyna emphasised it was too early to draw conclusions from the study's small sample size but called for more research into how AI tools could be used more carefully to help learning.
Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint.
"This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.
Thinking outside the bot
Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common.
Sometimes students do not even change the font when they copy and paste from ChatGPT, she said.
But Leitzinger called for empathy for students, saying they can get confused when the use of AI is being encouraged by universities in some classes but is banned in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways.
But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.
A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet and generating ideas.
"I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.
The problem goes beyond high school and university students.
Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.
"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked. – AFP

Related Articles

European companies urge EU to delay AI rules

The Sun, 4 hours ago

BRUSSELS: Dozens of Europe's biggest companies urged the EU to hit the pause button on its landmark AI rules on Thursday, warning that going too fast could harm the bloc's ability to lead in the global AI race.

The European Union's sweeping rules on artificial intelligence entered into force last year, but the bloc has since pivoted to strengthening its industries in the face of fierce competition from China and the United States. The new US administration under President Donald Trump has also taken aim at the rules. Vice President JD Vance in February lambasted the EU over 'excessive' regulation.

Now, 46 top executives, including from France's Airbus and Mistral, Dutch tech giant ASML and Germany's Lufthansa and Mercedes-Benz, are calling on Brussels to reassess the rules. They accused the EU's complex regulation of putting 'Europe's AI ambitions at risk, as it jeopardises not only the development of European champions, but also the ability of all industries to deploy AI at the scale required by global competition'.

The CEOs urged the European Commission to propose a two-year pause and 'further simplification of the new rules', for which a code of practice has yet to be released. The pause, the CEOs said, should apply to obligations on general-purpose AI models, like OpenAI's ChatGPT, and on high-risk AI systems, which were due to take effect in August 2025 and August 2026 respectively.

The EU's law takes a risk-based approach to the technology: the higher the risk to Europeans' rights or health, for example, the greater the systems' obligations to protect individuals from harm.

The EU has been working on the long-delayed code to provide guidance on how the rules should apply to general-purpose AI models, including Google's Gemini and Mistral's Le Chat. There are expectations that the code will be watered down, and the commission has said it will be published before the rules on GPAI models kick in next month.
AFP has a deal with Mistral allowing its chatbot to draw on the agency's articles to formulate responses. – AFP

Explainer-Will the EU delay enforcing its AI Act?

The Star, 4 hours ago

FILE PHOTO: A copy of "The European Union Artificial Intelligence (AI) Act" on display during the AI & Big Data Expo 2025 at the Olympia, in London, Britain, February 5, 2025. REUTERS/Isabel Infantes/File Photo

STOCKHOLM (Reuters) - With less than a month to go before parts of the European Union's AI Act come into force, companies are calling for a pause in its provisions and getting support from some politicians.

Groups representing big U.S. tech companies such as Google owner Alphabet and Facebook owner Meta, and European companies such as Mistral and ASML, have urged the European Commission to delay the AI Act by years.

WHAT IS THE AUGUST 2 DEADLINE?

Under the landmark act, which was passed a year earlier after intense debate between EU countries, the provisions come into effect in a staggered manner over several years. Some important provisions, including rules for general purpose AI (GPAI) models, are due to apply on August 2.

GPAI, which includes foundation models like those made by Google, Mistral and OpenAI, will be subject to transparency requirements such as drawing up technical documentation, complying with EU copyright law and providing detailed summaries about the content used for algorithm training. The companies will also need to test for bias, toxicity and robustness before launching.

AI models classed as posing a systemic risk and high-impact GPAI will have to conduct model evaluations, assess and mitigate risks, conduct adversarial testing, report to the European Commission on serious incidents and provide information on their energy efficiency.

WHY DO COMPANIES WANT A PAUSE?

For AI companies, the enforcement of the act means additional costs for compliance, and for those that make AI models, the requirements are tougher. But companies are also unsure how to comply with the rules, as there are no guidelines yet: the AI Code of Practice, a guidance document to help AI developers comply with the act, missed its publication date of May 2.

"To address the uncertainty this situation is creating, we urge the Commission to propose a two-year 'clock-stop' on the AI Act before key obligations enter into force," said an open letter published on Thursday by a group of 45 European companies, which also called for simplification of the new rules.

Another concern is that the act may stifle innovation, particularly in Europe, where companies have smaller compliance teams than their U.S. counterparts.

WILL IT BE POSTPONED?

The European Commission has not yet said whether it will postpone enforcement of the new rules in August. However, EU tech chief Henna Virkkunen promised on Wednesday to publish the AI Code of Practice before next month.

Some political leaders, such as Swedish Prime Minister Ulf Kristersson, have also called the AI rules "confusing" and asked the EU to pause the act. "A bold 'stop-the-clock' intervention is urgently needed to give AI developers and deployers legal certainty, as long as necessary standards remain unavailable or delayed," tech lobbying group CCIA Europe said.

The European Commission did not immediately respond to requests for comment.

(Reporting by Supantha Mukherjee in Stockholm and Foo Yun Chee in Brussels. Editing by Mark Potter)
