'Writing is thinking': Do students who use ChatGPT learn less?
When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories.
"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago.
"They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.
Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester -- including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling".
So she was not surprised by recent research, which suggested that students who use ChatGPT to write essays engage in less critical thinking.
The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators.
The team of MIT researchers behind the paper has received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.
'Soulless' AI essays
For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains.
The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.
The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often.
And more than 80 percent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 percent of the other two groups.
By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting.
The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality, and insight.
However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid.
She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity.
Kosmyna emphasised it was too early to draw conclusions from the study's small sample size, but called for more research into how AI tools could be used more carefully to help learning.
Ashley Juavinett, a neuroscientist at the University of California, San Diego, who was not involved in the research, criticised some "off-base" headlines that wrongly extrapolated from the preprint.
"This paper does not contain enough evidence or the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.
Thinking outside the box
Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common.
Sometimes students do not even change the font when they copy and paste from ChatGPT, she said.
But Leitzinger called for empathy for students, saying they can get confused when universities encourage the use of AI in some classes but ban it in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways.
But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.
A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet, and generating ideas.
"I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.
The problem goes beyond high school and university students.
Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.
"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.
AFP

Related Articles


Mail & Guardian
Digital technology must speak African languages
(Graphic: John McCann/M&G)

Every year on 25 May, Africa Day is observed to celebrate the continent's strength and rich cultural heritage. But it is also a day that reminds us how far we still have to go. Across Africa, many still face daily struggles with unemployment, poverty and inadequate access to basic services. Less visible, but just as urgent, is another kind of inequality: language.

As governments increasingly connect with citizens through digital platforms, millions are left out, partly because the technology does not speak their language. Africa is home to more than 2 000 languages and a vibrant linguistic and cultural tradition, yet its civic tech infrastructure remains stubbornly monolingual. In a world where artificial intelligence (AI) and digital tools increasingly mediate civic engagement, leaving African languages out of these platforms is both a technical oversight and a governance failure.

Civic technology (civic tech) refers to digital tools used to promote citizen engagement, government transparency and public participation. From apps that track service delivery to platforms that allow people to report corruption and access public services, civic tech is important for participatory democracy. But what happens when the very people these tools are meant to serve and empower cannot understand them?

The reality is that most African civic tech platforms are designed in English or French, the languages of former colonial powers. This excludes most citizens, who are more comfortable in indigenous languages such as Swahili, Yoruba or isiZulu. In most cases, English is the default interface language, even in countries where only a minority speak English fluently. A key reason for this dominance is that English is the primary language of the internet, where most of the training data for the language technologies used in digital tools comes from.
Natural language processing (NLP), the AI subfield that allows machines to understand and generate human language, depends on large, annotated datasets. These are widely available for English but rarely for African languages, many of which are considered 'low-resource'. Tech developers often lack the training, tools or funding to build NLP models for these languages, especially when faced with the added complexity of dialectal variation, oral traditions and frequent code-switching.

Another reason is that civic tech initiatives are often concentrated in developed urban areas, where English tends to be the main language of communication. This creates a situation where digital governance tools are more responsive to elites and uphold old hierarchies.

The other barrier is institutional. In many cases, language inclusion is an afterthought in civic tech development, with design decisions made by teams that do not consider the linguistic realities of the users they serve. This disconnect is worsened by the inadequacy of language policies or government mandates requiring digital platforms to support indigenous languages. As a result, civic tech ends up amplifying the voices of those already heard (urban, educated and English-speaking) while muting those on the margins.

Take South Africa, for instance. It has 11 official spoken languages. Post-apartheid reforms may have constitutionally elevated African languages, but digital systems have not caught up. Language inequity is being replicated in digital space, and this often results in diminished civic participation, poor service uptake and distrust in institutions. These problems are worse in rural areas, where literacy in former colonial languages is low.

In Kenya, for example, citizen feedback platforms like Ushahidi have struggled to reach monolingual Swahili speakers. In Nigeria, digital voter education tools often exclude Hausa, Igbo or Yoruba, creating information asymmetries in the democratic process.
In Ethiopia, the dominance of Amharic-based civic systems means that minority language speakers in Oromia or Tigray are digitally disenfranchised.

There are growing efforts across the continent to localise AI and digital governance tools and, equally, lessons to learn from these initiatives. The Masakhane project, for example, is a pan-African research initiative developing machine translation models for African languages. In Rwanda, Kinyarwanda-language platforms are being integrated into agricultural extension services, enabling farmers to get weather forecasts and pricing in real time.

Open-source solutions are also important. Projects such as Mozilla Common Voice have crowdsourced voice data in several African languages. These community-collected datasets can help train AI and language technologies to understand under-resourced languages, bypassing the expensive proprietary route. As these efforts grow, so does the need to centre accessibility and inclusion from the very beginning of civic tech projects.

'We are quick to condemn inaccessibility in the physical space because it is glaring, but we are making the digital space inaccessible because we think it doesn't matter,' says Mark Renja, project manager at Code for Africa. Others in the civic tech space echo this view.

Professor Mpho Primus, co-director of the Institute of AI Systems at the University of Johannesburg, argues that the rise of the Fifth Industrial Revolution (a shift focused on ethics, collaboration and human-centred AI) provides a key opportunity for change. She explains that this new paradigm aligns with Africa's pluralistic and multilingual societies, if we choose to embrace it. She notes that integrating African languages into emerging technologies would not only help bridge the digital divide but could also position the continent as a leader in shaping ethical AI development. 'The push toward human-centred AI requires linguistic inclusion to be at the forefront,' says Primus.
Importantly, there is a strong case for governments to mandate the inclusion of indigenous languages in all e-governance systems. This includes local language support in digital identity systems, chatbots, mobile apps and voter education platforms. Multilingual support should not be viewed as a 'feature' but as a default standard, much like data protection or accessibility for persons with disabilities.

Donors and international development partners also have a role to play. Too often civic tech funding is tied to short-term performance metrics (number of users, clicks or reports filed) rather than long-term inclusivity. But trust is the foundation on which civic tech succeeds and delivers. If marginalised communities do not trust the system or the institutions behind it, the technology will either fail or exacerbate inequalities. Language inclusion is one way to build that trust. A multilingual platform may be slower to scale in the short term, but it is more likely to foster trust, uptake and resilience. Funders must be willing to back projects that prioritise inclusion over convenience, invest in research that improves the quality and availability of language data, and support programmes that connect technology, governance and language inclusion.

Finally, we must reframe language not as a barrier but as an enabler. African languages are rich in nuance, metaphor and centuries of indigenous knowledge. When we include them in civic tech, we make these tools more accessible and meaningful. Imagine an AI tool that interprets a proverb-laden community feedback report in Tshivenda, or a chatbot that explains land tenure in Wolof using culturally grounded analogies. That is the kind of technology that truly speaks to people. As AI becomes central to everything from taxation to public service delivery, the cost of exclusion will grow.
Civic tech needs to be built with more voices at the table, especially from communities that speak lesser-known or low-resource African languages. A digital state that cannot speak the language of its people is a state that cannot hear them either.

Nnaemeka Ohamadike is a senior data analyst at Good Governance Africa.


eNCA
AI robots fill in for weed killers and farm hands
LOS ANGELES - Oblivious to the punishing midday heat, a wheeled robot powered by the sun and infused with artificial intelligence carefully combs a cotton field in California, plucking out weeds.

As farms across the United States face a shortage of labourers and weeds grow resistant to herbicides, startup Aigen says its robotic solution -- named Element -- can save farmers money, help the environment and keep harmful chemicals out of food.

"I really believe this is the biggest thing we can do to improve human health," co-founder and chief technology officer Richard Wurden told AFP, as robots made their way through crops at Bowles Farm in the town of Los Banos. "Everybody's eating food sprayed with chemicals."

Wurden, a mechanical engineer who spent five years at Tesla, went to work on the robot after relatives who farm in Minnesota told him weeding was a costly bane. Weeds are becoming immune to herbicides, but a shortage of labourers often leaves chemicals as the only viable option, according to Wurden.

"No farmer that we've ever talked to said 'I'm in love with chemicals'," added Aigen co-founder and chief executive Kenny Lee, whose background is in software. "They use it because it's a tool -- we're trying to create an alternative."

Element the robot resembles a large table on wheels, solar panels on top. Metal arms equipped with small blades reach down to hoe between crop plants. "It actually mimics how humans work," Lee said as the temperature hit 90 degrees Fahrenheit (32 degrees Celsius) under a cloudless sky. "When the sun goes down, it just powers down and goes to sleep; then in the morning it comes back up and starts going again."

The robot's AI system takes in data from on-board cameras, allowing it to follow crop rows and identify weeds. "If you think this is a job that we want humans doing, just spend two hours in the field weeding," Wurden said.
Aigen's vision is for workers who once toiled in the heat to be "upskilled" to monitor and troubleshoot the robots. Along with the on-board AI, the robots communicate wirelessly with small control centres, notifying handlers of mishaps.

Future giant?

Aigen has robots running in tomato, cotton and sugar beet fields, and touts the technology's ability to weed without damaging the crops. Lee estimated that it takes about five robots to weed 65 hectares of farmland.

The robots made by the 25-person startup -- based in the city of Redmond, outside Seattle -- are priced at $50,000. The company is focused on winning over politically conservative farmers with a climate-friendly option that relies on the sun instead of the costly diesel fuel that powers heavy machinery. "Climate, the word, has become politicized but when you get really down to brass tacks farmers care about their land," Lee said.

The technology caught the attention of Amazon Web Services (AWS), the e-commerce giant's cloud computing unit. Aigen was chosen for AWS's "Compute for Climate" fellowship programme, which provides AI tools, data centre power and technical help for startups tackling environmental woes. "Aigen is going to be one of the industry giants in the future," said AWS head of climate tech startups business development Lisbeth Kaufman. "I think about Ford and the Model T, or Edison and the light bulb -- that's Kenny and Rich and Aigen."

TimesLIVE
AI needn't make you anxious — Sameer Rawjee on writing 'Taking the Anxiety out of AI'
You know that feeling when you study super hard for a test and then none of the questions come up? That frustration, like it was all for nothing? That's how I felt when I walked out of a podcast with a major bank in Sandton where I had prepared for more than two weeks to deliver my insights on AI. Of course the bank only needed to produce a 20-minute piece -- who is realistically going to listen to anything longer than that -- but still, the moment just felt like a cliffhanger.

I sat in an Uber on my way back home that day and emailed someone in publishing on LinkedIn; it turned out we had mutual connections. I then got introduced to an agent. I wrote a book proposal in 30 minutes on Gmail, no attachments. Five emails and six weeks later, I signed a book deal with Penguin. I guess I found my way to finish my podcast!

Look, I've been writing for a while. I love writing. I write on many different topics. I thought my first book deal might be on spirituality or philosophy, topics closer to my heart. But I've also worked in Silicon Valley for many years, including at Google's headquarters. So I have learnt a lot about how to think for the future and make sense of how the world is unfolding with respect to new technologies, business models and cultural evolutions. I'm not an AI expert -- and who is these days anyway? -- but it's also not rocket science to contextualise a new technology and draw patterns from past revolutions and sci-fi films to make sense of what to do next, whether that's in business, education or your personal life.

I think this space needs more philosophers and anthropologists -- people who can understand the technology but also question the deeper meaning behind our humanity. We are going into a future where AI inventors are motivated by so many dichotomous beliefs, some exciting and some scary, that we also need people who can simply question what it means to be human now to join the global conversation.
I think that's what my book does. It asks questions such as: 'What does it mean to have a soul? How is your brain different from a computer, and what does it mean to be emotionally intelligent in ways that are different from machines?' These are questions that really matter now. And those who pay attention to them, beyond just the basics of 'how to use ChatGPT' or 'how to make cash with AI', will really be able to make advances in the new era.

I suppose the hardest part of writing this book was deciding whether, and where, I would use AI to write about AI. They say no-one knows you better than yourself. In that case, no-one can know AI better than itself. So yes, I did use AI to help write this book, but I also spent more than 1,000 hours writing it out of my own head and heart.

I hope this book will help you reflect on what truly matters now, and I hope it will help you and your family make big leaps in your thinking. For every threat to our jobs and existence, there are 1,000 more opportunities to be excited about. This is the time to reshape your thinking from what you think you might lose from this revolution to what you might actually gain. Wishing you the best this year; let's continue the conversation on socials.