American Students Are Relying On ChatGPT - At Their Own Risk

Newsweek | June 24, 2025
The use of generative artificial intelligence by students has increased over the last two years, but now research has revealed what is driving the trend.
A new survey found that students appreciated the ability of large language models (LLMs) like ChatGPT to provide information without judgment, with many respondents describing the technology as a "safe and supportive" learning tool.
Why It Matters
The use of artificial intelligence in academic work is one of the biggest ethical issues facing the education sector. Tools like ChatGPT, which are regularly updated to become more capable, can help students with their work, but there is concern that overreliance could undermine learning.
What To Know
Last year, a study in the journal Computers and Education: Artificial Intelligence found that, of 490 university students surveyed, nearly one in four (23.1 percent) relied on ChatGPT for drafting assignments and writing homework.
Those findings have now been reinforced by a new report in the journal TechTrends, published in June this year, which found that 78.7 percent of respondents were using generative AI regularly for their studies.
"Particularly noteworthy is that students perceived GenAI as useful because they are not judged by it and because of its anonymity," the report read.
"Students generally feel comfortable using GenAI for either general or learning purposes, perceiving these tools as beneficial especially with regard to their anonymity and non-judgmental nature."
However, reliance on AI can be a double-edged sword. Another study, from MIT, found that extended use of LLMs for research and writing could have long-term behavioral effects, such as reduced brain engagement and increased laziness.
The study, released this week without peer review, indicated that an overreliance on tools like ChatGPT "could actually harm learning, especially for younger users."
It compared brain activity between students using ChatGPT and students using traditional writing methods. The study found that the AI-assisted writers were engaging their deep memory processes far less than the control groups, and that their information recall skills were worse after producing work with ChatGPT.
What People Are Saying
Akli Adjaoute, an artificial intelligence security expert and author of Inside AI, told Newsweek of another pitfall for students. He said generative AI remains influenced by human hands in its programming and "cannot be trained to be completely free of bias."
He added, "This is not a bug, it just reflects our world. AI does not invent knowledge. It learns from data created by people. And people, even with the best intentions, carry assumptions, disagreements, and historical baggage.
"AI systems are trained on information from many sources: books, websites, job applications, police records, medical histories, and social media. All of this information reflects human choices, including what we believe, what we value, and who has held power.
"If the data contains stereotypes or discrimination, the AI will absorb it. In many cases, it does not just copy the bias; it amplifies it."
What Happens Next
ChatGPT and other LLM tools continue to be updated regularly, but the academic sector is not moving as fast, and there is still no unified approach to how AI tools should be handled.

Related Articles

OpenAI employees share their 3 favorite tips for using ChatGPT

Business Insider | 31 minutes ago

If you ever happen to see Nick Turley, the head of ChatGPT at OpenAI, muttering to himself on a weekday morning, it might be because he's talking to a chatbot. On a recent episode of the OpenAI podcast, Turley said that ChatGPT's voice feature is his favorite tip for using the technology.

"On my way to work, I'll use it to process my own thoughts. With some luck, and I think this works most days, I'll have the restructured list of to-dos by the time I actually get there," he said, adding that the voice feature isn't yet mainstream because there are still a bunch of small "kinks." He said he finds it valuable to force himself to articulate his thoughts aloud, and he wants to see the feature improve next year.

Mark Chen, OpenAI's chief research officer, said on the podcast that he's a fan of the deep research feature, especially before an introduction. "When I go meet someone new, when I'm going to talk to someone about AI, I just preflight topics," Chen said. "I think the model can do a really good job of contextualizing who I am, who I'm about to meet, and what things we might find interesting."

And podcast host Andrew Mayne, who was formerly OpenAI's science communicator and worked on ChatGPT, said he uses the technology when he's out at a restaurant. "I take a photograph of a menu and I'm like, 'Help me plan a meal or whatever, I'm trying to stick to a diet,'" Mayne said.

Turley, however, cautioned against using the same trick for the wine list. "It keeps embarrassing me with hallucinated wine recommendations, and I go order it and they're like, 'Never heard of this one,'" he said.

Corporate executives across companies are using AI in their daily lives, and OpenAI CEO Sam Altman is no different. Altman said on the "ReThinking" podcast in January that he uses it in "the boring ways," for things like processing emails and summarizing documents.

When Altman spoke on the OpenAI podcast in June, he said that he uses ChatGPT "constantly" as a father. At the time, he said he was mainly using it to research developmental stages. "Clearly, people have been able to take care of babies without ChatGPT for a long time," Altman said. "I don't know how I would have done that."

Microsoft to slash 9,000 jobs in latest brutal cut amid AI push: report

New York Post | an hour ago

Microsoft said Wednesday that it will lay off about 9,000 workers in the software giant's latest round of brutal cuts this year. The layoffs will affect less than 4% of Microsoft's global workforce, hitting workers across different teams with varying levels of experience, a source familiar with the matter told CNBC.

Microsoft has already slashed thousands of positions this year as it focuses on cutting layers of management and shifting resources toward the artificial intelligence race. Bloomberg reported last month that Microsoft was planning job cuts in its sales division.

"We continue to implement organizational changes necessary to best position the company and teams for success in a dynamic marketplace," a Microsoft spokesperson told CNBC. Meanwhile, Microsoft reported nearly $26 billion in net income and $70 billion in revenue in the most recent quarter, far outperforming Wall Street estimates. Microsoft did not immediately respond to The Post's request for comment.

Its most recent layoff round, in May, slashed more than 6,000 jobs, or about 3% of its global workforce, as it eradicates middle management roles. The layoffs announced Wednesday similarly seek to reduce the layers between individual contributors and top executives, a source familiar with the matter told CNBC.

In January, the software giant axed less than 1% of its workforce based on performance in an attempt to keep up with cutthroat tech rivals, mimicking Elon Musk's "hardcore" approach. As of last summer, the company employed 228,000 workers. It cut 10,000 roles throughout 2023. Microsoft has led mammoth layoff rounds in the past, axing 18,000 roles in a single sweep in 2014 after acquiring Finnish telecommunications firm Nokia.

The company is projecting strong revenue growth of 14% year-over-year as it expands its Azure cloud business and corporate software subscriptions. Shares in Microsoft have risen more than 17% so far this year.

Meanwhile, the company is reportedly weighing whether to abandon its breakthrough partnership with Sam Altman's OpenAI. It has considered pausing talks with the ChatGPT maker if the two parties are not able to agree on the size of Microsoft's future stake in OpenAI, the Financial Times reported last month. The company will rely on its existing contract with OpenAI through 2030, according to the report.

Several other software companies have trimmed their workforces this year, including homework helper Chegg and CrowdStrike, which suffered a massive outage last year that disrupted airlines, banks and the hospitality industry.

Scientists Use A.I. to Mimic the Mind, Warts and All

New York Times | an hour ago

Companies like OpenAI and Meta are in a race to make something they like to call artificial general intelligence. But for all the money being spent on it, A.G.I. has no settled definition. It's more of an aspiration to create something indistinguishable from the human mind.

Artificial intelligence today is already doing a lot of things that were once limited to human minds, such as playing championship chess and figuring out the structure of proteins. ChatGPT and other chatbots are crafting language so humanlike that people are falling in love with them.

But for now, artificial intelligence remains very distinguishable from the human kind. Many A.I. systems are good at one thing and one thing only. A grandmaster can drive a car to a chess tournament, but a chess-playing A.I. system is helpless behind the wheel. An A.I. chatbot can sometimes make very simple, and very weird, mistakes, like letting pawns move sideways in chess, an illegal move.

For all these shortcomings, an international team of scientists believes that A.I. systems can help them understand how the human mind works. They have created a ChatGPT-like system that can play the part of a human in a psychological experiment and behave as if it has a human mind. Details about the system, known as Centaur, were published on Wednesday in the journal Nature.

In recent decades, cognitive scientists have created sophisticated theories to explain various things that our minds can do: learn, recall memories, make decisions and more. To test these theories, cognitive scientists run experiments to see if human behavior matches a theory's predictions.

Some theories have fared well on such tests and can even explain the mind's quirks. We generally choose certainty over risk, for instance, even if that means forgoing a chance to make big gains. If people are offered $1,000, they will usually take that firm offer rather than make a bet that might, or might not, deliver a much bigger payout.
