
The professors are using ChatGPT, and some students aren't happy about it
Ella Stapleton, a business student at Northeastern University, was reviewing lecture notes her professor had posted when she noticed something odd. Halfway through the document, which the professor had made for a lesson on models of leadership, was an instruction to ChatGPT to 'expand on all areas. Be more detailed and specific.' It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.
Stapleton texted a friend in the class.
'Did you see the notes he put on Canvas?' she wrote, referring to the university's software platform for hosting course materials. 'He made it with ChatGPT.'
'OMG Stop,' the classmate responded. 'What the hell?'
Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of artificial intelligence: distorted text, photos of office workers with extraneous body parts and egregious misspellings.
She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade 'academically dishonest activities', including the unauthorised use of AI or chatbots.
'He's telling us not to use it, and then he's using it himself,' she said.
Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of AI as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. At a quarter of her total bill for the semester, that came to more than US$8,000 (RM34,312).
When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed AI detection services, despite concerns about their accuracy.
But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors' overreliance on AI and scrutinising course materials for words ChatGPT tends to overuse, such as 'crucial' and 'delve'. In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.
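For illustration only, here is a toy sketch of the kind of scan students describe. 'Crucial' and 'delve' come from the article; the other tell-words, the function name and the sample text are invented for the example, and no student quoted here used this script.

```python
# Toy illustration (not any student's actual method): count occurrences
# of words that ChatGPT tends to overuse in a course document.
import re
from collections import Counter

# 'crucial' and 'delve' are from the article; the rest are common
# additions to such lists and are illustrative only.
TELL_WORDS = {"crucial", "delve", "tapestry", "leverage", "foster"}

def scan_for_tell_words(text: str) -> Counter:
    """Count suspected AI 'tell' words, case-insensitively."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in TELL_WORDS)

if __name__ == "__main__":
    sample = "It is crucial to delve into the rich tapestry of leadership."
    print(scan_for_tell_words(sample))
    # Counter({'crucial': 1, 'delve': 1, 'tapestry': 1})
```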
For their part, professors said they used AI chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.
Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18% described themselves as frequent users of generative AI tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The AI industry wants to help, and to profit: The startups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.
(The Times has sued OpenAI for copyright infringement for use of news content without permission.)
Generative AI is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.
Making the grade
Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform, and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some 'really nice feedback' to give Marie.
'From my perspective, the professor didn't even read anything that I wrote,' said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use AI. Working at the school was a 'third job' for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.
Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.
Robert MacAuslan, vice president of AI at Southern New Hampshire, said that the school believed 'in the power of AI to transform education' and that there were guidelines for both faculty and students to 'ensure that this technology enhances, rather than replaces, human creativity and oversight.' A list of do's and don'ts for faculty forbids using tools, such as ChatGPT and Grammarly, 'in place of authentic, human-centric feedback.'
'These tools should never be used to 'do the work' for them,' MacAuslan said. 'Rather, they can be looked at as enhancements to their already established processes.'
After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.
Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. 'Not a big fan of that,' Shovlin said, after being told of Marie's experience. Shovlin is also an AI faculty fellow, whose role includes developing the right ways to incorporate AI into teaching and learning.
'The value that we add as instructors is the feedback that we're able to give students,' he said. 'It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.'
Shovlin is a proponent of incorporating AI into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and 'develop an ethical compass with AI,' he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences. 'If you screw up, you're going to be fired,' Shovlin said.
One example he uses in his own classes: In 2023, officials at Vanderbilt University's education school responded to a mass shooting at another university by sending an email to students calling for community cohesion. The message, which described promoting a 'culture of care' by 'building strong relationships with one another,' included a sentence at the end that revealed that ChatGPT had been used to write it. After students criticised the outsourcing of empathy to a machine, the officials involved temporarily stepped down.
Not all situations are so clear cut. Shovlin said it was tricky to come up with rules because reasonable AI use may vary depending on the subject. The Center for Teaching, Learning and Assessment, where he is a fellow, instead has 'principles' for AI integration, one of which eschews a 'one-size-fits-all approach.'
The Times contacted dozens of professors whose students had mentioned their AI use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organise their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognise when it hallucinates, or gets facts wrong.
There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasised the importance of transparency with students when deploying generative AI, while others said they didn't disclose its use because of students' scepticism about the technology.
Most, however, felt that Stapleton's experience at Northeastern – in which her professor appeared to use AI to generate class notes and slides – was perfectly fine. That was Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Shovlin compared it with a long-standing practice in academia of using content, such as lesson plans and case studies, from third-party publishers.
To say a professor is 'some kind of monster' for using AI to generate slides 'is, to me, ridiculous', he said.
The calculator on steroids
Shingirai Christopher Kwaramba, a business professor at Virginia Commonwealth University, described ChatGPT as a partner that saved time. Lesson plans that used to take days to develop now take hours, he said. He uses it, for example, to generate data sets for fictional chain stores, which students use in an exercise to understand various statistical concepts.
'I see it as the age of the calculator on steroids,' Kwaramba said.
Kwaramba said he now had more time for student office hours.
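As an illustration of the kind of exercise Kwaramba describes, here is a minimal sketch, assuming numpy and pandas, that generates daily sales for fictional chain stores. The store names, column names and distribution parameters are all invented; the article does not describe his actual data sets or prompts.

```python
# A minimal sketch of a fictional chain-store data set for a statistics
# exercise; store names, columns and parameters are invented, not
# Kwaramba's actual materials.
import numpy as np
import pandas as pd

def make_store_dataset(n_days: int = 90, seed: int = 0) -> pd.DataFrame:
    """Generate daily sales for three fictional stores, usable for
    practising means, variances and simple comparisons."""
    rng = np.random.default_rng(seed)
    mean_daily_sales = {"Downtown": 5200, "Mall": 4700, "Airport": 6100}
    frames = []
    for store, mean_sales in mean_daily_sales.items():
        frames.append(pd.DataFrame({
            "store": store,
            "day": np.arange(1, n_days + 1),
            "sales": rng.normal(loc=mean_sales, scale=600, size=n_days).round(2),
        }))
    return pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    df = make_store_dataset()
    print(df.groupby("store")["sales"].agg(["mean", "std"]))
```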
Other professors, including David Malan at Harvard University, said the use of AI meant fewer students were coming to office hours for remedial help. Malan, a computer science professor, has integrated a custom AI chatbot into a popular class he teaches on the fundamentals of computer programming. His hundreds of students can turn to it for help with their coding assignments.
Malan has had to tinker with the chatbot to hone its pedagogical approach, so that it offers only guidance and not the full answers. The majority of 500 students surveyed in 2023, the first year it was offered, said they found it helpful.
Rather than spend time on 'more mundane questions about introductory material' during office hours, he and his teaching assistants prioritise interactions with students at weekly lunches and hackathons – 'more memorable moments and experiences,' Malan said.
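CS50's chatbot is a custom system the article describes only in outline, but the 'guidance, not answers' idea can be sketched with an ordinary system prompt. The following is a minimal, hypothetical example assuming the OpenAI Python SDK; the model name and prompt wording are illustrative, not Harvard's.

```python
# A hypothetical "guidance, not answers" tutor, sketched with the OpenAI
# Python SDK (pip install openai; needs OPENAI_API_KEY set). This is NOT
# CS50's actual implementation; model and prompt are illustrative.
from openai import OpenAI

TUTOR_PROMPT = (
    "You are a teaching assistant for an introductory programming course. "
    "Guide the student with hints and questions, but never write the full "
    "solution or complete their assignment code for them."
)

client = OpenAI()

def ask_tutor(question: str) -> str:
    """Return hint-style guidance for a student's coding question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("My while loop never terminates. What should I check?"))
```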
Katy Pearce, a communication professor at the University of Washington, developed a custom AI chatbot by training it on versions of old assignments that she had graded. It can now give students feedback on their writing that mimics her own at any time, day or night. It has been beneficial for students who are otherwise hesitant to ask for help, she said.
'Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by AI?' she said. 'Yeah, absolutely.'
What happens then to the pipeline of future professors who would come from the ranks of teaching assistants?
'It will absolutely be an issue,' Pearce said.
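The article does not say whether Pearce fine-tuned a model or prompted one, so the sketch below substitutes simple few-shot prompting: past graded examples are placed in the prompt so the model imitates the instructor's feedback style. It assumes the OpenAI Python SDK, and the example excerpts, comments and model name are all invented.

```python
# Few-shot sketch of a style-matched feedback bot; the graded examples
# below are invented stand-ins for an instructor's real past feedback.
from openai import OpenAI

PAST_FEEDBACK = [  # hypothetical (essay excerpt, instructor comment) pairs
    ("The media affects politics a lot...",
     "Strong start, but 'a lot' is vague; name the mechanism you mean."),
    ("Agenda-setting theory suggests...",
     "Good use of theory; now tie it back to your thesis in one sentence."),
]

client = OpenAI()

def style_matched_feedback(essay: str) -> str:
    """Draft feedback on an essay in the instructor's voice."""
    examples = "\n\n".join(
        f"Essay excerpt: {excerpt}\nInstructor feedback: {comment}"
        for excerpt, comment in PAST_FEEDBACK
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Write feedback that matches the instructor's style "
                        "shown in these graded examples:\n\n" + examples},
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content
```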
A teachable moment
After filing her complaint at Northeastern, Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.
Rick Arrowood, her professor, was contrite about the episode. Arrowood, who is an adjunct professor and has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the AI search engine Perplexity and an AI presentation generator called Gamma to 'give them a fresh look'. At a glance, he said, the notes and presentations they had generated looked great.
'In hindsight, I wish I would have looked at it more closely,' he said.
He put the materials online for students to review, but emphasised that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realised the materials were flawed only when school officials questioned him about them.
The embarrassing situation made him realise, he said, that professors should approach AI with more caution and disclose to students when and how it is used. Northeastern issued a formal AI policy only recently; it requires attribution when AI systems are used and review of the output for 'accuracy and appropriateness.' A Northeastern spokesperson said the school 'embraces the use of artificial intelligence to enhance all aspects of its teaching, research and operations.'
'I'm all about teaching,' Arrowood said. 'If my experience can be something people can learn from, then, OK, that's my happy spot.' – ©2025 The New York Times Company
This article originally appeared in The New York Times.