
The professors are using ChatGPT, and some students aren't happy about it
Stapleton, a student at Northeastern University, was reviewing notes her business professor had posted for a lesson on models of leadership when she noticed something odd. Halfway through the document was an instruction to ChatGPT to 'expand on all areas. Be more detailed and specific.' It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.
Stapleton texted a friend in the class.
'Did you see the notes he put on Canvas?' she wrote, referring to the university's software platform for hosting course materials. 'He made it with ChatGPT.'
'OMG Stop,' the classmate responded. 'What the hell?'
Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of artificial intelligence: distorted text, photos of office workers with extraneous body parts and egregious misspellings.
She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade 'academically dishonest activities,' including the unauthorized use of AI or chatbots.
'He's telling us not to use it, and then he's using it himself,' she said.
Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of AI as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. The class accounted for a quarter of her total bill for the semester, more than $8,000.
When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed AI detection services, despite concerns about their accuracy.
But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors' overreliance on AI and scrutinizing course materials for words ChatGPT tends to overuse, such as 'crucial' and 'delve.' In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.
For their part, professors said they used AI chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.
Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18% described themselves as frequent users of generative AI tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The AI industry wants to help, and to profit: The startups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.
(The Times has sued OpenAI for copyright infringement for use of news content without permission.)
Generative AI is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.
Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform, and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some 'really nice feedback' to give Marie.
'From my perspective, the professor didn't even read anything that I wrote,' said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use AI. Working at the school was a 'third job' for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.
Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.
Robert MacAuslan, vice president of AI at Southern New Hampshire, said that the school believed 'in the power of AI to transform education' and that there were guidelines for both faculty and students to 'ensure that this technology enhances, rather than replaces, human creativity and oversight.' A list of do's and don'ts for faculty forbids using tools such as ChatGPT and Grammarly 'in place of authentic, human-centric feedback.'
'These tools should never be used to "do the work" for them,' MacAuslan said. 'Rather, they can be looked at as enhancements to their already established processes.'
After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.
Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. 'Not a big fan of that,' Shovlin said, after being told of Marie's experience. Shovlin is also an AI faculty fellow, whose role includes developing the right ways to incorporate AI into teaching and learning.
'The value that we add as instructors is the feedback that we're able to give students,' he said. 'It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.'
Shovlin is a proponent of incorporating AI into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and 'develop an ethical compass with AI,' he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences. 'If you screw up, you're going to be fired,' Shovlin said.
One example he uses in his own classes: In 2023, officials at Vanderbilt University's education school responded to a mass shooting at another university by sending an email to students calling for community cohesion. The message, which described promoting a 'culture of care' by 'building strong relationships with one another,' included a sentence at the end that revealed that ChatGPT had been used to write it. After students criticized the outsourcing of empathy to a machine, the officials involved temporarily stepped down.
Not all situations are so clear-cut. Shovlin said it was tricky to come up with rules because reasonable AI use may vary depending on the subject. The Center for Teaching, Learning and Assessment, where he is a fellow, instead has 'principles' for AI integration, one of which eschews a 'one-size-fits-all approach.'
The Times contacted dozens of professors whose students had mentioned their AI use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.
There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative AI, while others said they didn't disclose its use because of students' skepticism about the technology.
Most, however, felt that Stapleton's experience at Northeastern — in which her professor appeared to use AI to generate class notes and slides — was perfectly fine. That was Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Shovlin compared it with a long-standing practice in academia of using content, such as lesson plans and case studies, from third-party publishers.
To say a professor is 'some kind of monster' for using AI to generate slides 'is, to me, ridiculous,' he said.
Shingirai Christopher Kwaramba, a business professor at Virginia Commonwealth University, described ChatGPT as a partner that saved time. Lesson plans that used to take days to develop now take hours, he said. He uses it, for example, to generate data sets for fictional chain stores, which students use in an exercise to understand various statistical concepts.
'I see it as the age of the calculator on steroids,' Kwaramba said.
Kwaramba said he now had more time for student office hours.
Other professors, including David Malan at Harvard University, said the use of AI meant fewer students were coming to office hours for remedial help. Malan, a computer science professor, has integrated a custom AI chatbot into a popular class he teaches on the fundamentals of computer programming. His hundreds of students can turn to it for help with their coding assignments.
Malan has had to tinker with the chatbot to hone its pedagogical approach, so that it offers only guidance and not the full answers. The majority of 500 students surveyed in 2023, the first year it was offered, said they found it helpful.
Rather than spend time on 'more mundane questions about introductory material' during office hours, he and his teaching assistants prioritize interactions with students at weekly lunches and hackathons — 'more memorable moments and experiences,' Malan said.
Katy Pearce, a communication professor at the University of Washington, developed a custom AI chatbot by training it on versions of old assignments that she had graded. It can now give students feedback on their writing that mimics her own at any time, day or night. It has been beneficial for students who are otherwise hesitant to ask for help, she said.
'Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by AI?' she said. 'Yeah, absolutely.'
What happens then to the pipeline of future professors who would come from the ranks of teaching assistants?
'It will absolutely be an issue,' Pearce said.
After filing her complaint at Northeastern, Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.
Rick Arrowood, her professor, was contrite about the episode. Arrowood, who is an adjunct professor and has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the AI search engine Perplexity and an AI presentation generator called Gamma to 'give them a fresh look.' At a glance, he said, the notes and presentations they had generated looked great.
'In hindsight, I wish I would have looked at it more closely,' he said.
He put the materials online for students to review, but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.
The embarrassing situation made him realize, he said, that professors should approach AI with more caution and disclose to students when and how it is used. Northeastern issued a formal AI policy only recently; it requires attribution when AI systems are used and review of the output for 'accuracy and appropriateness.' A Northeastern spokesperson said the school 'embraces the use of artificial intelligence to enhance all aspects of its teaching, research and operations.'
'I'm all about teaching,' Arrowood said. 'If my experience can be something people can learn from, then, OK, that's my happy spot.'
