
'They Blamed the Students—But It Was Us': Professors Caught Using ChatGPT as Secret Weapon While Cracking Down on Classroom Cheating
Teachers are increasingly relying on digital assistants to handle educational tasks, reshaping traditional teaching methods.
🤖 The use of AI by educators often goes unmentioned, leading to student concerns over transparency and trust in the classroom.
⚖️ Universities are crafting ethical frameworks to manage AI's role in education, promoting disclosure and human oversight.
🔍 Students are becoming adept at identifying AI-generated content, highlighting the need for honest communication about its use.
In recent years, the educational landscape has undergone a profound transformation as teachers increasingly rely on digital assistants to aid in their duties. This silent shift is reshaping the very essence of knowledge transmission. What was once a straightforward exchange of wisdom between teacher and student is now mediated by artificial intelligence (AI), raising questions about transparency and trust. While the integration of AI into education may seem a natural progression in a tech-driven world, it becomes contentious when its use remains concealed from students, challenging the fundamental trust that underpins educational relationships.

The Silent Automation of Teaching Practices
The use of artificial intelligence in education is not solely a tool for students; teachers, too, are increasingly harnessing its capabilities to streamline their workloads. From creating instructional materials to crafting quizzes and providing personalized feedback, AI's presence is growing in the classroom. Notably, David Malan at Harvard has developed a chatbot to assist in his computer science course, while Katy Pearce at the University of Washington uses AI trained on her evaluation criteria to help students progress even in her absence.
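To make that workflow concrete, here is a minimal sketch of the kind of rubric-based feedback assistant described above. It assumes an OpenAI-style chat API; the model name, rubric text, and `draft_feedback` helper are illustrative assumptions, not the professors' actual implementations, and the output is intended only as a draft for a human instructor to review.

```python
# Hypothetical sketch of a rubric-based feedback assistant, in the spirit of the
# tools described above. Rubric, model, and function names are illustrative
# assumptions, not the professors' actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """
- Thesis is clearly stated in the first paragraph.
- Each claim is supported by at least one cited source.
- Conclusion addresses counterarguments.
"""

def draft_feedback(submission: str) -> str:
    """Return draft feedback on a student submission, judged against the rubric.

    The result is a draft only: a human instructor reviews and edits it before
    it reaches the student, matching the human-oversight requirement discussed
    later in the article.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant. Evaluate the submission "
                        "strictly against the rubric and suggest concrete improvements."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nSubmission:\n{submission}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_feedback("AI is changing education because..."))
```

The key design choice in such a setup is keeping the instructor in the loop before any feedback is released, which is precisely the form of oversight the ethical frameworks discussed below call for.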
Despite these advancements, some educators choose to keep their use of AI under wraps. Overwhelmed by grading and time constraints, they delegate certain tasks to AI without disclosure. Rick Arrowood, a professor at Northeastern University, admitted to using generative tools for creating his materials without thoroughly reviewing them or informing his students. Reflecting on this, he expressed regret over his lack of transparency, wishing he had better managed the practice.
AI Use in Education Sparks Student Tensions
The non-transparent use of AI by educators has led to growing unease among students. Many notice the impersonal style and repetitive vocabulary of AI-generated content and have become adept at spotting artificial text. This has led to cases like that of Ella Stapleton, a Northeastern student who discovered a prompt addressed to ChatGPT left in her course materials. She filed a complaint and demanded a refund of her tuition fees.
On platforms like Rate My Professors, criticism of standardized and ill-suited content is mounting, with students perceiving such materials as incompatible with quality education. This sense of betrayal is heightened when students are prohibited from using the same tools. For many, teachers' reliance on AI signifies injustice and hypocrisy, fueling further discontent.
Ethical Frameworks for AI Use in Education
In response to these tensions, several universities are establishing regulatory frameworks to govern AI's role in education. The University of California, Berkeley, for instance, mandates explicit disclosure of AI-generated content, coupled with human verification. French institutions are following suit, acknowledging that a complete ban is no longer feasible.
An investigation by Tyton Partners, cited by the New York Times, found that nearly one in three professors regularly uses AI, yet few disclose this to their students. This disparity fuels conflict, as emphasized by Paul Shovlin from Ohio University. He argues that the tool itself is not the issue, but rather how it is integrated. Teachers still play a crucial role as human interlocutors capable of interpretation, evaluation, and dialogue.
Some educators are choosing to embrace transparency by explaining and regulating their AI use, using it to enhance interactions. Though still a minority, this approach could pave the way for reconciling pedagogical innovation with restored trust.
As we navigate this evolving educational landscape, the balance between technology and transparency remains a pressing concern. How can educators and institutions work together to ensure that the integration of AI enhances rather than hinders the educational experience?
