Is AI cheating on the rise? Few cases reported by S'pore universities, but experts warn of risks
All six universities in Singapore generally allow students to use generative AI to varying degrees, depending on the module or coursework. PHOTO: UNSPLASH
SINGAPORE - The number of students caught plagiarising and passing off content generated by artificial intelligence as their own work remains low, the public universities said, following a recent case at Nanyang Technological University (NTU).
But professors here are watching closely for signs of misuse, warning that over-reliance on AI could undermine learning. Some are calling for more creative forms of assessment.
Their comments follow NTU's decision to award three students zero marks for an assignment after discovering they had used generative AI tools in their work.
The move drew attention after one of the students posted about it on Reddit, sparking debate about the growing role of AI in education and its impact on academic integrity.
All six universities here generally allow students to use generative AI to varying degrees, depending on the module or coursework. Students are required to declare when and how they use such tools, to uphold academic integrity.
In the past three years, Singapore Management University (SMU) recorded 'less than a handful' of cases of AI-related academic misconduct, it said, without giving specific numbers. Similarly, the Singapore University of Technology and Design (SUTD) has encountered a 'handful of academic integrity cases, primarily involving plagiarism' over the same period.
At Singapore University of Social Sciences (SUSS), confirmed cases of academic dishonesty involving generative AI remain low, but it has seen a 'slight uptick' in such reports, partly due to heightened faculty vigilance and use of detection tools.
The other universities - the National University of Singapore (NUS), the Singapore Institute of Technology (SIT) and NTU - did not respond to queries about whether more students have been caught flouting the rules using AI.
Recognising that AI technologies are here to stay, universities said they are exploring better ways to integrate such tools meaningfully and critically into learning.
Generative AI refers to technologies that can produce human-like text, images or other content based on prompts. Educational institutions worldwide have been grappling with balancing its challenges and opportunities while maintaining academic integrity.
Faculty members here have flexibility to decide how AI can be used in their courses, as long as their decisions align with university-wide policies.
NUS allows AI use for take-home assignments if properly attributed, although instructors have to design complex tasks to prevent over-reliance. For modules focused on core skills, assessments may be done in person or designed to go beyond AI's capabilities.
At SMU, instructors inform students which AI tools are allowed, and guide them on their use, typically for idea generation or research-heavy projects outside exams.
SIT has reviewed assessments and trained staff to manage AI use, encouraging it in advanced courses like coding but restricting it in foundational ones, while SUTD has integrated generative AI into its design thinking curriculum to foster higher-order thinking. The idea is to teach students when AI should be used as a tool or a partner, and when it should be avoided.
Universities said that students must ensure originality and credibility in their work.
The allure of generative AI
Students interviewed by ST, who requested anonymity, said AI use is widespread among their peers.
'Unfortunately, I think that (using generative AI) is the norm nowadays. It has become so rare to see people think on their own first before sending their assignments into ChatGPT,' said a 21-year-old fourth-year law student from SUSS.
Still, most students said they have a sense of when it is appropriate to use AI and when it is not. Several said they use it mainly for brainstorming and collating research, and sometimes for writing.
A 20-year-old Year 4 economics student from NTU said he does not see AI as anything more than a 'really smart study buddy' that helps him clarify difficult concepts, similar to how one would consult a professor.
A third-year SMU political science student, 22, said she uses AI to fix her grammar before submitting her essays, but draws the line at copying essays wholesale from ChatGPT.
But some students said they would turn to AI to quickly complete general modules outside their specialisations that they feel are not worth their personal effort.
AI may improve efficiency, but there is a 'level of wisdom that needs to come with that usage', said a third-year public policy and global affairs student from NTU.
The 21-year-old said she would not use ChatGPT for tasks that require her personal opinion but would use it 'judiciously' to complete administrative matters.
Other students said they avoid relying too much on AI, as they take pride in their work.
A 23-year-old Year 3 computer science student from SUTD said he wants to remain 'self-disciplined' in his use of AI because he realised he needed to learn from his mistakes in order to improve academically.
More creativity needed in testing
Academics say universities must bring AI use into the open and rethink assessments to stay ahead.
SMU Associate Professor of Marketing Education Seshan Ramaswami embraces AI tools, but with caveats. In recent terms, he has encouraged students to use AI, provided they submit a full account of how tools were used and critique their outputs.
He also uses AI tools to create practice quizzes, and a chatbot that allows students to ask questions about his class materials. But he tells them not to 'blindly trust' its responses.
The real danger lies in uncritical AI use, he added, which can weaken students' judgment, clarity in writing or personal integrity.
Dr Ramaswami said he is 'going to have to be even more thoughtful about the design of course assessments and pedagogy'.
He may explore methods like 'hyper-local' assignments based on Singapore-specific contexts, oral examinations to test depth of understanding, and in-class discussions where devices are put away and ideas are exchanged in real time.
Even long-standing assessment formats like individual essays may need to be reconsidered, he said.
Dr Thijs Willems, a research fellow at the Lee Kuan Yew Centre for Innovative Cities at SUTD, said that while essays, presentations and prototypes still matter, these are no longer the sole markers of achievement.
More attention needs to be paid to the originality of ideas, the sophistication with which AI is prompted and questioned, and the human judgment used to reshape machine output into something unexpected, he said.
These qualities 'surface most clearly in reflective journals, prompt logs, design diaries, spontaneous oral critiques, and peer feedback sessions', he added.
SUSS Associate Professor Wang Yue, head of the Doctor of Business Administration Programme, said undergraduates should already have basic cognitive skills and foundational knowledge.
'AI frees us to focus on higher-order thinking like developing insights and exercising wisdom,' she said, adding that restricting AI would be counterproductive to preparing students for the workplace.
Critical thinking needed more than ever
The same speed that makes AI exciting is also its potential hazard, said Dr Willems, warning that learners who treat it as a 'one-click answer engine' risk accepting mediocre work and weakening their own understanding.
The key is to focus on the quality of human and AI interaction, he said. 'Once learners adopt the stance of investigators of their own practice, their critical engagement with both technology and subject matter deepens.'
Dr Jean Liu, director at the Centre for Evidence and Implementation and adjunct assistant professor at the NUS Yong Loo Lin School of Medicine, said that while AI offers major advantages for learning, universities must clearly define the line between acceptable use and academic dishonesty.
'AI can act as a tutor who provides personalised explanations and feedback… or function as an experienced mentor or thought partner for projects,' she said.
But the line is drawn when students allow AI to do the work wholesale. 'In an earlier generation, a student might pay a ghost writer to complete an essay,' Dr Liu said. 'Submitting a ChatGPT essay falls into the same category and should be banned.'
'In general, it's best practice to come to an AI platform with ideas on the table, not to have AI do all the work. Helping students find this balance should be a key goal of educators.'
Universities must be upfront about what kinds of AI use are acceptable for students, and provide clearer guidance, she added.
Dr Jason Tan, associate professor of policy, curriculum and leadership at the National Institute of Education, said the rise of AI is testing students' integrity and sense of responsibility.
Over-reliance on AI tools could also erode critical thinking, he added.
'Students have to decide for themselves what they want to get out of their university education,' he said.