
Opinion: We're losing the plot on AI in universities
A student at Nanyang Technological University said in a Reddit post that she had used a digital tool to alphabetise the citations for a term paper. When the paper was flagged for typos, she was accused of breaking the rules on generative AI use for the assignment. The case snowballed when two more students came forward with similar complaints, one alleging that she was penalised for using ChatGPT to help with initial research, even though she says she did not use the bot to draft the essay.
The school, which publicly states that it embraces AI for learning, initially defended its zero-tolerance stance in this case in statements to local media. But internet users rallied around the original Reddit poster, and rejoiced at her update that she had won an appeal to have the academic-fraud label removed from her transcript.
It may sound like a run-of-the-mill university dispute. But there's a reason the saga went so viral, garnering thousands of upvotes and heated opinions from online commentators. It laid bare the strange new world we've found ourselves in, as students and faculty rush to work out how AI should or shouldn't be used in universities.
It's a global conundrum, but the debate has especially roiled Asia. Stereotypes of math nerds and tiger moms aside, a rigorous focus on tertiary studies is often credited for the region's economic rise. The importance of education – and long hours of studying – is instilled from the earliest age. So how does this change in the AI era? The reality is that nobody has the answer yet.
Despite the promises from edtech leaders that we're on the cusp of 'the biggest positive transformation that education has ever seen,' the data on academic outcomes hasn't kept pace with the technology's adoption. There are no long-term studies on how AI tools impact learning and cognitive functions – and viral headlines that it could make us lazy and dumb only add to the anxiety. Meanwhile, the race to not be left behind in implementing the technology risks turning an entire generation of developing minds into guinea pigs.
For educators navigating this moment, the answer is not to turn a blind eye. Even if some teachers discourage the use of AI, it has become almost unavoidable for scholars doing research in the internet age. Most Google searches now lead with automated summaries. Scrolling through these should not count as academic dishonesty. An informal survey of 500 Singaporean students from secondary school through university conducted by a local news outlet this year found that 84% were using products like ChatGPT for homework on a weekly basis.
In China, many universities are turning to AI cheating detectors, even though the technology is imperfect. Some students are reporting on social media that they have to dumb down their writing to pass these tests, or shell out cash for the detection tools themselves to make sure their papers will pass before submitting them.
It doesn't have to be this way. The chaotic moment of transition has put new onus on educators to adapt, and to shift the focus to the learning process as much as the final results, Yeow Meng Chee, the provost and chief academic and innovation officer at the Singapore University of Technology and Design, tells me. This doesn't mean villainizing AI, but treating it as a tool, and ensuring a student understands how they arrived at their final conclusion even if they used technology. This process also helps ensure that AI outputs, which remain imperfect and prone to hallucinations (or typos), are checked and understood.
Ultimately, professors who make the biggest difference aren't those who improve exam scores but who build trust, teach empathy and instill confidence in students to solve complex problems. The most important parts of learning still can't be optimised by a machine.
The Singapore saga shows how everyone is on edge, and it isn't even clear whether a reference-sorting website counts as a generative AI tool. It also exposed another irony: Saving time on a tedious task would likely be welcomed when the student enters the workforce – if the technology hasn't already taken her entry-level job. AI literacy is fast becoming a must-have in the labor market, and universities that ignore it do a disservice to cohorts entering the real world.
We're still a few years away from understanding the full impact of AI on teaching and how it can best be used in higher education. But let's not miss the forest for the trees as we figure it out. – Bloomberg
(Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.)

Related Articles


The Star
6 hours ago
‘It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people
For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says.

D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and a challenge. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life. 'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'

Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions. AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — in particular, some worry about over-reliance — many neurodivergent users now see it as a lifeline.

How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude. Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend.
She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.

That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.

The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users surpassed 400 million, of which at least 2 million are paying business users.

But for neurodivergent users, these aren't just tools of convenience; some AI-powered chatbots are now being created with the neurodivergent community in mind. Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism — and he received the same diagnosis himself — that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings. 'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived.
She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism. 'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app. Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.

As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive,' says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent. Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'

Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.

A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without it.
'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'

While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside of their comfort zone. 'I think it's harder for your AI companion to push you,' she says.

But for users who have come to rely on this technology, such fears are academic. 'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation. 'As somebody who's struggled with a disability my whole life,' she says, 'I need this.' (Editing by Yasmeen Serhan and Sharon Singleton)


The Sun
7 hours ago
China proposes global AI cooperation body amid US tech rivalry
SHANGHAI: China said on Saturday it wanted to create an organisation to foster global cooperation on artificial intelligence, positioning itself as an alternative to the U.S. as the two vie for influence over the transformative technology.

China wants to help coordinate global efforts to regulate fast-evolving AI technology and share the country's advances, Premier Li Qiang told the annual World Artificial Intelligence Conference in Shanghai. President Donald Trump's administration on Wednesday released an AI blueprint aiming to vastly expand U.S. AI exports to allies in a bid to maintain the American edge over China in the critical technology.

Li did not name the United States but appeared to refer to Washington's efforts to stymie China's advances in AI, warning that the technology risked becoming the 'exclusive game' of a few countries and companies. China wants AI to be openly shared and for all countries and companies to have equal rights to use it, Li said, adding that Beijing was willing to share its development experience and products with other countries, particularly the 'Global South'. The Global South refers to developing, emerging or lower-income countries, mostly in the southern hemisphere.

How to regulate AI's growing risks was another concern, Li said, adding that bottlenecks included an insufficient supply of AI chips and restrictions on talent exchange. 'Overall global AI governance is still fragmented. Countries have great differences particularly in terms of areas such as regulatory concepts, institutional rules,' he said. 'We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible.'

SHANGHAI HEADQUARTERS

The three-day Shanghai conference brings together industry leaders and policymakers at a time of escalating technological competition between China and the United States - the world's two largest economies - with AI emerging as a key battleground.
Washington has imposed export restrictions on advanced technology to China, including the most high-end AI chips made by companies such as Nvidia and chipmaking equipment, citing concerns that the technology could enhance China's military capabilities. Despite these restrictions, China has continued making AI breakthroughs that have drawn close scrutiny from U.S. officials.

China's Vice Foreign Minister Ma Zhaoxu told a roundtable of representatives from over 30 countries, including Russia, South Africa, Qatar, South Korea and Germany, that China wanted the organisation to promote pragmatic cooperation in AI and was considering putting its headquarters in Shanghai. The foreign ministry released online an action plan for global AI governance, inviting governments, international organisations, enterprises and research institutions to work together and promote international exchanges, including through a cross-border open-source community.

The government-sponsored AI conference typically attracts major industry players, government officials, researchers and investors. Saturday's speakers included Anne Bouverot, the French president's special envoy for AI, computer scientist Geoffrey Hinton, known as 'the Godfather of AI', and former Google CEO Eric Schmidt. Tesla CEO Elon Musk, who has in past years regularly appeared at the opening ceremony in person or by video, did not speak this year.

Besides forums, the conference features exhibitions where companies demonstrate their latest innovations. This year, more than 800 companies are participating, showcasing more than 3,000 high-tech products, 40 large language models, 50 AI-powered devices and 60 intelligent robots, according to organisers. The exhibition features predominantly Chinese companies, including tech giants Huawei and Alibaba and startups such as humanoid robot maker Unitree. Western participants include Tesla, Alphabet and Amazon. - Reuters


The Sun
8 hours ago
Digital Ministry drafting AI bill to combat deepfake videos
PETALING JAYA: The Digital Ministry is drafting an Artificial Intelligence (AI) bill to regulate the fast-evolving technology and combat deepfake videos as well as AI-related crimes, said Minister Gobind Singh Deo.

'The legislative draft is currently being refined through a comprehensive and inclusive engagement process involving various stakeholders. This approach is taken to ensure that the forthcoming legislation is holistic, balanced, and aligned with current technological developments as well as the nation's needs,' he said in a written parliamentary reply to Wong Chen (PH-Subang).

Gobind added that the government has introduced the National AI Ethics and Governance Guidelines (AIGE), which outline seven core principles, including fairness, transparency, accountability and privacy protection, that must be observed by AI developers and users.

Last month, Minister in the Prime Minister's Department (Law and Institutional Reform) Datuk Seri Azalina Othman Said said Malaysia is studying the development of AI legislation to address legal complexities in the digital age. She added that she has formally written to Gobind, proposing a meeting between the Legal Affairs Division (BHEUU) and the Digital Ministry to initiate discussions on drafting new AI laws. According to Azalina, Malaysia at present has no specific laws focused on AI because, unlike traditional technologies, AI operates on an entirely different platform.