
Turning off AI detection software is the right call for SA universities
The recently announced University of Cape Town decision to disable Turnitin's AI detection feature is to be welcomed – and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.
The problems with Turnitin's AI detector extend far beyond technical glitches. The software's notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty.
Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.
A system built on flawed logic
As Rebecca Davis has pointed out in Daily Maverick, detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin's AI detector doesn't identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns that are statistically associated with AI-generated text.
The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as 'delve into' or 'crucial' – writing preferences that have nothing to do with artificial intelligence.
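To see why this approach is so error-prone, consider a deliberately crude sketch of a marker-based 'detector'. This is a hypothetical illustration only: the marker list and scoring below are invented for the example and bear no relation to Turnitin's actual model.

# A deliberately naive, hypothetical 'AI-likelihood' score built from surface
# stylistic markers. This is not Turnitin's algorithm; it only illustrates why
# marker-based detection misfires on ordinary human prose.
MARKERS = ["delve into", "crucial", "moreover", "furthermore"]

def naive_ai_score(text: str) -> float:
    """Return a crude 0-1 score from counts of 'suspicious' surface features."""
    lowered = text.lower()
    hits = sum(lowered.count(marker) for marker in MARKERS)
    hits += text.count("\u2014")  # treat em-dashes as another 'suspicious' feature
    words = max(len(text.split()), 1)
    return min(1.0, 25 * hits / words)  # arbitrary scaling, for illustration only

# An entirely human sentence that happens to use the flagged features:
sample = "It is crucial that we delve into the archive before drawing conclusions."
print(f"'AI-likelihood': {naive_ai_score(sample):.2f}")  # prints 1.00 despite human authorship

Any writer who simply likes these phrases scores as 'likely AI'; no list of surface markers, however long, can separate a person's stylistic habits from a machine's output.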
This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated.
One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero.
Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own.
The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context.
The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.
The absurdity exposed
The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, hoping to clear her name. The results were revealing: the two programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations.
Even more telling, each system flagged one of the professor's own assignment questions as AI-generated – Turnitin flagged Question 2, while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers.
This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor's own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?
Gaming the system
While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning.
A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin's terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.
The contrast with Turnitin's similarity detection tool is important. While that feature has its own problems – chiefly academics treating the similarity percentage as a measure of how much plagiarism has occurred – at least it provides transparent, visible comparisons that students can review and make sense of.
The AI detection feature operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.
Undermining educational relationships
Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers.
Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.
The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements.
This violates basic principles of fairness and due process that should govern any academic integrity system.
A path forward
UCT's decision to disable Turnitin's AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. Other institutions that follow suit would demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.
This doesn't mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge.
The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology.
Detection should give way to education, suspicion to support and surveillance to guidance. When we position students as already guilty, we shouldn't be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises.
The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.
UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around.
As more institutions grapple with AI's impact on higher education, South Africa's approach offers a valuable model: one that chooses trust over surveillance, education over detection and relationships over algorithms.
In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty.
The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM
Sioux McKenna is professor of higher education studies at Rhodes University.
Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.