Rare rhino armoured vehicle turns heads at VUT

The Citizen | 3 days ago
VANDERBIJLPARK – Anyone who attended Rhino Week at the Vaal University of Technology's Isak Steyl Stadium over the past two weeks likely noticed the armoured vehicle on display and wondered about its identity and origins. Coincidentally sharing its name with the event, the vehicle is known as the Rhino. ARMSCOR developed the Rhino Mine-Resistant Ambush Protected (MRAP) vehicle in the early 1980s following a private needs assessment, shortly after the conclusion of Project Sireb, which evaluated the feasibility of replacing the Buffel MRAP.
Only twenty Rhino vehicles were manufactured for the South African Air Force. Based on the Samil 20 chassis, the Rhino was designed specifically for troop transport and patrol duties around air force bases.
It has a crew of seven: a driver and co-driver in the front, and five troops in the rear compartment. The vehicle features a fully enclosed V-shaped hull, offering excellent protection against landmines, while its armoured hull and ballistic glass windows provide effective defence against small arms fire and light artillery fragments. Key features include two roof hatches, firing ports beneath each window in the troop compartment, and provision for a roof-mounted machine gun.
The driver's cabin is accessible via side doors on both sides, reached by a short ladder, while the troop compartment is entered through a small door at the rear left of the vehicle. A spare wheel is mounted at the rear.
Although originally intended for military use, the Rhino went on to prove its worth in a range of international humanitarian demining operations.
It consistently demonstrated its reliability and adaptability as a platform in both conflict zones and peacetime missions.
With the adoption of the Mamba MRAP family by the South African military in the early 1990s, the Rhino was gradually withdrawn from service and subsequently sold to the private security sector.
The Rhino stands as a testament to South African engineering excellence and tactical innovation.
* Dewald Venter is a professor at the Vaal University of Technology.

Related Articles

SA lawyer forces US tech giant to name site users

eNCA | 17 hours ago

JOHANNESBURG - In a story similar to the Biblical David and Goliath, a South African lawyer and owner of the Digital Law Company has forced a US tech giant to disclose information about explicit content. The lawyer, Emma Sadlier, made WhatsApp, Facebook, and Instagram owner Meta Platforms disclose the information of users posting explicit content of South African schoolchildren. The Digital Law Company discovered over 1,000 explicit posts of children, including videos and photos, published by 30 Meta accounts in just a few days.

Turning off AI detection software is the right call for SA universities

Daily Maverick | 20 hours ago

Universities across South Africa are abandoning problematic artificial intelligence detection tools that have created a climate of suspicion. The recently announced University of Cape Town decision to disable Turnitin's AI detection feature is to be welcomed – and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.

The problems with Turnitin's AI detector extend far beyond technical glitches. The software's notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty. Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.

A system built on flawed logic

As Rebecca Davis has pointed out in Daily Maverick: detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin's AI detector doesn't identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns associated with AI-generated text. The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as 'delve into' or 'crucial' – a writing preference that has nothing to do with artificial intelligence.

This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated. One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero. Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own. The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context. The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.

The absurdity exposed

The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, another AI detection service, hoping to clear her name. The results were revealing: the programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations. Even more telling, both systems flagged the professor's own assignment questions as AI-generated, though the Turnitin software flagged Question 2 while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers. This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor's own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?
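Turnitin does not disclose how its AI detector works, but the article's point about stylistic pattern-matching can be illustrated with a deliberately naive sketch. The short Python snippet below is hypothetical and invented for illustration only: it "scores" text by counting stock phrases that are popularly (and unreliably) associated with AI-generated prose, and a perfectly ordinary human sentence comes back looking suspicious.

```python
# Purely illustrative toy heuristic. Turnitin's real classifier is proprietary
# and far more sophisticated; this sketch only shows why scoring surface-level
# stylistic markers (word choice, stock phrases) misfires on human writing.

SUSPECT_MARKERS = [
    "delve into",
    "crucial",
    "moreover",
    "furthermore",
    "in today's fast-paced world",
]

def naive_ai_score(text: str) -> float:
    """Return the fraction of 'suspect' markers present in the text."""
    lowered = text.lower()
    hits = sum(1 for marker in SUSPECT_MARKERS if marker in lowered)
    return hits / len(SUSPECT_MARKERS)

# An ordinary human sentence trips the heuristic simply because of word choice.
student_sentence = (
    "It is crucial that we delve into the archival sources; "
    "moreover, the oral histories matter too."
)
print(f"Flagged as AI-like: {naive_ai_score(student_sentence):.0%}")  # 60%
```

Real detectors rely on statistical language models rather than keyword lists, but the underlying weakness is the same: they measure how text looks, not how it was produced.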
Gaming the system

While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning. A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin's terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.

The contrast with Turnitin's similarity detection tool is important. While that feature has its own problems, primarily academics assuming that the percentage similarity is an indicator of the amount of plagiarism, at least it provides transparent, visible comparisons that students can review and make sense of. The AI detection feature operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.

Undermining educational relationships

Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers. Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.

The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements. This violates basic principles of fairness and due process that should govern any academic integrity system.

A path forward

UCT's decision to disable Turnitin's AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. Other institutions following suit demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.

This doesn't mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge. The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology. Detection should give way to education, suspicion to support and surveillance to guidance.

When we position students as already guilty, we shouldn't be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises. The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.
UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around. As more institutions grapple with AI's impact on higher education, South Africa's approach offers a valuable model: one that chooses trust over surveillance, education over detection and relationships over algorithms. In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty. The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM

Sioux McKenna is professor of higher education studies at Rhodes University. Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.

Nissan Magnite earns five-star Global NCAP safety rating

TimesLIVE | a day ago

The updated Nissan Magnite has become the first vehicle in South Africa to earn a five-star Global NCAP safety rating under the programme's latest testing protocols. This represents a significant improvement from its earlier two-star rating.

Built in India, the Magnite was originally rated just two stars for both adult and child occupant protection, with only two airbags offered as standard. Since then, Nissan has introduced a series of safety enhancements to the model, including six airbags, electronic stability control (ESC), improved seat belt systems, pedestrian protection and three-point seat belts for all passengers.

After the upgrades, the vehicle was submitted for voluntary retesting and initially achieved a four-star rating. Nissan continued to refine the car and submitted it again for a second round of testing. This latest version secured five stars for adult occupant protection and three stars for child occupants. The tests were conducted under Global NCAP's latest protocols, which include assessments of frontal and side impacts, ESC performance, pedestrian protection and side pole impact protection — all essential for the highest scores.

'Nissan's leap from a two to five-star safety rating for the Magnite is more than just an achievement — it's a vital step towards safer cars and roads for everyone,' said Bobby Ramagwede, CEO of the Automobile Association of South Africa. 'This sends a strong message to the entire industry: investing in vehicle safety isn't just good practice, it saves lives. The AA will continue to advocate for safer vehicles and empower consumers with trusted, transparent safety information.'

He added: 'The association is very happy that Nissan South Africa's best-selling passenger vehicle has attained the much sought after Global NCAP five-star rating and that Nissan has taken the safety of their South African customers to heart with the facelifted Magnite.'
