'Now I can do it myself': Gogos learn to use smartphones in 'Gogos with Vuma' digital skills training programme

The Herald | 13-07-2025
In a step towards digital inclusion, 49 elderly residents from Katlehong graduated on Friday from a digital skills training programme aimed at empowering senior citizens with essential tech knowledge.
Hosted by Vumatel in partnership with goGOGOgo (NPC) at Matsediso Primary School in Katlehong on the East Rand, the programme equipped the participants, many of whom had little or no prior experience with technology, with vital digital skills.
It covered fundamental skills such as using smartphones, navigating the internet, sending emails, using social media, accessing online services and practising digital safety.
One of the graduates, Phinet Lekau, 88, speaking at the graduation ceremony, said he was grateful to have been part of the course as he now knew how to use the internet, order supplies from shops and speak to his friends with ease.
'I knew nothing about the internet before this course. My grandchildren would simply call me holding the phone, telling me what to do. But this course has opened my eyes. I can now do all of those things on my own,' said Lekau.
Agnes Letsoge, 82, said she can now use her smartphone.
'I am very happy to have been part of this programme, because the phone was being used by my grandchildren and they would constantly finish my money and airtime. Since I've been taught how to use it by myself, I can finally enjoy my pension money,' Letsoge said.
Before doing the course she couldn't even load airtime, she said.
'Now I can do it myself, they can no longer rob me,' she said.
Vumatel CSI co-ordinator Thando Mokoena said that with most NGOs focusing on youth and children, the company identified a gap in training for the elderly.
'We know that almost 40% of children in South Africa are living with grandparents. We want to revive that thing that they are still here, we still care and we see that they matter. The course we put them through is a standard ICT course, learning how to browse the internet, and how to use a smartphone, as most of them used or knew how to use small phones,' Mokoena said.
The programme, which was part of Vumatel's ongoing commitment to community upliftment and digital inclusion, also fostered intergenerational learning, with younger facilitators helping to mentor the elderly participants throughout their training.
Jane Simmonds, founder and executive director of goGOGOgo (NPC), an NGO aimed at building the capacity of elderly people raising grandchildren, explained that with 9.7-million children in South Africa living in multigenerational households where grandparents play an important part in their upbringing, the organisation works to strengthen the role of older people, build their footprint and amplify their voices. The aim is ultimately to give them the knowledge, information and modern-day practices they need to navigate raising children in the digital age.
'Many of these older people are also raising fourth generation, so they raise their children, their grandchildren and are now looking after their great-grandchildren. We are working at strengthening the role of these women and men, building their footprint, their voice, amplifying their voices, giving the knowledge, information about modern-day practices. When raising children with internet and wi-fi, social media, violence, GBV, so many things that these elderly people have to address when raising children. We provide programmes where we strengthen the role of grandparents raising grandchildren and recognise the importance of this vulnerable, marginalised, often excluded population of people who are the heroes of South Africa,' she said.
Simmonds said the initiative, which started during the Covid-19 lockdown, has funded more than 10 programmes with about 400 beneficiaries.
She said the grandparents are identified through local schools and organisations.
TimesLIVE

Related Articles

Explosion in waste truck: Ekurhuleni warns residents not to dump hot ash in wheelie bins

The Herald | 19 hours ago

The City of Ekurhuleni is urging residents not to dump hot ash into municipal wheelie bins after an explosion inside a waste compactor truck. Waste removal officials were forced to offload a truck mid-route last week after a blast caused by improperly discarded domestic waste. There were no injuries to staff or damage to the vehicle.

'Putting hot ash or explosives inside a 240l waste bin is not only damaging to the waste container but may also pose a danger to the officials picking up the waste bins,' the municipality said.

The city confirmed the cause of the explosion was 'domestic waste that contained smouldering material', prompting renewed calls for residents to handle their rubbish responsibly.

Ekurhuleni said only domestic waste such as the following should be placed in 240l wheelie bins:
  • Polystyrene and polythene
  • Carrier bags
  • Tissues, napkins and kitchen towels
  • Nappies
  • Cat litter, animal faeces and bedding
  • Soiled fast food containers and pizza boxes
  • Oil or fat from food preparation or cooking
  • Cigarette ends
  • Broken crockery or glass
  • Cotton wool and buds
  • Contents of vacuum cleaners
  • Crisp and sweet wrappers (which are often not recyclable)

The city also reminded residents that use of the 240l wheelie bins is mandatory for all households. 'In an instance where a customer generates waste that exceeds the 240l wheelie bin's capacity in a week, they are advised to visit their nearest waste management depot to apply for an extra wheelie bin. An additional waste bin results in an additional monthly charge aligning with the extra waste generated by a customer,' the city said.

Each 240l bin can accommodate the equivalent of three full refuse bags, enough for the average household's waste over seven days. Apart from making the collection process more efficient and uniform, the bins offer practical benefits. The city said: 'The 240l wheelie bins minimise the tearing of refuse bags by rodents and stray dogs, thus providing a clean environment. They also help protect officials from exposure to sharp and harmful objects usually found inside refuse bags.'

TimesLIVE

Reimagining employment in the age of the fourth industrial revolution

Mail & Guardian | a day ago

Labour laws fall short in the fourth industrial revolution. Graphic: John McCann/M&G

The fourth industrial revolution (4IR) has become a byword for transformation. As entire industries and social norms shift beneath our feet because of artificial intelligence (AI), so too does the very concept of employment.

Less than a decade ago, employment structures were largely rigid, characterised by fixed hours, physical workplaces and clearly defined responsibilities. The Covid-19 pandemic catalysed a dramatic break from this paradigm. In 2020, the world was forced into a remote-first mode, revealing the limitations of traditional employment models.

To grapple with the legal implications of this shift, we must first understand how the scope of employment — that is, the range of activities an employee is expected to perform — has evolved. Remote work, hybrid arrangements, platform-based jobs and the gig economy are no longer anomalies; they are becoming the norm. Flexibility and autonomy, once considered perks, are now central pillars of modern work culture.

Remote work, for example, has rendered the concept of a fixed workplace nearly obsolete. Work now occurs in homes, co-working spaces or even across countries, raising questions about jurisdiction, supervision and employer responsibility. Gig and platform-based work presents further complexities: determining whether a worker is an employee or an independent contractor often hinges on vague factors such as control, economic dependence or integration into the business. The rise of AI and automation compounds this further, redefining job descriptions and introducing new tasks that may fall outside traditional employee duties. Additionally, the use of personal devices and remote networks introduces heightened concerns around data security and privacy, issues that conventional employment law is not fully equipped to handle.

These changes have legal implications, particularly concerning the 'course and scope' of employment, a doctrine central to determining employer liability for acts committed by employees. Historically, courts have interpreted this concept through the lens of employer control and the direct furtherance of the employer's business. If employees were deemed to be acting within the scope of their duties, the employer could be held vicariously liable for their actions. But when an employee was engaged in what courts have termed a 'frolic of their own', or personal pursuits unrelated to their job, the employer would not bear responsibility. An important consideration is the abandonment-mismanagement rule, which holds that an employer may still be vicariously liable if an employee, while on a personal frolic, partially performs their work duties, thus effectively committing a simultaneous act and omission.

These distinctions, already intricate, are increasingly difficult to apply in the modern world. A number of essential questions must be considered. How should courts assess the scope of employment when work is asynchronous, occurring across time zones and digital platforms? What happens when employees alternate between professional and personal tasks while working from home? How should algorithmic supervision and AI tools factor into evaluations of employer control?

These questions underscore the need for a more dynamic and context-sensitive framework for interpreting the scope of employment — one that reflects the fluidity of modern work rather than clinging to the static definitions of the past.

Equally urgent is the question of who qualifies as an employee. Traditional labour laws were designed with clear, stable employment relationships in mind. But in the gig economy, where many workers straddle the line between contractor and employee, these laws often fall short. If left unaddressed, this legal ambiguity could allow employers to shirk responsibilities around fair compensation, social protection and worker benefits, undermining the principles of fairness and dignity that labour law seeks to uphold.

Balancing flexibility — a key value for many modern workers — with the employer's need for accountability, productivity and oversight is no small feat. It requires a recalibration of the legal system. As Mpedi aptly observes: 'Historically, the law has been a largely reactive tool. But, in the age of AI, it cannot remain so.' The legal system must become anticipatory, not merely responsive. It must evolve in tandem with the digital transformation it seeks to regulate. This means revisiting — and in many cases, redefining — fundamental legal concepts such as 'employee', 'employer', 'work', 'workplace' and 'scope of employment'. Policymakers must also ensure that the rights and protections afforded to traditional employees extend to gig and platform workers, who increasingly constitute a significant portion of the labour force.

Just as nature adapts to survive, so must the law. As we conclude in our book on AI and the law: 'A meaningful subject in our conversations is the necessity for a flexible legal framework capable of adjusting to the rapid progress of AI advancement. Conventional legal ideas and laws created for a world centred on humans frequently prove inadequate when applied to AI.'

If we are to meet the challenges — and seize the opportunities — of the fourth industrial revolution, we must embrace a Darwinian mindset: adapt or risk obsolescence. The future of employment is already here. The law must now catch up.

Letlhokwa George Mpedi is the vice-chancellor and principal of the University of Johannesburg. Tshilidzi Marwala is the rector of the United Nations University and UN under-secretary-general. The authors' latest book on this subject is Artificial Intelligence and the Law (Palgrave Macmillan, 2024).

Don't believe everything AI tells you: A cautionary tale for academia

Mail & Guardian | 2 days ago

Artificial intelligence can be a powerful ally but only if we cultivate the skills and habits that affirm our commitment to truth, discernment and verification. Graphic: John McCann/M&G

I recently sat in a departmental colloquium where students were defending their research proposals before a panel of academics. Anyone who has gone through this exercise will attest that defending your master's or PhD proposal is, at best, a daunting and nerve-racking experience. The task is simple in theory but difficult in practice. The panel expects the student to prove their proficiency in conducting the research and to clearly show the gap their proposed study addresses. All this, within 10 to 15 minutes, to an audience in the room (mostly online nowadays) but also an audience that is referred to as the theory, policy and practitioner press. In the corner of the student (hopefully) are the watchful eyes and muted voices of their supervisor or supervision team, who themselves stand on trial before their academic peers. The result is a delicate dance, where the spoken word must align seamlessly with the written proposal.

As one student delivered their presentation, my attention was caught by their mention of an article allegedly authored by me, published in the Journal of Business Ethics. A quick glance at their supporting documents confirmed my worst fear. I have never published a paper in that journal. Further to this, I don't even research or write in the field of business ethics.

So, what had happened? The student had fallen victim to what is now widely known as an AI hallucination. In simple terms, they had placed their trust in the output of an artificial intelligence tool, which generated what looked like credible information about their topic and about me but which was fabricated. For the student, the AI-generated information seemed real. It said all the 'right' things and cited the kind of references a proposal defence panel would expect to hear and see. Yet the result was false, misleading and nonsensical. What was missing was a critical process of verification needed long before the student could even be deemed ready to take part in this proposal defence.

What we saw here was a double-layered false confidence. First, the false confidence of the AI itself, which confidently made connections based on user prompts, some factual, others wholly fictional. Second, the false confidence of the human user, who presented AI hallucinations as fact, without adequate scrutiny, driven perhaps by the desire to impress a panel at all costs.

What happened to the student? I choose to reflect on that last, because what happened to us as supervisors was equally instructive and worth reflecting upon. The experience (including the imaginary Journal of Business Ethics paper) became, for me, what sociologist Charles Horton Cooley called a 'looking glass self'. I began to see aspects of myself and my supervision practice through the mirror held up by the student's mistake. I prefer to describe what the student did as a mistake, rather than a punishable offence or, as one leading survey in the United Kingdom called it, a violation of academic integrity.

This incident sparked months of reflection for me. In a sobering way, I realised that my own experience with AI was not so different from the student's. Like our students, we supervisors are also searching for timely information to meet pressing demands. Like our students, we too struggle under the weight of information overload, turning to tools like AI to help us navigate the maze. And, like our students, we must also develop and exercise a critical eye in the face of what may appear to be technological progress.

How did we respond as supervisors? For starters, given the growing popularity of AI among our students, some of us felt the need to use such technology ourselves, to stay abreast of changes in the academic and professional landscape. It meant moving out of our comfort zones into spaces of discomfort, just to keep pace with what is happening. Some supervisors were quick to praise the functionality AI offers. For instance, using an AI tool to analyse large amounts of data in a short space of time was seen as a significant benefit. Others highlighted how AI could help students develop their writing and critical thinking skills, provided that students' own voices remained central to the work, rather than being drowned out by machine-generated content.

We are truly living at the height of a technological moral panic, a time when our ability to exercise our executive functioning skills is being eroded precisely when we need them the most. It is a period in which voices of falsehood are legion, spreading at the mere click of a button, often without verification or reflection. Yet this is also the very moment when we must be most vigilant and rise to the task of cultivating the skills and habits that affirm our commitment to truth, discernment and verification.

Through the experience of watching students present their research proposals, we came to realise that our struggles are, in fact, the same; they just take different forms. As supervisors in our department, we embarked on a month-long dialogue with our students, acknowledging and praising the benefits of AI while also cautioning them about the dangers of AI hallucinations. Our hope is that this process proves beneficial for everyone involved: students, supervisors, the university and ultimately society at large, achieving success rooted in both innovation and integrity. AI can be a powerful ally but only if we, both students and supervisors, treat its outputs as a starting point for inquiry, not the final word.

Professor Willie Chinyamurindi is in the Department of Applied Management, Administration and Ethical Leadership at the University of Fort Hare. He writes in his personal capacity.
