
Latest news with #KnowBe4Africa

Digital gossip: When WhatsApp groups become cyber-risk zones

The Citizen

5 days ago


93% of African respondents use WhatsApp for work communications, surpassing email and Microsoft Teams. Despite their popularity among employees, informal messaging platforms pose significant risks to organisations' cybersecurity. This is according to Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa.

According to the 2025 KnowBe4 Africa Annual Cybersecurity Survey, 93% of African respondents use WhatsApp for work communications, surpassing email and Microsoft Teams. 'For many organisations, platforms like WhatsApp and Telegram have become integral to workplace communication. Ease of use is what makes them so popular,' explains Collard. 'Particularly on the continent, many people prefer WhatsApp because it's fast, familiar and frictionless. These apps are already on our phones and embedded in our daily routines.'

Convenience at a cost

Collard says that while it feels natural to ping a colleague on WhatsApp, especially when you're trying to get a quick answer, convenience often comes at the cost of control and compliance. In the US, details of a top-secret military attack on Yemen were leaked on the messaging platform Signal earlier this year, with the plan inadvertently shared with a newspaper editor and other civilians, including the Defence Secretary's wife and brother.

'There are multiple layers of risk,' states Collard. 'It's important to remember that WhatsApp wasn't built for internal corporate use, but as a consumer tool. Because of that, it doesn't have the same business-level and privacy controls embedded in it that an enterprise communication tool, such as Microsoft Teams or Slack, would have.'

Data leakage

Collard explains that the biggest risk for organisations is data leakage. 'Accidental or intentional sharing of confidential information, such as client details, financial figures, internal strategies or login credentials, on informal groups can have disastrous consequences. Informal platforms lack the audit trails necessary for compliance with regulations, particularly in industries like finance with strict data-handling requirements,' she says.

Identity theft

Phishing and identity theft are also threats. 'Attackers love platforms where identity verification is weak,' she says, adding that at least 10 people in her personal network have reported being victims of WhatsApp impersonation and takeover scams. 'Once the scammer gains access to the account, in many cases via SIM swaps, the real user is locked out, and the attacker has access to all their previous communications, contacts and files,' she comments. 'They then impersonate the victim to deceive their contacts, often asking for money or even more personal information.'

Mitigating risks

Beyond security, Collard explains, using these channels can also lead to inappropriate communication among employees or the blurring of work-life boundaries, resulting in burnout. For organisations wanting to mitigate these risks, she says, it is important to set up a clear communications strategy. 'First, provide secure alternatives. Don't just tell people what not to use. Make sure that tools like Teams or Slack are easy to access and clearly endorsed.'

It is also vital, Collard adds, to educate employees on why secure communication matters. 'This training should include digital mindfulness principles, such as to pause before sending, think about what you're sharing and with whom, and be alert to emotional triggers like urgency or fear, as these are common tactics in social engineering attacks.'

By introducing approved communication tools, Collard says, organisations can benefit from additional security features, such as audit logs, data protection, access control and integration with other business tools. 'Using approved platforms helps maintain healthy boundaries, so work doesn't creep into every corner of your personal life. It's about digital wellbeing as much as it is about cybersecurity.'

Collard maintains that while informal messaging offers convenience, its unchecked use introduces significant cyber risks. Organisations must move beyond simply acknowledging the problem and proactively implement clear policies, provide secure alternatives, and empower employees with the digital mindfulness needed to navigate these cyber-risk zones safely.
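The data-leakage controls Collard contrasts with consumer messengers can be made concrete with a short sketch. The Python fragment below is purely illustrative and assumes a hypothetical pre-send hook in an approved chat platform; the pattern names and matching rules are invented for this example, and a real data-loss-prevention (DLP) policy would be far more extensive.

```python
import re

# Illustrative patterns only; a production DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sa_id_number": re.compile(r"\b\d{13}\b"),  # South African ID number format
    "credential": re.compile(r"(?i)\b(password|passwd|api[_ ]?key)\b\s*[:=]"),
}

def flag_sensitive(message: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a draft message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message)]

draft = "Client login - password: Summer2025!"
hits = flag_sensitive(draft)
if hits:
    print("Blocked before sending; matched:", ", ".join(hits))
else:
    print("Message sent")
```

The point of the sketch is the placement of the check, before the message leaves the device, which is exactly the kind of control a consumer app like WhatsApp does not expose to the organisation.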

Chats, hacks and cyber traps: When WhatsApp groups become serious cyber-risk zones

IOL News

6 days ago


The cybersecurity risks of informal messaging platforms in the workplace. Image: Supplied

In the ever-evolving landscape of workplace communication, informal messaging platforms like WhatsApp and Telegram have become indispensable tools for many organisations thanks to their convenience and familiarity. However, their widespread popularity among employees raises significant cybersecurity concerns, as highlighted by the 2025 KnowBe4 Africa Annual Cybersecurity Survey. The findings reveal that an overwhelming 93% of African respondents use WhatsApp for work communications, eclipsing traditional email and even Microsoft Teams. But what can organisations do to safeguard themselves against potential data leakage and other evolving threats?

According to Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa, the comfort of using these applications is a driving force behind their adoption in workplaces. 'Particularly on the continent, many people prefer WhatsApp because it's fast, familiar, and frictionless,' she explains. In today's hybrid work environment, where collaboration is key, these platforms provide a quick and effective means for employees to connect. 'It feels natural to ping a colleague on WhatsApp, especially if you're trying to get a fast answer,' she adds. However, the convenience of informal platforms comes with serious risks to control and compliance.

Informal messaging, formal risks

Recent incidents have illuminated the dangers of using these informal channels for professional communications. Notably, WhatsApp messages have been cited as evidence in employment tribunals, indicating the gravity of what can transpire in a seemingly harmless chat. The British bank NatWest has taken the bold step of banning WhatsApp communications among its staff, signalling a growing recognition of the associated perils. Furthermore, the alarming leak of a US military operation's details via Signal, a consumer messaging app, underlines how these platforms can pose threats beyond the corporate realm.

Collard points out that informal messaging apps were not designed with corporate usage in mind and lack the essential privacy and business-level controls found in more secure tools like Microsoft Teams or Slack. 'Organisations face multiple layers of risk,' she warns. Data leakage stands at the forefront: accidental or intentional sharing of sensitive information, such as client details and financial data, threatens corporate integrity and client trust. 'It's also completely beyond the organisation's control, creating a shadow IT problem,' she notes. Alarmingly, the 2025 survey revealed that 80% of respondents rely on personal devices for work, many of which remain unmanaged, creating significant blind spots for organisations.

Additionally, the absence of an audit trail on these platforms can jeopardise compliance with industry-specific regulations. This is particularly relevant in sectors such as finance, where meticulous data handling is obligatory. Coupled with vulnerabilities to phishing and identity theft, where criminals exploit weak identity verification on these platforms, organisations find themselves in precarious territory. As Collard observes, numerous individuals have fallen prey to WhatsApp impersonation scams, with attackers using a compromised account to manipulate the victim's contacts.

This concern extends beyond security threats alone; the informal use of messaging platforms can also lead to inappropriate employee interactions and blur the boundaries between professional and personal life, contributing to workplace burnout. 'A constant stream of messages can disrupt focus and ultimately lower productivity,' says Collard.

Having the right guardrails in place

To mitigate these risks, it is crucial for organisations to establish clear communication strategies. 'First, provide secure alternatives,' Collard advises. Rather than merely prohibiting the use of informal tools, businesses should make secure platforms like Teams or Slack simple to access.

Furthermore, employee education is paramount. Training should cover the significance of secure communication, focusing on digital mindfulness principles: encouraging employees to pause and consider what they are sharing and with whom, and to remain vigilant against emotional triggers such as urgency, which are often exploited in social engineering attacks. Cultivating a culture of psychological safety is essential, so that employees feel empowered to question odd requests, even those that appear to come from higher-ups.

Introducing approved communication tools also brings enhanced security features, such as audit logs, data protection, and access control. These secure platforms foster healthier communication practices, allowing employees to schedule messages and set availability statuses, thereby preserving work-life boundaries and enhancing overall digital wellbeing.

In conclusion, while informal messaging platforms offer enticing convenience, their unchecked use can usher in significant cybersecurity risks. As Collard underscores, organisations must move beyond mere acknowledgment of the issue and proactively implement robust policies, offer secure alternatives, and empower employees with the digital mindfulness necessary to navigate these cyber-risk zones safely.
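The audit-trail gap described above is the kind of feature approved platforms provide out of the box. As a hedged illustration of what such a trail involves, the Python sketch below appends one record per message to an append-only log; the log location and record schema are assumptions for this example, not any vendor's actual format.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "chat_audit.jsonl"  # hypothetical location for the append-only log

def record_message(sender: str, channel: str, text: str) -> None:
    """Append one audit record per message, storing a digest rather than raw text."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": sender,
        "channel": channel,
        # A digest lets auditors verify integrity without retaining content.
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

record_message("a.user@example.com", "#finance", "Q3 figures attached")
```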

Eskom launches AI chatbot 'Alfred' to speed up fault reporting

The Citizen

12-06-2025


Eskom has faced backlash for poor service and slow handling of complaints, which often leaves people in the dark.

Eskom has taken a small step into the future, and perhaps one giant leap, with the launch of Alfred, an innovative artificial intelligence (AI)-driven chatbot designed to enhance and expedite customer service interactions. The parastatal has faced backlash over its lack of service and slow response to complaints, which often leaves people in the dark, angry and frustrated.

What is Alfred for?

Eskom aims to use Alfred to minimise queues and provide a safer, more efficient experience. Alfred allows customers to report power outages, receive instant reference numbers, and get real-time updates on existing faults at any time of day or night.

'Alfred makes your interactions seamless, fast, socially distanced and safe. Utilising artificial intelligence to enhance and speed up customer service, Eskom customers can now report a power loss, get a reference number within seconds and get progress feedback on an existing fault – any time of day or night,' the utility said.

Where is Alfred?

Alfred can be found on Eskom's main page, via the chatbot icon in the top menu, and on WhatsApp at 08600 37566.

'Eskom's Alfred is specifically for customers who can use their account or meter number to interact with the chatbot. Once engaged, Alfred allows you to log a power interruption as it happens and provides a reference number for your report. This makes it easy to track the progress of faults and stay informed without the need for long queues or phone calls,' Eskom said. Users are advised to provide accurate information when seeking assistance.

Chatbots

Meanwhile, The Citizen previously reported that chatbots can help reduce long queues and lengthy telephone calls when resolving queries at your bank, municipality or telephone company. The rise of advanced language models, such as ChatGPT, has ushered in a new era of human-like interactions, in which chatbots can engage in natural conversations, solve complex problems and even exhibit creative thinking. This remarkable progress has opened up a world of possibilities, but it also raises concerns about the reliability and accountability of these systems, Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 Africa, has warned.

Authentication

Collard said that while she likes using chatbots, she always double-checks the original sources when using them for research, to ensure the data is accurate. She added that chatbots handling sensitive transactions, such as banking queries, should authenticate users before accessing or sharing any personal information.
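Collard's authentication point can be sketched in a few lines. The hypothetical Python fragment below is not Eskom's implementation: the account numbers, data store and reference format are invented for illustration, and a production bot would layer on stronger checks such as one-time PINs or registered-device verification.

```python
import secrets

# Hypothetical stand-in for a customer database; not Eskom's actual data model.
KNOWN_ACCOUNTS = {"1234567890", "9876543210"}

def log_fault(account_number: str) -> str:
    """Verify the caller before creating a trackable fault reference."""
    if account_number not in KNOWN_ACCOUNTS:
        # Fail closed: no personal information is disclosed to unverified users.
        raise PermissionError("Account not recognised.")
    # Only after verification does the bot issue a reference number.
    return f"REF-{secrets.token_hex(4).upper()}"

print(log_fault("1234567890"))  # e.g. REF-9F3A21BC
```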

Generative AI Tools Expose Corporate Secrets Through User Prompts

Arabian Post

02-06-2025


A significant portion of employee interactions with generative AI tools is inadvertently leaking sensitive corporate data, posing serious security and compliance risks for organisations worldwide.

A comprehensive analysis by Harmonic Security, involving tens of thousands of prompts submitted to platforms such as ChatGPT, Copilot, Claude, Gemini, and Perplexity, revealed that 8.5% of these interactions contained sensitive information. Of the compromised data:

  • Customer information, including billing details and authentication credentials, accounted for 45.77%.
  • Employee-related data, such as payroll records and personal identifiers, constituted 26.68%.
  • Legal and financial documents accounted for 14.95%.
  • Security-related information, including access keys and internal protocols, made up 6.88%.
  • Proprietary source code comprised 5.64%.

The prevalence of free-tier usage among employees exacerbates the risk. In 2024, 63.8% of ChatGPT users operated on the free tier, with 53.5% of sensitive prompts entered through these accounts. Similar patterns were observed across other platforms, with 58.62% of Gemini users, 75% of Claude users, and 50.48% of Perplexity users on free versions. These free tiers often lack robust security features, increasing the likelihood of data exposure.

Anna Collard, Senior Vice President of Content Strategy & Evangelist at KnowBe4 Africa, highlighted the unintentional nature of these data leaks. She noted that users often underestimate the sensitivity of the information they input into AI platforms, leading to inadvertent disclosures. Collard emphasised that the casual, conversational nature of generative AI tools can lower users' guard, resulting in the sharing of confidential information that, when aggregated, can be exploited by malicious actors for targeted attacks.

The issue is compounded by the lack of comprehensive governance policies within organisations. A study by Dimensional Research and SailPoint found that while 96% of IT professionals acknowledge the security threats posed by autonomous AI agents, only 54% have full visibility into AI agent activities, and a mere 44% have established governance policies. Furthermore, 23% of IT professionals reported instances where AI agents were manipulated into revealing access credentials, and 80% observed unintended actions by these agents, such as accessing unauthorised systems or sharing inappropriate data.

The rapid adoption of generative AI tools, driven by their potential to enhance productivity and innovation, has outpaced the development of adequate security measures. Organisations are now grappling with the challenge of balancing the benefits of AI integration with the imperative to protect sensitive data. Experts advocate stringent oversight mechanisms, including robust access controls and comprehensive user education programmes, to mitigate the risks associated with generative AI usage.
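The kind of pre-submission screening such oversight implies can be illustrated with a minimal sketch. The Python below checks a prompt against a few category patterns loosely mirroring the Harmonic Security breakdown; the regexes and category names are assumptions invented for illustration, not a real product's detection rules.

```python
import re

# Categories loosely mirroring the Harmonic Security breakdown;
# the regexes themselves are invented for illustration.
CATEGORY_PATTERNS = {
    "customer_data": re.compile(r"(?i)\b(invoice|billing|account number)\b"),
    "credentials": re.compile(r"(?i)\b(api[_ ]?key|password|secret)\b"),
    "source_code": re.compile(r"(?m)^\s*(def |class |import )"),
}

def review_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt, if any."""
    return [name for name, rx in CATEGORY_PATTERNS.items() if rx.search(prompt)]

flagged = review_prompt("Summarise this config: api_key = 'sk-live-...'")
if flagged:
    print("Prompt held for review; flagged categories:", ", ".join(flagged))
else:
    print("Prompt forwarded to the AI tool")
```

Screening at the point of submission, rather than auditing afterwards, is what distinguishes this control from the after-the-fact analyses the article describes.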

Why Empowered People Are the Real Cyber Superpower

Zawya

05-05-2025


It's time to retire the tired narrative that employees are the 'weakest link' in cybersecurity. They're not. They're simply the most frequently targeted. And that makes sense – if you're a cybercriminal, why brute-force your way into secure systems when you can just trick a human?

That is why over-relying on technical controls goes wrong. So does treating users as liabilities to be controlled rather than assets to be empowered. Human Risk Management (HRM) is not about shifting blame, but about enabling better decisions at every level. It's a layered, pragmatic strategy that combines technology, culture, and behaviour design to reduce human cyber risk in a sustainable way. And it recognises a critical truth: your people can be your greatest defence – if you equip them well.

The essence of HRM is empowering individuals to make better risk decisions, but it's even more than that. 'With the right combination of tools, culture and security practices, employees become an extension of your security programme, rather than just an increased attack surface,' asserts Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa.

A recent IBM study revealed that more than 90% of all cybersecurity breaches can be traced back to human error, with employees successfully exploited through phishing scams, weak passwords or poor handling of sensitive data. Companies have long seen the upward trend in this threat, thanks to numerous studies, and consequently employees are often judged to be the biggest risk companies need to manage. This perspective, though, denies businesses the opportunity to develop the best defence they could have: empowered, proactive employees at the frontline, not behind it.

Shield users – but also train them through exposure

Of course, the first thing companies should do is protect and shield employees from real threats. Prevention and detection technologies – email gateway filters, endpoint protection, AI-driven analysis – are essential to keeping malicious content from ever reaching users' inboxes or devices. But here's the catch: if users are never exposed to threats, they don't build the muscle to recognise them when they do get through.

Enter the prevalence effect – a cognitive bias which shows that the less frequently someone sees a threat (like a phishing email), the less likely they are to spot it when it finally appears. It's a fascinating and slightly counterintuitive insight: in trying to protect users too much, we may be making them more vulnerable. That's why simulated phishing campaigns and realistic training scenarios are so critical. They provide safe, controlled exposure to common attack tactics – so people can develop the reflexes, pattern recognition, and critical thinking needed to respond wisely in real situations.

Many of today's threats don't just rely on technical vulnerabilities – they exploit human attention. Attackers leverage stress, urgency, and distraction to bypass logic and trigger impulsive actions. Whether it's phishing, smishing, deepfakes, or voice impersonation scams, the aim is the same: manipulate humans into bypassing scrutiny. That's why a foundational part of HRM is building what I call digital mindfulness – the ability to pause, observe, and evaluate before acting. This isn't abstract wellness talk; it's a practical skill that helps people notice deception tactics in real time and stay in critical thinking mode instead of reacting on autopilot.

Tools such as systems-based interventions, prompts, nudges or second-chance reminders are ways to introduce this friction and encourage pausing when it matters. 'Every day, employees face a growing wave of sophisticated, AI-powered attacks designed to exploit human vulnerabilities, not just technical ones. As attackers leverage automation, AI and social engineering at scale, traditional training just isn't effective enough.'

Protection requires layered defence

'Just as businesses manage technical vulnerabilities, they need to manage human risk – through a blend of policy, technology, culture, ongoing education, and personalised interventions,' says Collard. This layered approach extends beyond traditional training. System-based interventions – such as smart prompts, real-time nudges, and in-the-moment coaching – can slow users down at critical decision points, helping them make safer choices. Personalised micro-learning, tailored to an individual's role, risk profile, and behavioural patterns, adds another important layer of defence.

Crucially, Collard emphasises that zero trust shouldn't apply only to systems. 'We need to adopt the same principle with human behaviour,' she explains. 'Never assume awareness. Always verify understanding, and continuously reinforce it.'

To make this concept more accessible, Collard uses the acronym D.E.E.P., a framework for human-centric defence:

  • Defend: Use technology and policy to block as many threats as possible before they reach the user.
  • Educate: Deliver relevant, continuous training, simulations, and real-time coaching to build awareness and decision-making skills.
  • Empower: Foster a culture where employees feel confident to report incidents without fear of blame or repercussions.
  • Protect: Share threat intelligence transparently, and treat mistakes as learning opportunities, not grounds for shame.

'Fear-based security doesn't empower people,' she explains. 'It reinforces the idea that employees are weak points who need to be kept behind the frontline. But with the right support, they can be active defenders – and even your first line of defence.'

Empowered users are part of your security fabric

When people are trained, supported, and mentally prepared – not just lectured at once a year – they become a dynamic extension of your cybersecurity posture. They're not hiding behind the firewall; they are part of it. With attacks growing in scale and sophistication, it's not enough to rely on software alone. Businesses need a human layer that is just as adaptive, resilient, and alert. That means replacing blame culture with a learning culture. It means seeing people not as the problem, but as part of the solution. Because the truth is: the best defence isn't a perfect system. It's a well-prepared person who knows how to respond when something slips through.

'Human behaviour is beautifully complex,' Collard concludes. 'That's why a layered approach to HRM – integrating training, technology, processes and cognitive readiness – is essential. With the right support, employees can shift from being targets to becoming trusted defenders.'

Distributed by APO Group on behalf of KnowBe4.
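The 'second chance' nudges described above can be illustrated with a minimal sketch. This hypothetical Python fragment flags urgency cues before a message is sent; the trigger list is an invented example, and real interventions would be tuned per organisation and delivered inside the messaging client rather than at a terminal prompt.

```python
# Urgency cues commonly exploited in social engineering; an invented example list.
RISK_TRIGGERS = ("urgent", "immediately", "wire transfer", "gift card")

def needs_second_look(message: str) -> bool:
    """Flag messages that contain common urgency or payment-pressure cues."""
    lowered = message.lower()
    return any(trigger in lowered for trigger in RISK_TRIGGERS)

def send_with_nudge(message: str) -> bool:
    """Insert deliberate friction: ask the sender to pause before acting."""
    if needs_second_look(message):
        answer = input("This message contains urgency cues. Send anyway? [y/N] ")
        if answer.strip().lower() != "y":
            print("Held back - take a moment to verify the request out of band.")
            return False
    print("Message sent.")
    return True

send_with_nudge("URGENT: CEO needs gift cards immediately")
```

The deliberate default of 'No' reflects the design principle in the article: friction should favour pausing, with the user free to proceed once they have thought it through.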
