
Latest news with #SABRIC

AI fuels surge in sophisticated cybercrime

IOL News

17-07-2025

  • Business
  • IOL News

AI fuels surge in sophisticated cybercrime

Cybercrime poses an unprecedented threat to businesses. Image: File picture.

Artificial intelligence is ushering in a new era of cybercrime, with AI-powered scams increasingly targeting individuals and financial systems. In recent months, experts have reported a surge in sophisticated fraud schemes that use AI to mimic real people with startling accuracy, raising concerns about security, privacy, and the erosion of public trust in digital communications.

Sameer Kumandan, Managing Director of SearchWorks, says one of the key strategies being used is the creation of highly convincing fake images, videos, and audio, commonly referred to as 'deepfakes'. He explains that these are often used to impersonate real individuals and spread misleading or false information. More concerning is that, while early deepfakes were often unconvincing, recent advancements have made them increasingly difficult to detect, making it easier for bad actors to mislead, manipulate, and defraud.

Kumandan recounts a recent incident in which criminals impersonated Risto Ketola, Momentum Group's Financial Director, on WhatsApp. They used Ketola's LinkedIn profile photo to create a closed WhatsApp group, pretending to be him. Although this particular case did not involve AI-generated imagery or video, it highlighted the risks associated with the misuse of a person's likeness for malicious purposes.

'Deepfake-driven cybercrime has escalated to the point where the South African Banking Risk Information Centre (SABRIC) recently issued a strong warning about the growing threat of AI-enabled fraud,' said Kumandan. 'SABRIC specifically highlighted the use of deepfakes and voice cloning to impersonate bank officials, promote fake investment schemes, and fabricate endorsements from well-known public figures. This emerging threat not only compromises the integrity of the financial sector but also erodes customer trust and confidence in digital interactions.'

He added that fraudsters are increasingly using AI to bypass security measures such as automated onboarding systems and Know Your Customer (KYC) checks, allowing them to create accounts and access services under false identities.

'From a business email compromise (BEC) standpoint, attackers are now incorporating deepfake audio and video of senior executives into phishing attempts, convincing employees to release funds or disclose sensitive information.' Social engineering attacks have also become more sophisticated, with AI being used to analyse and replicate communication styles based on publicly available information, making scams appear more authentic. 'In some cases, AI is used to generate entirely synthetic identities, combining real and fabricated data to create fake personas capable of applying for credit, laundering money, or committing large-scale financial fraud.'

Kumandan warns that many legacy fraud detection tools aren't equipped to identify fake audio or video, making deepfake scams even harder to detect. 'In response, financial institutions must urgently evolve their fraud prevention strategies to stay ahead of these sophisticated threats. Regulators expect institutions to keep up with the latest cybercrime trends, and failing to detect deepfake-based fraud can result in compliance failures, fines, and legal action.

'Furthermore, financial institutions must consider the broader impact of these risks on customer trust. As awareness of deepfake threats grows, it is understandable that clients may begin to question the authenticity of video calls, digital signatures, and other remote interactions. This erosion of confidence has the potential to hinder digital transformation initiatives and may even prompt some customers to disengage from digital platforms altogether.'

Kumandan says that through VOCA, an application designed to streamline compliance processes for accountable institutions, SearchWorks provides financial institutions with verified data and intelligent processes to reduce fraud exposure and ensure regulatory compliance. 'By leveraging real-time data and automated checks, VOCA helps organisations verify the identity and legitimacy of the individuals and entities they engage with. It flags discrepancies, detects suspicious behaviour, and highlights incomplete or false information, supporting informed decision-making at every stage.'

He added that through continuous monitoring of client behaviour and borrower risk profiles, VOCA enables early identification of potential threats, helping institutions close compliance gaps, avoid financial penalties, and stay ahead of emerging fraud risks.

Nischal Mewalall steps down as SABRIC CEO after five transformative years

IOL News

17-06-2025

  • Business
  • IOL News

Nischal Mewalall steps down as SABRIC CEO after five transformative years

Nischal Mewalall assumed the role of CEO amidst the chaos of the COVID-19 pandemic and nationwide lockdown and provided much-needed stability and strategic direction at a time when many organisations struggled to adapt. Image: Supplied

Nischal Mewalall has announced his decision to step down as CEO of the South African Banking Risk Information Centre (SABRIC) after five years at the helm. His departure comes as he looks to pursue personal aspirations, a move met with gratitude from the board, employees, and stakeholders for his indelible impact on the organisation.

Mewalall on Tuesday expressed his gratitude for the opportunity to lead SABRIC, saying it has been an incredible privilege. "I am proud of what we have achieved together, transforming the organisation, driving digital innovation, and fortifying South Africa's defences against fraud," he said. "I leave with confidence in SABRIC's continued success and extend my heartfelt thanks to the Board, our dedicated team, and our partners for their unwavering support."

SABRIC was formed by the four major banks to assist the banking and cash-in-transit industries in combating organised bank-related crime, acting as a trusted financial crime risk information centre that leverages strategic partnerships.

Assuming the role amidst the chaos of the COVID-19 pandemic and nationwide lockdown, Mewalall provided much-needed stability and strategic direction at a time when many organisations struggled to adapt. Under his stewardship, SABRIC not only navigated the challenges posed by the pandemic but also emerged with renewed strength and purpose, embracing a significant digital transformation that has redefined its approach to financial crime mitigation.

Beyond withstanding the pressures of the crisis, his leadership fostered a technology-driven organisation capable of combating evolving threats. His forward-thinking approach also fortified partnerships between the public and private sectors, collectively bolstering South Africa's defences against cyber and financial threats.

Bongi Kunene, SABRIC's board chairperson, said Mewalall has been instrumental in shaping SABRIC into a forward-looking, innovative organisation. "His leadership during turbulent times was nothing short of remarkable, and his legacy will continue to benefit the industry for years to come. We respect his decision to pursue new endeavours and wish him every success in his future pursuits," Kunene said.

The Board has initiated a process to identify a successor and will announce further details in due course. In the interim, SABRIC remains firmly committed to its mission of combating financial crime through collaboration, innovation, and cutting-edge technology.

BUSINESS REPORT

CEO caught on video admitting fraud - but it's a deepfake

The Star

10-06-2025

  • Business
  • The Star

CEO caught on video admitting fraud - but it's a deepfake

William Petherbridge

Artificial intelligence has now made it possible to wake up to a video of your CEO seemingly admitting to fraud, or to receive an urgent audio message from your CFO authorising a large, unexpected transaction, without any of it being real. Deepfakes aren't limited to criminal use cases targeting individuals or governments – they represent a sophisticated and escalating threat to corporations globally, including in South Africa.

Disinformation using deepfake technology

The use of deepfake technology has become one of the most powerful tools fuelling disinformation. The rise of AI and machine learning embedded in commercially available tools such as generative adversarial networks (GANs) has levelled the playing field and increased the sophistication of deepfake content. Cybercriminals, disgruntled insiders, competitors, and even state-sponsored groups can leverage deepfakes for devastating attacks, ranging from financial fraud and network compromise to severe reputational damage.

The South African reality: A threat amplified

The threat itself, however, is not fake; it is manifesting tangibly within South Africa. The South African Banking Risk Information Centre (SABRIC) has issued stark warnings about the rise in AI-driven fraud scams, explicitly including deepfakes and voice cloning used to impersonate bank officials or lure victims into fake investment schemes, sometimes even using fabricated endorsements from prominent local figures. With South Africa already identified by Interpol as a global cybercrime hotspot, with estimated annual losses in the billions of rands, the potential financial impact of sophisticated deepfake fraud targeting businesses is immense.

There are also implications for democracy as a whole. Accenture Africa recently highlighted how easily deepfakes could amplify misinformation and political unrest in a nation where false narratives can already spread rapidly online – a critical concern when it comes to elections.

Furthermore, the 'human firewall' – our employees – represents a significant area of vulnerability. Fortinet's 2024 Security Awareness and Training Global Research Report highlights that 46% of organisations now expect their employees to fall for more attacks in the future because bad actors are using AI. Phishing emails used to be easier to identify because they were poorly worded and contained multiple spelling errors, yet they still led to successful breaches for decades. Now they are drastically more difficult to identify, as AI-generated emails and deepfake media have reached levels of realism that leave almost no one immune.

Who targets companies using deepfakes?

Several types of malicious actors are likely to target companies using deepfake technology. Cybercriminals who have stolen samples of a victim's email, along with their address book, for example, may use GenAI to generate tailored content that matches the language, tone and topics in the victim's previous interactions to aid in spear phishing – convincing them to take action such as clicking on a malicious attachment. Other cybercriminals use deepfakes to impersonate customers, business partners, or company executives to initiate and authorise fraudulent transactions. According to Deloitte's Center for Financial Services, GenAI-enabled fraud losses are growing at 32% year-over-year in the United States and could reach $40 billion by 2027.
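As a rough back-of-the-envelope illustration (not a figure from the article, and assuming the 32% growth rate compounds annually from a 2023 baseline through 2027), the projection implies a starting point of roughly $13 billion in annual losses:

```python
# Minimal arithmetic sketch: back out the 2023 baseline that 32% year-over-year
# growth would require to reach the cited $40 billion in US GenAI-enabled fraud
# losses by 2027. The 2023 start year is an assumption for illustration only.
growth_rate = 1.32        # 32% year-over-year growth cited from Deloitte
projected_2027 = 40.0     # projected losses by 2027, in billions of US dollars
years = 2027 - 2023       # four annual compounding steps under this assumption

implied_2023_baseline = projected_2027 / growth_rate ** years
print(f"Implied 2023 baseline: ~${implied_2023_baseline:.1f} billion")  # ~ $13.2 billion
```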
Disgruntled current or former employees may also generate deepfakes to seek revenge or damage a company's reputation. By leveraging their inside knowledge, they can make the deepfakes appear especially credible. Another potential deepfake danger may come from business partners, competitors or unscrupulous market speculators looking to gain leverage in negotiations or to try to affect a company's stock price through bad publicity.

Combating the deepfake threat requires more than just technological solutions; it demands a comprehensive, multi-layered strategy encompassing technology, processes, and people.

  • Advanced threat detection: Organisations must invest in security solutions capable of detecting AI-manipulated media. AI itself plays a crucial role, powering tools that can analyse content for the subtle giveaways often present in deepfakes.
  • Robust authentication and processes: Implementing strong multi-factor authentication (MFA) remains paramount. Businesses should also review and strengthen processes around sensitive actions like financial transactions or data access requests, incorporating verification steps that cannot be easily spoofed by a deepfake voice or video call. A Zero Trust approach, verifying everything and assuming breach when in doubt, is essential.
  • Empowering the human firewall: Continuous education and awareness training are vital. Employees need to be equipped with the knowledge to recognise potential deepfake indicators and to understand the procedures for verifying communications, especially those involving sensitive instructions or financial implications.
  • Reputation management: Proactive reputation management and clear communication channels become even more critical. Being able to swiftly debunk a deepfake attack targeting the company or its leadership can mitigate significant damage.
  • Staying informed and advocating: Cybersecurity teams must stay abreast of evolving deepfake tactics. Collaboration and information sharing within industries, and engagement with bodies working on updating South Africa's cyber laws (such as aspects of POPIA) to specifically address deepfake crimes, are important.

Preparing for the inevitable

Deepfakes are not a future problem; they are a clear and present danger to South African businesses. They target the very accuracy of the information we rely on as consumers, employees and investors. The question is no longer if a South African organisation will be targeted by a deepfake attack, but how prepared it will be when it happens. Proactive investment in robust security measures, stringent processes, and comprehensive employee education is not just advisable – it's essential for survival in this new era of digital deception.

William Petherbridge, Systems Engineering Manager at Fortinet

CEO caught on video admitting fraud - but it's a deepfake

IOL News

09-06-2025

  • Business
  • IOL News

CEO caught on video admitting fraud - but it's a deepfake

The use of deepfake technology has become one of the most powerful tools fuelling disinformation, says the writer. Image: Supplied

Artificial intelligence has now made it possible to wake up to a video of your CEO seemingly admitting to fraud, or to receive an urgent audio message from your CFO authorising a large, unexpected transaction, without any of it being real. Deepfakes aren't limited to criminal use cases targeting individuals or governments – they represent a sophisticated and escalating threat to corporations globally, including in South Africa.

Disinformation using deepfake technology

The use of deepfake technology has become one of the most powerful tools fuelling disinformation. The rise of AI and machine learning embedded in commercially available tools such as generative adversarial networks (GANs) has levelled the playing field and increased the sophistication of deepfake content. Cybercriminals, disgruntled insiders, competitors, and even state-sponsored groups can leverage deepfakes for devastating attacks, ranging from financial fraud and network compromise to severe reputational damage.

The South African reality: A threat amplified

The threat itself, however, is not fake; it is manifesting tangibly within South Africa. The South African Banking Risk Information Centre (SABRIC) has issued stark warnings about the rise in AI-driven fraud scams, explicitly including deepfakes and voice cloning used to impersonate bank officials or lure victims into fake investment schemes, sometimes even using fabricated endorsements from prominent local figures. With South Africa already identified by Interpol as a global cybercrime hotspot, with estimated annual losses in the billions of rands, the potential financial impact of sophisticated deepfake fraud targeting businesses is immense.

There are also implications for democracy as a whole. Accenture Africa recently highlighted how easily deepfakes could amplify misinformation and political unrest in a nation where false narratives can already spread rapidly online – a critical concern when it comes to elections.

Furthermore, the 'human firewall' – our employees – represents a significant area of vulnerability. Fortinet's 2024 Security Awareness and Training Global Research Report highlights that 46% of organisations now expect their employees to fall for more attacks in the future because bad actors are using AI. Phishing emails used to be easier to identify because they were poorly worded and contained multiple spelling errors, yet they still led to successful breaches for decades. Now they are drastically more difficult to identify, as AI-generated emails and deepfake media have reached levels of realism that leave almost no one immune.

William Petherbridge, Systems Engineering Manager at Fortinet. Image: Supplied

Who targets companies using deepfakes?

Several types of malicious actors are likely to target companies using deepfake technology. Cybercriminals who have stolen samples of a victim's email, along with their address book, for example, may use GenAI to generate tailored content that matches the language, tone and topics in the victim's previous interactions to aid in spear phishing – convincing them to take action such as clicking on a malicious attachment. Other cybercriminals use deepfakes to impersonate customers, business partners, or company executives to initiate and authorise fraudulent transactions. According to Deloitte's Center for Financial Services, GenAI-enabled fraud losses are growing at 32% year-over-year in the United States and could reach $40 billion by 2027.

Disgruntled current or former employees may also generate deepfakes to seek revenge or damage a company's reputation. By leveraging their inside knowledge, they can make the deepfakes appear especially credible. Another potential deepfake danger may come from business partners, competitors or unscrupulous market speculators looking to gain leverage in negotiations or to try to affect a company's stock price through bad publicity.
