
Deepfakes pose growing threat to GCC as AI technology advances: Experts
Deepfakes - highly convincing digital forgeries created using artificial intelligence to manipulate audio, images and video of real people - have evolved rapidly in recent years, creating significant challenges for organizations and individuals alike, according to cybersecurity professionals speaking to Al Arabiya English.
Evolution of deepfake technology
'Deepfake technology has advanced at such a striking pace in recent years, largely due to the breakthroughs in generative [Artificial Intelligence (AI)] and machine learning,' David Higgins, senior director at the field technology office of CyberArk, told Al Arabiya English. 'Once seen as a tool for experimentation, it has now matured into a powerful means of producing audio, video, and image content that is almost impossible to distinguish from authentic material.'
The technology's rapid advancement has transformed what was once a novelty into a potential threat to businesses, government institutions, and individuals across the region.
'Today, deepfakes are no longer limited to simple face swaps or video alterations. They now encompass complex manipulations like lip-syncing, where AI can literally put words into someone's mouth, as well as full-body puppeteering,' said Higgins.
Rob Woods, director of fraud and identity at LexisNexis Risk Solutions, also told Al Arabiya English how the quality of such manipulations has improved dramatically in recent years.
'Deepfakes, such as video footage overlaying one face onto another for impersonation, used to show visible flaws like blurring around ears or nostrils and stuttering image quality,' Woods said. 'Advanced deepfakes now adapt realistically to lighting changes and use real-time voice manipulation. We have reached a point where even human experts may struggle to identify deepfakes by sight alone.'
Immediate threats to society
The concerns around deepfake technology are particularly relevant in the GCC region, where digital transformation initiatives are rapidly expanding, the experts said.
'The most pressing threat posed by deepfakes is their potential to create an upsurge in fraud while weakening societal trust and confidence in digital media – which is concerning given that digital platforms dominate communications and media around the world,' Higgins said.
The technology has made sophisticated tools accessible to malicious actors looking to manipulate public perception or conduct fraud, creating significant security concerns for businesses in the region.
'In Saudi Arabia, where AI adoption is advancing rapidly under Vision 2030, there is growing concern among businesses too. Over half of organizations surveyed in CyberArk's 2025 Identity Security Report cite data privacy and AI agent security as top challenges to safe deployment,' Higgins added.
Woods echoed these concerns, highlighting the democratization of deepfake technology as a particular challenge.
'The most immediate threat to society is that high-quality deepfake technology is widely available, enabling fraudsters and organized crime groups to enhance their scam tactics. Fraudsters can cheaply improve the sophistication of their attempts, making them look more legitimate and increasing the likelihood of success,' he said.
Rising concerns in Saudi Arabia
Recent reports suggest growing anxiety in Saudi Arabia specifically regarding deepfake threats. According to CyberArk's research cited by Higgins, the Kingdom is experiencing increased unease around manipulated AI systems.
'In recent months, concerns across the Middle East have intensified, particularly in countries like Saudi Arabia, where organizations report growing unease around the manipulation of AI agents and synthetic media,' Higgins said.
He cited specific data points highlighting this trend: 'According to CyberArk's 2025 Identity Security Landscape report, 52 percent of organizations surveyed in Saudi Arabia now consider misconfigured or manipulated AI behavior, internally or externally, a top security concern.'
Vulnerable sectors beyond politics
While political manipulation often dominates discussions about deepfakes, experts emphasized that virtually every sector faces potential threats from this technology.
'There are very few sectors that can safely say they are protected from potential deepfake manipulation. Industries such as finance, healthcare, and corporate enterprises are all at risk of being targeted,' Higgins warned.
He detailed how different sectors face unique vulnerabilities: 'When looking at the financial sector, deepfakes are being used to impersonate executives, leading to fraudulent transactions or insider trading. Healthcare institutions may face risks if deepfakes are used to manipulate medical records or impersonate medical professionals, potentially compromising patient care.'
The financial services sector appears particularly vulnerable in the GCC region, according to Woods.
'Financial services, including banking, digital wallets and lending, rely on verifying customer identities, making them prime targets for fraudsters,' he said. 'With diverse financial economies such as in the Middle East encouraging competition among digital banks and super apps, customer acquisition has become critical for balancing customer experience and risk management.'
Detection capabilities: A technological race
As deepfake technology continues to advance, detection methods are struggling to keep pace, creating a technological race between security systems and those seeking to exploit them.
'Detection technologies designed to combat deepfakes are advancing, but they are in a constant race against a threat that is always evolving,' Higgins said. 'As generative AI tools become more accessible and powerful, deepfakes are growing in realism and scale.'
He highlighted the limitations of current detection systems: 'While there are detection systems capable of detecting subtle inconsistencies in voice patterns, facial movements, and metadata, malicious actors continue to find ways to outpace them.'
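The kind of inconsistency-based detection Higgins describes can be illustrated with a deliberately simplified sketch. Real detection systems rely on trained models analyzing voice, facial movement, and metadata together; the toy heuristic below, with invented landmark data and an invented threshold, only shows the underlying idea of flagging erratic frame-to-frame motion of the sort Woods calls "stuttering image quality."

```python
# Toy sketch of inconsistency-based detection. Real detectors use trained
# models; the data, threshold, and heuristic here are invented for illustration.

def frame_jitter(landmark_positions):
    """Mean absolute frame-to-frame movement of one tracked facial landmark."""
    diffs = [abs(b - a) for a, b in zip(landmark_positions, landmark_positions[1:])]
    return sum(diffs) / len(diffs)

def looks_suspicious(landmark_positions, threshold=3.0):
    # Genuine footage tends to move smoothly between frames; crude synthesis
    # can produce erratic jumps that a consistency check picks up.
    return frame_jitter(landmark_positions) > threshold

smooth = [100, 101, 102, 102, 103]  # steady, natural-looking motion
jumpy = [100, 108, 99, 110, 101]    # erratic jumps between frames

print(looks_suspicious(smooth))  # False
print(looks_suspicious(jumpy))   # True
```

As the experts note, modern deepfakes increasingly evade such simple signals, which is why production systems combine many cues and are retrained continuously.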
Woods added: 'Organizations are just beginning to tackle the challenge of deepfakes and it is a race they must win. Countering AI-generated fraud, including deepfakes, demands AI-driven solutions capable of distinguishing real humans from deepfakes.'
Social media platforms' responsibility
The role of social media companies in addressing deepfake content remains a contentious issue, with experts calling for more robust measures to identify and limit the spread of malicious synthetic media.
'Social media platforms carry a critical responsibility in curbing the spread of malicious deepfakes. As the primary channels through which billions consume information globally, they are also the frontline where manipulated content is increasingly gaining traction,' Higgins said.
He acknowledged some progress while highlighting ongoing challenges: 'Some tech giants, including Meta, Google, and Microsoft, have begun introducing measures to label AI-generated content clearly – which are steps in the right direction. However, inconsistencies remain.'
Higgins pointed to specific platforms that may be exacerbating the problem: 'X (formerly Twitter) dismantled many of its verification safeguards in 2023, a move that has made public figures more vulnerable to impersonation and misinformation. This highlights a deeper issue: disinformation and sensationalism have, for some platforms, become embedded in their engagement-driven business models.'
Woods argued that social media platforms are not responsible for the rise of deepfakes or malicious AI, irrespective of fraudsters' methods. However, these platforms can play a part in the solution, he said, adding that collaboration through data-sharing initiatives between financial services, telecommunications and social media companies can significantly improve fraud prevention efforts.
Public readiness and education
A particularly concerning aspect of the deepfake threat is the general public's limited ability to identify manipulated content, according to the experts.
'As the use of deepfakes spreads, the average internet user remains alarmingly unprepared to identify manipulated content,' Higgins said. 'Where synthetic media is becoming more and more realistic, simply trusting what we see or hear online is no longer an option.'
He advocated for a fundamental shift in how people approach digital content: 'Adopting a zero-trust mindset is key, and people must become accustomed to treating digital content with the same caution applied to suspicious emails or phishing scams.'
Woods agreed with this assessment, noting the difficulty even professionals face in identifying sophisticated deepfakes.
'Identifying deepfakes with the naked eye is challenging, even for trained professionals. People should be aware that deepfake technology is advancing quickly and not underestimate the tactics and tools available to fraudsters,' he said.
Practical advice for protection
Both experts offered practical guidance for individuals to protect themselves against deepfake-related scams, which often target emotional vulnerabilities.
'One common scenario involves fraudsters using deepfakes to imitate a distressed relative, claiming to need urgent financial help due to a lost phone or another emergency,' Woods explained.
He recommended several protective steps: 'Approach unexpected and urgent requests for money or personal information online with caution, even if they appear to come from a loved one or trusted source. Pause and consider whether it could be a scam. Verify the identity of the person by reaching out to them through a different method than the one they used to contact you.'
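Woods' advice to verify identity through a different channel can be made concrete with a small sketch: a family or team agrees a shared secret in person, and any urgent request must be confirmed with a code derived from that secret over a second channel. The scenario, names, and secret below are invented for illustration; the point is that a deepfaked caller cannot produce the right code without the out-of-band secret.

```python
# Illustrative sketch of out-of-band verification. The shared secret would be
# agreed in person, never over the channel a fraudster might control.
import hashlib
import hmac

SHARED_SECRET = b"agreed-in-person-not-over-chat"  # invented example value

def challenge_code(request_id: str) -> str:
    """Short code the requester must echo back over a *different* channel."""
    return hmac.new(SHARED_SECRET, request_id.encode(), hashlib.sha256).hexdigest()[:8]

def verify(request_id: str, code: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(challenge_code(request_id), code)

code = challenge_code("urgent-transfer-request")
print(verify("urgent-transfer-request", code))        # True
print(verify("urgent-transfer-request", "deadbeef"))  # False
```

The mechanism matters less than the habit: confirmation must travel over a channel the supposed requester did not initiate, exactly as Woods recommends.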
Higgins also emphasized the importance of education in combating the threat: 'Citizens must be encouraged to verify sources, limit public sharing of personal media, and critically assess the credibility of online content. Platforms, regulators, and educational institutions all have a role to play in equipping users with the tools and knowledge to navigate a digital landscape where not everything is as it seems.'
Regulatory frameworks
The experts agreed that regulatory frameworks addressing deepfake technology remain underdeveloped globally, despite the growing threat.
'The legal frameworks around deepfakes vary greatly across geographies and jurisdictions, sometimes creating a grey area between unethical manipulation and criminal activity,' Higgins pointed out. 'In Saudi Arabia, where laws around cybercrime are among the strictest in the Middle East, impersonation, defamation, and fraud through deepfakes may fall under existing regulations.'
Woods was more direct in his assessment of the current regulatory landscape: 'No global regulator has yet implemented a legal deterrent or regulatory framework to address the threat of deepfakes.'
Legitimate uses, real risks
Despite the serious nature of deepfake threats, the experts cautioned against complete alarmism, noting legitimate applications for the technology alongside its potential for harm.
'Not all deepfakes are bad and they do have a place in society, for example providing entertainment, gaming and augmented reality,' Woods said. 'However, as with any technological advancement, some people exploit these tools for malicious purposes.'
Higgins warned against dismissing the threat as overblown: 'Dismissing deepfakes as exaggerated or irrelevant underestimates one of the most disruptive threats faced today. While deepfake content may once have been a novelty, it has rapidly evolved into a tool capable of serious harm—targeting not just individuals or brands, but the very concept of truth.'
