
Deepfakes pose growing threat to GCC as AI technology advances: Experts
Deepfakes - highly convincing digital forgeries created using artificial intelligence to manipulate audio, images and video of real people - have evolved rapidly in recent years, creating significant challenges for organizations and individuals alike, according to cybersecurity professionals speaking to Al Arabiya English.
Evolution of deepfake technology
'Deepfake technology has advanced at such a striking pace in recent years, largely due to the breakthroughs in generative [Artificial Intelligence (AI)] and machine learning,' David Higgins, senior director at the field technology office of CyberArk, told Al Arabiya English. 'Once seen as a tool for experimentation, it has now matured into a powerful means of producing audio, video, and image content that is almost impossible to distinguish from authentic material.'
The technology's rapid advancement has transformed what was once a novelty into a potential threat to businesses, government institutions, and individuals across the region.
'Today, deepfakes are no longer limited to simple face swaps or video alterations. They now encompass complex manipulations like lip-syncing, where AI can literally put words into someone's mouth, as well as full-body puppeteering,' said Higgins.
Rob Woods, director of fraud and identity at LexisNexis Risk Solutions, also told Al Arabiya English how the quality of such manipulations has improved dramatically in recent years.
'Deepfakes, such as video footage overlaying one face onto another for impersonation, used to show visible flaws like blurring around ears or nostrils and stuttering image quality,' Woods said. 'Advanced deepfakes now adapt realistically to lighting changes and use real-time voice manipulation. We have reached a point where even human experts may struggle to identify deepfakes by sight alone.'
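The blurring artifacts Woods describes were once detectable with simple sharpness heuristics. As an illustration only (not a production detector, and not a method either expert endorses), one classic check measures the variance of a Laplacian filter over a face region: blurred, resynthesized patches show much lower variance than sharp, genuine ones.

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over a 2-D grayscale image
    (list of lists of ints). Low variance indicates blur, one of the
    visible flaws early deepfakes exhibited around ears and nostrils."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp checkerboard patch has high Laplacian variance...
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
# ...while a perfectly flat (maximally blurred) patch has zero.
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

As Woods notes, modern deepfakes have largely closed this gap, which is why such single-signal heuristics no longer suffice on their own.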
Immediate threats to society
The concerns around deepfake technology are particularly relevant in the GCC region, where digital transformation initiatives are rapidly expanding, the experts said.
'The most pressing threat posed by deepfakes is their potential to create an upsurge in fraud while weakening societal trust and confidence in digital media – which is concerning given that digital platforms dominate communications and media around the world,' Higgins said.
The technology has made sophisticated tools accessible to malicious actors looking to manipulate public perception or conduct fraud, creating significant security concerns for businesses in the region.
'In Saudi Arabia, where AI adoption is advancing rapidly under Vision 2030, there is growing concern among businesses too. Over half of organizations surveyed in CyberArk's 2025 Identity Security Report cite data privacy and AI agent security as top challenges to safe deployment,' Higgins added.
Woods echoed these concerns, highlighting the democratization of deepfake technology as a particular challenge.
'The most immediate threat to society is that high-quality deepfake technology is widely available, enabling fraudsters and organized crime groups to enhance their scam tactics. Fraudsters can cheaply improve the sophistication of their attempts, making them look more legitimate and increasing the likelihood of success,' he said.
Rising concerns in Saudi Arabia
Recent reports suggest growing anxiety in Saudi Arabia specifically regarding deepfake threats. According to CyberArk's research cited by Higgins, the Kingdom is experiencing increased unease around manipulated AI systems.
'In recent months, concerns across the Middle East have intensified, particularly in countries like Saudi Arabia, where organizations report growing unease around the manipulation of AI agents and synthetic media,' Higgins said.
He cited specific data points highlighting this trend: 'According to CyberArk's 2025 Identity Security Landscape report, 52 percent of organizations surveyed in Saudi Arabia now consider misconfigured or manipulated AI behavior, internally or externally, a top security concern.'
Vulnerable sectors beyond politics
While political manipulation often dominates discussions about deepfakes, experts emphasized that virtually every sector faces potential threats from this technology.
'There are very few sectors that can safely say they are protected from potential deepfake manipulation. Industries such as finance, healthcare, and corporate enterprises are all at risk of being targeted,' Higgins warned.
He detailed how different sectors face unique vulnerabilities: 'When looking at the financial sector, deepfakes are being used to impersonate executives, leading to fraudulent transactions or insider trading. Healthcare institutions may face risks if deepfakes are used to manipulate medical records or impersonate medical professionals, potentially compromising patient care.'
The financial services sector appears particularly vulnerable in the GCC region, according to Woods.
'Financial services, including banking, digital wallets and lending, rely on verifying customer identities, making them prime targets for fraudsters,' he said. 'With diverse financial economies such as in the Middle East encouraging competition among digital banks and super apps, customer acquisition has become critical for balancing customer experience and risk management.'
Detection capabilities: A technological race
As deepfake technology continues to advance, detection methods are struggling to keep pace, creating a technological race between security systems and those seeking to exploit them.
'Detection technologies designed to combat deepfakes are advancing, but they are in a constant race against a threat that is always evolving,' Higgins said. 'As generative AI tools become more accessible and powerful, deepfakes are growing in realism and scale.'
He highlighted the limitations of current detection systems: 'While there are detection systems capable of detecting subtle inconsistencies in voice patterns, facial movements, and metadata, malicious actors continue to find ways to outpace them.'
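Detection systems of the kind Higgins describes typically fuse several weak signals (voice patterns, facial movements, metadata) rather than rely on any single cue. A minimal sketch of weighted score fusion is below; the signal names, weights, and threshold are hypothetical illustrations, not the API of any real detector.

```python
def fuse_scores(signals, weights=None, threshold=0.5):
    """Combine per-signal suspicion scores (each 0.0-1.0) into one
    verdict via a weighted average. All names here are illustrative."""
    if weights is None:
        weights = {name: 1.0 for name in signals}
    total = sum(weights[name] for name in signals)
    score = sum(signals[name] * weights[name] for name in signals) / total
    return score, score >= threshold

signals = {"voice_inconsistency": 0.8,
           "facial_motion": 0.6,
           "metadata_anomaly": 0.2}
score, is_suspect = fuse_scores(signals)
print(round(score, 2), is_suspect)  # 0.53 True
```

The weakness Higgins points to follows directly from this design: an attacker who suppresses enough individual signals drags the fused score below any fixed threshold.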
Woods added: 'Organizations are just beginning to tackle the challenge of deepfakes and it is a race they must win. Countering AI-generated fraud, including deepfakes, demands AI-driven solutions capable of distinguishing real humans from deepfakes.'
Social media platforms' responsibility
The role of social media companies in addressing deepfake content remains a contentious issue, with experts calling for more robust measures to identify and limit the spread of malicious synthetic media.
'Social media platforms carry a critical responsibility in curbing the spread of malicious deepfakes. As the primary channels through which billions consume information globally, they are also the frontline where manipulated content is increasingly gaining traction,' Higgins said.
He acknowledged some progress while highlighting ongoing challenges: 'Some tech giants, including Meta, Google, and Microsoft, have begun introducing measures to label AI-generated content clearly – which are steps in the right direction. However, inconsistencies remain.'
Higgins pointed to specific platforms that may be exacerbating the problem: 'X (formerly Twitter) dismantled many of its verification safeguards in 2023, a move that has made public figures more vulnerable to impersonation and misinformation. This highlights a deeper issue: disinformation and sensationalism have, for some platforms, become embedded in their engagement-driven business models.'
Woods believes social media platforms are not responsible for the rise of deepfakes or malicious AI, irrespective of fraudsters' methods. However, these platforms can play a part in the solution, he said, adding that collaboration through data-sharing initiatives between financial services, telecommunications and social media companies can significantly improve fraud prevention efforts.
Public readiness and education
A particularly concerning aspect of the deepfake threat is the general public's limited ability to identify manipulated content, according to the experts.
'As the use of deepfakes spreads, the average internet user remains alarmingly unprepared to identify manipulated content,' Higgins said. 'Where synthetic media is becoming more and more realistic, simply trusting what we see or hear online is no longer an option.'
He advocated for a fundamental shift in how people approach digital content: 'Adopting a zero-trust mindset is key, and people must become accustomed to treating digital content with the same caution applied to suspicious emails or phishing scams.'
Woods agreed with this assessment, noting the difficulty even professionals face in identifying sophisticated deepfakes.
'Identifying deepfakes with the naked eye is challenging, even for trained professionals. People should be aware that deepfake technology is advancing quickly and not underestimate the tactics and tools available to fraudsters,' he said.
Practical advice for protection
Both experts offered practical guidance for individuals to protect themselves against deepfake-related scams, which often target emotional vulnerabilities.
'One common scenario involves fraudsters using deepfakes to imitate a distressed relative, claiming to need urgent financial help due to a lost phone or another emergency,' Woods explained.
He recommended several protective steps: 'Approach unexpected and urgent requests for money or personal information online with caution, even if they appear to come from a loved one or trusted source. Pause and consider whether it could be a scam. Verify the identity of the person by reaching out to them through a different method than the one they used to contact you.'
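Woods's advice amounts to a simple screening rule: unexpected, urgent requests for money or personal information should trigger out-of-band verification. The sketch below merely encodes that checklist; the field names are hypothetical and chosen only to mirror his wording.

```python
def needs_out_of_band_check(request):
    """Flag requests matching the scam pattern Woods describes:
    unexpected and urgent asks for money or personal information.
    A flagged request should be verified by contacting the person
    through a different channel than the one they used."""
    return (request["unexpected"]
            and request["urgent"]
            and request["asks_for"] in {"money", "personal_info"})

req = {"unexpected": True, "urgent": True, "asks_for": "money"}
print(needs_out_of_band_check(req))  # True
```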
Higgins also emphasized the importance of education in combating the threat: 'Citizens must be encouraged to verify sources, limit public sharing of personal media, and critically assess the credibility of online content. Platforms, regulators, and educational institutions all have a role to play in equipping users with the tools and knowledge to navigate a digital landscape where not everything is as it seems.'
Regulatory frameworks
The experts agreed that regulatory frameworks addressing deepfake technology remain underdeveloped globally, despite the growing threat.
'The legal frameworks around deepfakes vary greatly across geographies and jurisdictions, sometimes creating a grey area between unethical manipulation and criminal activity,' Higgins pointed out. 'In Saudi Arabia, where laws around cybercrime are among the strictest in the Middle East, impersonation, defamation, and fraud through deepfakes may fall under existing regulations.'
Woods was more direct in his assessment of the current regulatory landscape: 'No global regulator has yet implemented a legal deterrent or regulatory framework to address the threat of deepfakes.'
Despite the serious nature of deepfake threats, the experts cautioned against complete alarmism, noting legitimate applications for the technology alongside its potential for harm.
'Not all deepfakes are bad and they do have a place in society, for example providing entertainment, gaming and augmented reality,' Woods said. 'However, as with any technological advancement, some people exploit these tools for malicious purposes.'
Higgins warned against dismissing the threat as overblown: 'Dismissing deepfakes as exaggerated or irrelevant underestimates one of the most disruptive threats faced today. While deepfake content may once have been a novelty, it has rapidly evolved into a tool capable of serious harm—targeting not just individuals or brands, but the very concept of truth.'