
Can AI bots steal your crypto? The rise of digital thieves
At the heart of today's AI-driven cybercrime are AI bots — self-learning software programs designed to process vast amounts of data, make independent decisions, and execute complex tasks without human intervention. While these bots have been a game-changer in industries like finance, healthcare and customer service, they have also become a weapon for cybercriminals, particularly in the world of cryptocurrency.
Unlike traditional hacking methods, which require manual effort and technical expertise, AI bots can fully automate attacks, adapt to new cryptocurrency security measures, and even refine their tactics over time. This makes them far more effective than human hackers, who are limited by time, resources and error-prone processes.

Why are AI bots so dangerous?
The biggest threat posed by AI-driven cybercrime is scale. A single hacker attempting to breach a crypto exchange or trick users into handing over their private keys can only do so much. AI bots, however, can launch thousands of attacks simultaneously, refining their techniques as they go.

Speed: AI bots can scan millions of blockchain transactions, smart contracts and websites within minutes, identifying weaknesses in wallets (leading to crypto wallet hacks), decentralized finance (DeFi) protocols and exchanges.

Scalability: A human scammer may send phishing emails to a few hundred people. An AI bot can send personalized, perfectly crafted phishing emails to millions in the same time frame.

Adaptability: Machine learning allows these bots to improve with every failed attack, making them harder to detect and block.
This ability to automate, adapt and attack at scale has led to a surge in AI-driven crypto fraud, making crypto fraud prevention more critical than ever.
In October 2024, the X account of Andy Ayrey, developer of the AI bot Truth Terminal, was compromised by hackers. The attackers used Ayrey's account to promote a fraudulent memecoin named Infinite Backrooms (IB). The malicious campaign led to a rapid surge in IB's market capitalization, reaching $25 million. Within 45 minutes, the perpetrators liquidated their holdings, securing over $600,000.
AI-powered bots aren't just automating crypto scams — they're becoming smarter, more targeted and increasingly hard to spot.
Here are some of the most dangerous types of AI-driven scams currently being used to steal cryptocurrency assets:

1. AI-powered phishing bots
Phishing attacks are nothing new in crypto, but AI has turned them into a far bigger threat. Instead of sloppy emails full of mistakes, today's AI bots create personalized messages that look exactly like real communications from platforms such as Coinbase or MetaMask. They gather personal information from leaked databases, social media and even blockchain records, making their scams extremely convincing.
For instance, in early 2024, an AI-driven phishing attack targeted Coinbase users by sending emails about fake cryptocurrency security alerts, ultimately tricking users out of nearly $65 million.
Also, after OpenAI launched GPT-4, scammers created a fake OpenAI token airdrop site to exploit the hype. They sent emails and X posts luring users to 'claim' a bogus token — the phishing page closely mirrored OpenAI's real site. Victims who took the bait and connected their wallets had all their crypto assets drained automatically.
Unlike old-school phishing, these AI-enhanced scams are polished and targeted, often free of the typos and clumsy wording that used to give phishing attempts away. Some even deploy AI chatbots posing as customer support representatives for exchanges or wallets, tricking users into divulging private keys or two-factor authentication (2FA) codes under the guise of 'verification.'
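Since the wording of AI-generated phishing is no longer a reliable tell, checking the link itself matters more than ever. A minimal sketch of a lookalike-domain check is below; the trusted-domain list and the edit-distance threshold are illustrative assumptions, not a vetted blocklist:

```python
from urllib.parse import urlparse

# Domains the user actually trusts (illustrative examples only)
TRUSTED = ["coinbase.com", "metamask.io", "opensea.io"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def classify_link(url: str) -> str:
    """Return 'trusted', 'lookalike of <domain>' (1-2 edits away), or 'unknown'."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    if host in TRUSTED:
        return "trusted"
    for good in TRUSTED:
        if 0 < edit_distance(host, good) <= 2:
            return f"lookalike of {good}"
    return "unknown"

print(classify_link("https://coinbase.com/login"))   # trusted
print(classify_link("https://c0inbase.com/login"))   # lookalike of coinbase.com
```

A one-character swap like "c0inbase.com" sails past a casual glance but sits at edit distance 1 from the real domain, which is exactly the kind of mechanical check humans skip and software should not.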
In 2022, some malware specifically targeted browser-based wallets like MetaMask: a strain called Mars Stealer could sniff out private keys for over 40 different wallet browser extensions and 2FA apps, draining any funds it found. Such malware often spreads via phishing links, fake software downloads or pirated crypto tools.
Once inside your system, it might monitor your clipboard (to swap in the attacker's address when you copy-paste a wallet address), log your keystrokes, or export your seed phrase files — all without obvious signs.

2. AI-powered exploit-scanning bots
Smart contract vulnerabilities are a hacker's goldmine, and AI bots are taking advantage faster than ever. These bots continuously scan platforms like Ethereum or BNB Smart Chain, hunting for flaws in newly deployed DeFi projects. As soon as they detect an issue, they exploit it automatically, often within minutes.
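These scanners do not need to be clever to be fast. Even a crude pattern matcher over contract source can triage thousands of deployments per minute; a toy sketch is below. The patterns and the sample contract are illustrative only — real tools use symbolic execution and bytecode analysis, not regular expressions:

```python
import re

# Crude heuristics for well-known Solidity footguns (illustrative, high false-positive rate)
PATTERNS = {
    "tx.origin auth": r"tx\.origin",
    "delegatecall": r"\.delegatecall\s*\(",
    "external call before state update": r"\.call\{value:.*\}\s*\(",
}

def scan(source: str) -> list[str]:
    """Return the names of every suspicious pattern found in the contract source."""
    return [name for name, pat in PATTERNS.items() if re.search(pat, source)]

sample = """
contract Vault {
    mapping(address => uint) balances;
    function withdraw() external {
        (bool ok, ) = msg.sender.call{value: balances[msg.sender]}("");
        require(ok);
        balances[msg.sender] = 0;  // state updated AFTER the call: reentrancy risk
    }
}
"""
print(scan(sample))  # flags the external-call-before-state-update pattern
```

The sample's withdraw function pays out before zeroing the balance — the shape of a classic reentrancy bug — and even this naive scanner surfaces it in microseconds, which is the point: at scale, cheap heuristics plus automation beat careful manual review on speed.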
Researchers have demonstrated that AI chatbots, such as those powered by GPT-3, can analyze smart contract code to identify exploitable weaknesses. For instance, Stephen Tong, co-founder of Zellic, showcased an AI chatbot detecting a vulnerability in a smart contract's 'withdraw' function, similar to the flaw exploited in the Fei Protocol attack, which resulted in an $80-million loss.

3. AI-enhanced brute-force attacks
Brute-force attacks used to take forever, but AI bots have made them dangerously efficient. By analyzing previous password breaches, these bots quickly identify patterns to crack passwords and seed phrases in record time. A 2024 study on desktop cryptocurrency wallets, including Sparrow, Etherwall and Bither, found that weak passwords drastically lower resistance to brute-force attacks, emphasizing that strong, complex passwords are crucial to safeguarding digital assets.

4. Deepfake impersonation bots
Imagine watching a video of a trusted crypto influencer or CEO asking you to invest — but it's entirely fake. That's the reality of deepfake scams powered by AI. These bots create ultra-realistic videos and voice recordings, tricking even savvy crypto holders into transferring funds.
5. Social media botnets
On platforms like X and Telegram, swarms of AI bots push crypto scams at scale. Botnets such as 'Fox8' used ChatGPT to generate hundreds of persuasive posts hyping scam tokens and replying to users in real time.
In one case, scammers abused the names of Elon Musk and ChatGPT to promote a fake crypto giveaway — complete with a deepfaked video of Musk — duping people into sending funds to scammers.
In 2023, Sophos researchers found crypto romance scammers using ChatGPT to chat with multiple victims at once, making their affectionate messages more convincing and scalable.
Similarly, Meta reported a sharp uptick in malware and phishing links disguised as ChatGPT or AI tools, often tied to crypto fraud schemes. And in the realm of romance scams, AI is boosting so-called pig butchering operations — long-con scams where fraudsters cultivate relationships and then lure victims into fake crypto investments. A striking case occurred in Hong Kong in 2024: Police busted a criminal ring that defrauded men across Asia of $46 million via an AI-assisted romance scam.
AI is being invoked in the arena of cryptocurrency trading bots — often as a buzzword to con investors and occasionally as a tool for technical exploits.
A notable example is YieldTrust.ai, which in 2023 marketed an AI bot supposedly yielding 2.2% returns per day — an astronomical, implausible profit. Regulators from several states investigated and found no evidence the 'AI bot' even existed; it appeared to be a classic Ponzi, using AI as a tech buzzword to suck in victims. YieldTrust.ai was ultimately shut down by authorities, but not before investors were duped by the slick marketing.
Even when an automated trading bot is real, it's often not the money-printing machine scammers claim. For instance, blockchain analysis firm Arkham Intelligence highlighted a case where a so-called arbitrage trading bot (likely touted as AI-driven) executed an incredibly complex series of trades, including a $200-million flash loan — and ended up netting a measly $3.24 in profit.
In fact, many 'AI trading' scams will take your deposit and, at best, run it through some random trades (or not trade at all), then make excuses when you try to withdraw. Some shady operators also use social media AI bots to fabricate a track record (e.g., fake testimonials or X bots that constantly post 'winning trades') to create an illusion of success. It's all part of the ruse.
On the more technical side, criminals do use automated bots (not necessarily AI, but sometimes labeled as such) to exploit the crypto markets and infrastructure. Front-running bots in DeFi, for example, automatically insert themselves into pending transactions to steal a bit of value (a sandwich attack), and flash loan bots execute lightning-fast trades to exploit price discrepancies or vulnerable smart contracts. These require coding skills and aren't typically marketed to victims; instead, they're direct theft tools used by hackers.
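The economics of a sandwich attack can be sketched with a toy constant-product AMM. All numbers below are invented, and the model omits swap fees and gas, both of which shrink the attacker's real margin:

```python
def swap(pool: dict, token_in: str, amount_in: float) -> float:
    """Constant-product swap (x*y=k) with no fee; returns the other token's output."""
    token_out = "usdc" if token_in == "eth" else "eth"
    k = pool["eth"] * pool["usdc"]
    pool[token_in] += amount_in
    out = pool[token_out] - k / pool[token_in]
    pool[token_out] -= out
    return out

pool = {"eth": 1_000.0, "usdc": 2_000_000.0}   # spot price: 2,000 USDC/ETH

# 1. Attacker's bot front-runs the victim's pending 100k USDC buy
attacker_eth = swap(pool, "usdc", 100_000)
# 2. Victim's buy executes at the now-worse price
victim_eth = swap(pool, "usdc", 100_000)
# 3. Attacker back-runs, selling into the victim's price impact
attacker_usdc = swap(pool, "eth", attacker_eth)

print(f"attacker profit: {attacker_usdc - 100_000:,.0f} USDC")
print(f"victim received: {victim_eth:.2f} ETH instead of ~47.62 unsandwiched")
```

In this toy setup the attacker clears roughly 9,500 USDC while the victim receives about 43.29 ETH instead of the ~47.62 ETH an unsandwiched trade would have returned — the "bit of value" stolen is simply the victim's worsened execution price.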
AI could enhance these by optimizing strategies faster than a human. However, as mentioned, even highly sophisticated bots don't guarantee big gains — the markets are competitive and unpredictable, something even the fanciest AI can't reliably foresee.
Meanwhile, the risk to victims is real: If a trading algorithm malfunctions or is maliciously coded, it can wipe out your funds in seconds. There have been cases of rogue bots on exchanges triggering flash crashes or draining liquidity pools, causing users to incur huge slippage losses.
AI is teaching cybercriminals how to hack crypto platforms, enabling a wave of less-skilled attackers to launch credible attacks. This helps explain why crypto phishing and malware campaigns have scaled up so dramatically — AI tools let bad actors automate their scams and continuously refine them based on what works.
AI is also supercharging malware threats and hacking tactics aimed at crypto users. One concern is AI-generated malware, malicious programs that use AI to adapt and evade detection.
In 2023, researchers demonstrated a proof-of-concept called BlackMamba, a polymorphic keylogger that uses an AI language model (like the tech behind ChatGPT) to rewrite its code with every execution. This means each time BlackMamba runs, it produces a new variant of itself in memory, helping it slip past antivirus and endpoint security tools.
In tests, this AI-crafted malware went undetected by an industry-leading endpoint detection and response system. Once active, it could stealthily capture everything the user types — including crypto exchange passwords or wallet seed phrases — and send that data to attackers.
While BlackMamba was just a lab demo, it highlights a real threat: Criminals can harness AI to create shape-shifting malware that targets cryptocurrency accounts and is much harder to catch than traditional viruses.
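The evasion principle is easy to illustrate with a completely benign script: if a program's bytes change on every run while its behavior stays identical, any defense that matches file hashes or byte signatures sees a brand-new artifact each time. The snippet below is a harmless stand-in for the AI-driven code rewriting described above, using a random comment where BlackMamba used a language model:

```python
import hashlib
import secrets

BEHAVIOR = "print('hello')"  # the unchanging payload (benign here)

def make_variant() -> str:
    """Return functionally identical source with random junk prepended — a benign
    stand-in for the per-execution rewriting an AI model performs."""
    return f"# junk-{secrets.token_hex(8)}\n{BEHAVIOR}\n"

a, b = make_variant(), make_variant()
print(a.splitlines()[1] == b.splitlines()[1])        # True: identical behavior
print(hashlib.sha256(a.encode()).hexdigest()[:12])
print(hashlib.sha256(b.encode()).hexdigest()[:12])   # a different hash every run
```

This is why defenders increasingly rely on behavioral detection (what the program does: keylogging, exfiltration, wallet-file access) rather than signatures (what the program looks like).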
Even without exotic AI malware, threat actors abuse the popularity of AI to spread classic trojans. Scammers commonly set up fake 'ChatGPT' or AI-related apps that contain malware, knowing users might drop their guard due to the AI branding. For instance, security analysts observed fraudulent websites impersonating the ChatGPT site with a 'Download for Windows' button; if clicked, it silently installs a crypto-stealing Trojan on the victim's machine.
Beyond the malware itself, AI is lowering the skill barrier for would-be hackers. Previously, a criminal needed some coding know-how to craft phishing pages or viruses. Now, underground 'AI-as-a-service' tools do much of the work.
Illicit AI chatbots like WormGPT and FraudGPT have appeared on dark web forums, offering to generate phishing emails, malware code and hacking tips on demand. For a fee, even non-technical criminals can use these AI bots to churn out convincing scam sites, create new malware variants, and scan for software vulnerabilities.
AI-driven threats are becoming more advanced, making strong security measures essential to protect digital assets from automated scams and hacks.
Below are the most effective ways to protect crypto from hackers and defend against AI-powered phishing, deepfake scams and exploit bots:

Use a hardware wallet: AI-driven malware and phishing attacks primarily target online (hot) wallets. By using hardware wallets — like Ledger or Trezor — you keep private keys completely offline, making them virtually impossible for hackers or malicious AI bots to access remotely. For instance, during the 2022 FTX collapse, those using hardware wallets avoided the massive losses suffered by users with funds stored on exchanges.

Enable multifactor authentication (MFA) and strong passwords: AI bots can crack weak passwords by leveraging machine learning models trained on leaked data breaches to predict and exploit vulnerable credentials. To counter this, always enable MFA via authenticator apps like Google Authenticator or Authy rather than SMS-based codes — hackers have been known to exploit SIM swap vulnerabilities, making SMS verification less secure.

Beware of AI-powered phishing scams: AI-generated phishing emails, messages and fake support requests have become nearly indistinguishable from real ones. Avoid clicking on links in emails or direct messages, always verify website URLs manually, and never share private keys or seed phrases, regardless of how convincing the request may seem.

Verify identities carefully to avoid deepfake scams: AI-powered deepfake videos and voice recordings can convincingly impersonate crypto influencers, executives or even people you personally know. If someone asks for funds or promotes an urgent investment opportunity via video or audio, verify their identity through multiple channels before taking action.

Stay informed about the latest blockchain security threats: Regularly following trusted blockchain security sources such as CertiK, Chainalysis or SlowMist will keep you informed about the latest AI-powered threats and the tools available to protect yourself.
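The strong-password advice above can be put in numbers with a back-of-the-envelope keyspace calculation. The guess rate below is a ballpark assumption for offline GPU cracking, not a measurement of any real attacker:

```python
GUESSES_PER_SEC = 1e10  # assumed offline cracking rate (ballpark)

def crack_years(charset_size: int, length: int) -> float:
    """Worst-case years to exhaust the full keyspace at the assumed guess rate."""
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SEC / (3600 * 24 * 365)

# 8 lowercase letters vs. 16 characters drawn from ~94 printable ASCII symbols
print(f"8 lowercase:   {crack_years(26, 8):.6f} years")
print(f"16 full-ASCII: {crack_years(94, 16):.2e} years")
```

Under these assumptions an 8-character lowercase password falls in seconds, while a 16-character mixed password would take on the order of 10^14 years to exhaust — and that gap, not any single trick, is what defeats ML-assisted guessing.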
As AI-driven crypto threats evolve rapidly, proactive and AI-powered security solutions become crucial to protecting your digital assets.
Looking ahead, AI's role in cybercrime is likely to escalate, becoming increasingly sophisticated and harder to detect. Advanced AI systems will automate complex cyberattacks like deepfake-based impersonations, exploit smart-contract vulnerabilities instantly upon detection, and execute precision-targeted phishing scams.
To counter these evolving threats, blockchain security will increasingly rely on real-time AI threat detection. Platforms like CertiK already leverage advanced machine learning models to scan millions of blockchain transactions daily, spotting anomalies instantly.
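At its simplest, transaction anomaly detection is statistics: flag values that sit far outside a wallet's normal behavior. The toy sketch below uses a robust (median-based) z-score on an invented transfer history; production systems such as CertiK's use far richer features than raw amounts:

```python
import statistics

# Invented transfer history for one wallet (token amounts)
transfers = [120, 95, 130, 110, 105, 98, 125, 100, 115, 50_000]

med = statistics.median(transfers)
mad = statistics.median([abs(x - med) for x in transfers])  # median absolute deviation

def robust_z(x: float) -> float:
    """Modified z-score; MAD-based, so the outlier can't mask itself by
    inflating the spread the way it would with an ordinary standard deviation."""
    return 0.6745 * (x - med) / mad

flagged = [x for x in transfers if abs(robust_z(x)) > 3.5]
print(flagged)  # [50000]
```

The median-based score matters here: with only ten samples, a single huge transfer inflates an ordinary standard deviation enough to hide itself, while the MAD-based score flags it cleanly.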
As cyber threats grow smarter, these proactive AI systems will become essential in preventing major breaches, reducing financial losses, and combating AI and financial fraud to maintain trust in crypto markets.
Ultimately, the future of crypto security will depend heavily on industry-wide cooperation and shared AI-driven defense systems. Exchanges, blockchain platforms, cybersecurity providers and regulators must collaborate closely, using AI to predict threats before they materialize. While AI-powered cyberattacks will continue to evolve, the crypto community's best defense is staying informed, proactive and adaptive — turning artificial intelligence from a threat into its strongest ally.
Source: https://cointelegraph.com/explained/can-ai-bots-steal-your-crypto-the-rise-of-digital-thieves