
94% of UAE businesses see AI as key driver of growth
Khaled Al Khawaldeh (Abu Dhabi)

A new survey shows near-universal optimism about artificial intelligence among UAE businesses, with 94% of enterprises saying they believe AI will be a key driver of future growth and 47% expecting significant returns on their investments within one to two years.

The YouGov survey, commissioned by enterprise solutions firm SAP, also found that 97% of UAE IT decision-makers are confident in using AI-driven insights to guide critical business decisions. Some 72% of respondents said they were 'very confident', which Marwan Zeineddine, Managing Director of SAP UAE, said reflected growing trust in AI's ability to support strategic planning.

'AI is already delivering tangible business value, and the UAE market is clearly ready to embrace its potential,' Zeineddine said. 'The survey shows that businesses recognise AI as a key driver of growth, but unlocking its full value requires the right strategies.'

According to the survey, 76% of UAE companies are already using industry-specific AI solutions tailored to their operations, a statistic that reflects AI's evolution from a hype-led trend to a digital reality in the workplace. A total of 58% of respondents said they were planning to invest in data consolidation and quality improvement initiatives over the next 12 months, a prerequisite for effective AI application.

The results were released in conjunction with the SAP NOW AI Tour, held on April 16 at the Dubai Expo Centre ahead of Dubai AI Week, which will bring together over 180 speakers across more than 150 sessions to discuss AI best practice.

Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, opened the AI Retreat on Monday, declaring that the country would continue its journey to become a world-leading, AI-centric economy. 'Every single company that's going to be in our digital economy, moving forward, needs to be an AI-first company,' Al Olama said.

In a wide-ranging speech, Al Olama made the case for the responsible and productive implementation of AI, calling on all segments of society, both public and private, to ensure their work is focused first and foremost on improving people's lives. 'For those of you who have flown from around the world and have come to the UAE, you might have used the smart gates in Dubai airport, and that is an exceptional use of artificial intelligence that improves quality of life,' he said. 'That is the AI we want in the UAE, and that is the AI we think is done right. It's AI that is frictionless, it's AI that you don't feel is AI, and it's AI that improves your quality of life overall.'

Al Olama also highlighted that 325 companies had been awarded Dubai's AI Seal, a new certification that distinguishes firms offering secure and responsible AI solutions.
'We want to know who's really building best-in-class AI in the UAE, and who's simply repackaging open-source models,' he said, underscoring the need for transparency and quality in AI innovation.
Related Articles


Tahawul Tech
Future is 'Agentic' and already unfolding
Let me take you on a journey, not into some far-off sci-fi future, but into a tomorrow that's just around the corner. Imagine walking into your workplace and finding that some of your 'colleagues' are no longer human. They're not robots in the traditional sense, but autonomous software agents trained on vast datasets, equipped with decision-making power, and capable of performing economic, civic, and operational tasks at scale. These agents write policies, monitor supply chains, process health records, generate news, and even govern our digital interactions.

This isn't a scene from a movie. A tectonic shift is heading our way, one that will transform how we work, how governments function, and even how communities operate. In this world, digital public infrastructure (DPI) will not be a convenience. It will be a lifeline.

This shift is already progressing in the heart of the Middle East. Ambitious projects like NEOM in Saudi Arabia are exploring how agentic AI can be woven into the fabric of urban life, aiming to build an ecosystem of autonomous agents that redefines how cities are developed and managed.

Sovereignty in the Age of Agents

We like to say, 'Everyone has data.' But the real question is: Where is it? Who controls it? Who governs access to it? In a world run by agents, these are not purely technical questions but ones of power, accountability, and autonomy. A sovereign nation that cannot locate, trust, or manage its data risks losing control. A government that cannot verify what its own agents have learned, or with whom they are communicating, is no longer governing.

To survive and thrive in this new ecosystem, DPI must evolve into Digital Shoring: a foundation for sovereign, trusted, and open environments built on four pillars:

Open Data – because open data depends on trust. It requires clear data lineage, verified provenance, and accountable governance; knowing where your data came from and where it is going is essential for any system that relies on it.

Open Source Software – because critical infrastructure built on black boxes is neither secure nor sovereign.

Open Standards – because without shared protocols, agents can't cooperate, institutions can't interoperate, and governments can't govern.

Open Skills – because the capacity to read a balance sheet, or audit a neural net, shouldn't belong to a privileged few.

This is the backbone of an agentic society that is fair, sovereign, and resilient.

Agentic Intelligence: More Than Just Fancy Tools

Let's talk about what agents actually are – and what they aren't. Imagine I hand a company's financial statement to two readers: a junior analyst and a seasoned economist. Both might understand the numbers, but only one can extract strategic insight. Similarly, agents can read, analyze, and reason, but the quality of their actions depends entirely on the skills they are equipped with. These skills can be trained, acquired, or, most importantly, shared.

In public sector contexts, this presents an extraordinary opportunity. Why should every institution reinvent the same agent? Why can't the skills of a fraud detection agent used in one department be transferred, securely and ethically, to another? Just as people share their expertise, we need infrastructure for sharing agentic capabilities across digital institutions. This is where organizations like the UN can help, by setting standards and supporting everyone through the lens of the Global Digital Compact initiative.
From 'Sovereign Cloud' to 'Sovereign AI Platforms'

Right now, much of the conversation is about keeping data inside national borders. But in the world of agents, that is not enough. What really matters is where and how models are trained, how they are managed, and how we keep them in check. We need Sovereign AI Platforms – akin to the way HR departments manage employees: verifying credentials, ensuring alignment, monitoring performance, and enabling collaboration.

Companies such as Cloudera are developing the scaffolding for such platforms: secure hybrid AI environments, open-source data pipelines, governance-first orchestration layers, and modular LLM serving infrastructure that respects national compliance frameworks. But no company can do this alone. This is a global mission.

Open by Design, Governed by Default

Governments around the world are already realising that private AI cannot be built on public cloud monopolies. Digital identity and agent oversight need to be open and transparent, not hidden, ad hoc, or opaque. So the future must be open by design – in code, in data, and in protocols – and governed by default: digital IDs that authenticate not only humans but also agents and their behavior; knowledge graphs that maintain shared institutional knowledge across systems; and audit trails that document every decision, every inference, and every prompt.

This goes beyond technology. It involves creating a new kind of digital society, designed to empower states, safeguard citizens, and align intelligence with democratic values.

The Path Forward

This transformation will not be easy. It will require bold policy, sustained investment, cross-border cooperation, and, above all, technical leadership grounded in values. But make no mistake: digital cooperation is not optional. It is the condition for sovereignty in an agentic world. Without it, we are left with silos, vendor lock-in, and algorithmic drift. With it, we build a future where intelligence, human or machine, serves the public good.

So let's move beyond the buzzwords. Let's build platforms, protocols, and public goods that are open, modular, and sovereign. Let's treat agents not just as tools, but as members of a digital society in need of governance, trust, and cooperation. And maybe, when we look back at today from the vantage point of tomorrow, we'll remember this moment not as a crisis, but as the moment we chose to govern the future together.

This opinion piece is authored by Sergio Gago Huerta, CTO at Cloudera.
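To make the piece's ideas of agent digital IDs, shareable skills, and audit trails a little more concrete, here is a minimal, purely illustrative Python sketch. Nothing in it comes from the article or describes Cloudera's products; every class, field, and issuer name is a hypothetical assumption, and the hash-chained log is just one simple way such an audit trail might be built.

```python
# Illustrative sketch only: a toy "sovereign agent registry" with a hash-chained,
# append-only audit trail. All names and structures are hypothetical assumptions,
# not any vendor's API or a standard described in the article above.
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """A minimal 'digital ID' for a software agent."""
    agent_id: str
    issuer: str                              # authority that vouches for the agent
    skills: list = field(default_factory=list)


class AuditTrail:
    """Append-only log where each entry is chained to the previous entry's hash,
    so tampering with any past decision becomes detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def record(self, agent_id: str, event: str, payload: dict) -> str:
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "event": event,                  # e.g. "prompt", "inference", "decision"
            "payload": payload,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest


class AgentRegistry:
    """Verifies an agent's credentials before it can act, and logs everything it does."""

    def __init__(self, trusted_issuers: set):
        self.trusted_issuers = trusted_issuers
        self.agents = {}
        self.audit = AuditTrail()

    def register(self, identity: AgentIdentity) -> bool:
        if identity.issuer not in self.trusted_issuers:
            return False                     # unverified credentials: refuse registration
        self.agents[identity.agent_id] = identity
        self.audit.record(identity.agent_id, "registered", {"issuer": identity.issuer})
        return True

    def act(self, agent_id: str, skill: str, prompt: str) -> str:
        identity = self.agents.get(agent_id)
        if identity is None or skill not in identity.skills:
            raise PermissionError(f"{agent_id} is not authorised for '{skill}'")
        # A real platform would invoke a model or workflow here; this toy only logs.
        self.audit.record(agent_id, "prompt", {"skill": skill, "prompt": prompt})
        decision = f"[{skill}] handled: {prompt}"
        self.audit.record(agent_id, "decision", {"output": decision})
        return decision


if __name__ == "__main__":
    registry = AgentRegistry(trusted_issuers={"ministry-of-digital-economy"})
    fraud_agent = AgentIdentity("fraud-detector-01", "ministry-of-digital-economy",
                                skills=["fraud_screening"])
    registry.register(fraud_agent)
    print(registry.act("fraud-detector-01", "fraud_screening", "review invoice #4411"))
    print(f"audit entries: {len(registry.audit.entries)}")
```

Chaining each log entry to the hash of the previous one is a lightweight way to make retroactive edits detectable without a full ledger; a production platform would add persistent storage, cryptographic signatures on agent credentials, and external verification of the chain.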


Arabian Business
AI's impact expands beyond underwriting in insurance sector: Report
Artificial intelligence (AI) continues to reshape the insurance sector, extending its influence beyond underwriting and risk profiling to other critical areas of the insurance value chain, according to a new survey by GlobalData.

Underwriting and risk profiling remain the areas most positively impacted by AI, with 45.8 per cent of industry professionals identifying them as the top beneficiaries. However, this represents a decline of nearly 10 percentage points since 2023, suggesting that insurers are increasingly applying AI in other functions. Claims management and customer service followed, with 20.3 per cent and 17.6 per cent of respondents, respectively, citing these areas as most influenced by AI. Customer service, in particular, has seen notable growth, increasing by 6.2 percentage points since the previous poll. Similarly, AI's role in product development more than tripled in recognition, rising from 1.9 per cent to 7.2 per cent.

Charlie Hutcherson, Associate Insurance Analyst at GlobalData, said insurers are now broadening their AI applications beyond underwriting, despite challenges such as regulatory hurdles, data quality, and fairness in risk models. He highlighted the increasing traction AI has gained in customer service, where automation enables faster triage, more accurate responses, and higher satisfaction rates. Hutcherson also pointed to a rising impact of AI in product development, reflecting insurers' growing focus on trend analysis, identifying coverage gaps, and accelerating speed to market. He described the overall shift as a sign of a 'more mature and diversified approach,' with insurers recognising AI's transformative potential across multiple areas of their business.

With rising competition, insurers face pressure to differentiate themselves by expanding AI capabilities not just in efficiency-driven processes but also in customer-facing and product innovation areas. Hutcherson stressed the need for a holistic deployment of AI, balancing efficiency gains with fairness, transparency, and regulatory compliance. 'Those who can strike this balance will be best positioned to build long-term trust and value,' Hutcherson said.


The National
Ozzy Osbourne AI tribute sparks 'digital resurrection' debate
Fans of Black Sabbath singer Ozzy Osbourne have criticised musician Tom Morello after he shared an AI-generated image of the rock star, who died this week at the age of 76. Osbourne bid farewell to fans earlier this month with a Black Sabbath reunion show in the British city of Birmingham. His death led to tributes from fans and musicians. They included Morello's post, which sparked anger among X users.

The backlash over the stylised image – which included deceased rock stars Lemmy, Randy Rhoads and Ronnie James Dio – centred on what many saw as an exploitative and unsettling trend, with users questioning the ethics of sharing such visuals so soon after Osbourne's death. It is the latest flashpoint in a growing debate: when does using AI to recreate someone's likeness cross the line from tribute to invasion of privacy? While the tools behind these hyper-realistic images are evolving rapidly, the ethical frameworks and legal protections have not yet caught up.

Deepfakes and grief in the digital age

Using AI to recreate the dead or the dying, sometimes referred to as "grief tech" or "digital resurrection", is becoming increasingly common, from fan-made tributes of celebrities to "griefbots" that simulate the voice or personality of a lost loved one. In one example of grief tech, Canadian Joshua Barbeau used Project December, a GPT-3-based chatbot created by Jason Rohrer, in September 2020 to recreate conversations with his late fiancée, eight years after her death. The chatbot's responses were so convincing that she "said" things like: "Of course it is me. Who else could it be? I am the girl that you are madly in love with."

Mental health experts warn that such recreations can profoundly affect the grieving process. "The predictable and comforting responses of AI griefbots can create unrealistic expectations for emotional support, which could impact a person's ability to build healthy relationships in the future," said Carolyn Yaffe, a cognitive behaviour therapist at Medcare Camali Clinic in Dubai. "Some people find comfort and a sense of connection through them. In contrast, others might face negative effects, like prolonged denial, emotional pain, or even feelings of paranoia or psychosis."

Interacting with AI likenesses can blur the lines between memory and reality, potentially distorting a person's emotional recovery, Ms Yaffe said. "These tools may delay acceptance and create a space where people stay connected to digital surrogates instead of moving on," she added. "Grief doesn't fit into neat algorithms."

Lack of legal safeguards

There is limited legal protection against these practices. In the Middle East, specific laws around AI-generated likenesses are still emerging. Countries including the UAE and Saudi Arabia address deepfakes under broader laws related to cyber crimes, defamation, or personal data protection, but there are still no clear regulations dealing with posthumous image rights or the AI-based recreation of people. Most laws focus on intent to harm, rather than on consent or digital legacy after death.

In the UK, for example, there are no posthumous personality or image rights. Some US states, including California and New York, have begun to introduce limited protections, while others do not offer any. In China, draft legislation has begun to address AI deepfakes. Denmark, however, has been a pioneer on the issue, proposing a law that would grant people copyright-like control over their image, voice and likeness.
The legislation, expected to pass this year, would allow Danish people to demand the removal of unauthorised deepfake content and seek civil damages, even posthumously, marking the first time such protections would be implemented in Europe. "Copyright does not protect someone's appearance or voice," said Andres Guadamuz, a reader in intellectual property law at the University of Sussex. "We urgently need to reform image and personality rights to address unauthorised AI depictions, particularly for vulnerable individuals, including the deceased or critically ill, where dignity, consent, and misuse risks are paramount."

Consent, culture and control

Ethical concerns about recreating the image or voice of someone who is critically ill or dead go beyond legal frameworks. Arda Awais, co-founder of UK-based digital rights collective Identity 2.0, believes that, even when AI tributes are carried out with good intentions, they carry significant risks. "Even with consent from the deceased, there could be ways a likeness is used which might not be 100 per cent in line with someone's wishes, too. Or how its use evolves," Ms Awais said. She added that a one-size-fits-all approach may not be practical across different cultures, emphasising the need for more inclusive and diverse conversations when establishing ethical standards.

While some families or individuals may welcome AI tributes as a means of preserving memories, others may view them as exploitative or harmful, particularly when they involve celebrities, whose images are frequently recycled without their permission. "Grief is such a personal experience," Ms Yaffe said. "For some, griefbots might provide a moment of relief. But they should be seen as a bridge, not the final destination."

Experts warn that AI should never replace the emotional labour of mourning or the human connections that aid the healing process. "AI-generated responses can completely miss the point, not because the technology is harmful, but because it lacks the essential quality that grief requires – humanity," Ms Yaffe said.