
Egypt accelerates AI adoption with new strategy and ecosystem engagement
Dr. Amr Talaat, Egypt's Minister of Communications and Information Technology, reaffirmed the government's commitment to AI and its role in driving digital transformation. He highlighted Egypt's National AI Strategy (2025-2030), built on six key pillars:
Computing infrastructure to support AI model training.
Data governance ensuring AI accessibility and responsible use.
AI-powered systems for real-world applications.
AI talent and capacity-building to meet market needs.
Regulatory frameworks for AI ethics and policies.
Ecosystem growth connecting startups, enterprises, and investors.
He emphasized the ministry's commitment to regular engagement with stakeholders across the ICT sector to address their needs, and highlighted AI's expanding role across industries and the forum's aim to foster dialogue within Egypt's AI ecosystem.
"We are committed to enhancing Egypt's AI ecosystem by fostering collaboration, expanding our talent pool, and ensuring a regulatory framework that enables innovation," Dr. Talaat stated.
Bridging the Gap Between AI Talent and Market Needs
Egypt is scaling AI talent development to fuel AI innovation and enterprise adoption. Ahmed El-Zaher, CEO of ITIDA, highlighted the agency's specialized training programs in AI coding, MLOps, and Responsible AI governance, led by ITIDA's Software Engineering Competence Center (SECC).
"At ITIDA, we are committed to fostering a thriving AI ecosystem by connecting stakeholders, equipping talent with cutting-edge skills, and supporting startups to scale their innovations," El-Zaher said.
"By bringing together industry leaders, investors, and policymakers, we aim to bridge the gap between AI innovation and real-world applications, positioning Egypt as a competitive AI hub," he added.
AI Startup Investment and Market Expansion
Dr. Hoda Baraka, Advisor to the Minister for AI, emphasized the government's efforts in building AI capabilities and integrating AI solutions into key sectors.
"Since 2019, Egypt has been committed to developing its AI ecosystem through capacity building and policy development. Our updated AI strategy (2025-2030) focuses on expanding AI applications, upskilling, and ensuring that AI solutions address national challenges across industries such as healthcare, finance, and agriculture," she stated.
The event featured insights from 500 Global, which has invested in over 65 Egyptian startups and continues to identify AI-driven business opportunities. Amal Enan, Managing Partner at 500 Global, underscored the firm's commitment to scaling AI startups in Egypt. "Egypt's AI ecosystem is growing rapidly, and we see tremendous potential in startups integrating AI into their solutions. By fostering collaboration between investors, startups, and government stakeholders, we can unlock new opportunities and scale AI-driven businesses," Enan said.
She also noted that 157 startups are currently participating in 500 Global's accelerator programs, many of which focus on AI-driven innovations.
The event featured a session on AI-powered innovations, moderated by Dr. Haitham Hamza of ITIDA's SECC. Panelists from Baheya Foundation, e& Egypt, and CIB shared insights on AI's role in enhancing services, cybersecurity, and healthcare.
Another session, "Betting on AI," led by Amal Enan of 500 Global, gathered investors from Synapse Analytics, Intella, Tektonik Ventures, and Algebra Ventures to discuss AI investment trends and scaling challenges.
Additionally, the Applied Innovation Center of MCIT is developing AI-powered solutions in agriculture, education, healthcare, and legal sectors, demonstrating the real-world impact of AI research and development.
With plans to train 30,000 AI specialists, support 250 AI-driven companies, and expand AI awareness across society, Egypt is positioning itself as a leading AI innovation hub in the region.
With its well-established position as a leading global delivery hub for high-end digital and technology services, Egypt continues to attract major investments in AI and innovation. The country's deep talent pool, cost competitiveness, and strong government support make it an ideal destination for enterprises looking to scale AI-driven solutions and digital services.


Tahawul Tech
8 hours ago
Future is 'Agentic' and already unfolding
Let me take you on a journey, not into some far-off sci-fi future, but into a tomorrow that's just around the corner. Imagine walking into your workplace and finding that some of your 'colleagues' are no longer human. They're not robots in the traditional sense, but autonomous software agents trained on vast datasets, equipped with decision-making power, and capable of performing economic, civic, and operational tasks at scale. These agents write policies, monitor supply chains, process health records, generate news, and even govern our digital interactions.

This isn't a scene from a movie. A tectonic shift is heading our way, one that will transform how we work, how governments function, and even how communities operate. In this world, digital public infrastructure (DPI) will not be a convenience. It will be a lifeline.

This shift is already progressing in the heart of the Middle East. Ambitious projects like NEOM in Saudi Arabia are exploring how agentic AI can be woven into the fabric of urban life. They aim to build an ecosystem of autonomous agents that redefines how cities are developed and managed.

Sovereignty in the Age of Agents

We like to say, 'Everyone has data.' But the real question is: Where is it? Who controls it? Who governs access to it? In a world run by agents, these are not purely technical questions but ones of power, accountability, and autonomy. A sovereign nation that cannot locate, trust, or manage its data risks losing control. A government that cannot verify what its own agents have learned, or with whom they are communicating, is no longer governing.

To survive and thrive in this new ecosystem, DPI must evolve into Digital Shoring: a foundation for sovereign, trusted, and open environments built on four pillars:

Open Data – depends on trust. It requires clear data lineage, verified provenance, and accountable governance. Knowing where your data came from and where it's going is essential for any system that relies on it.
Open Source Software – because critical infrastructure built on black boxes is neither secure nor sovereign.
Open Standards – because without shared protocols, agents can't cooperate, institutions can't interoperate, and governments can't govern.
Open Skills – because the capacity to read a balance sheet, or audit a neural net, shouldn't belong to a privileged few.

This is the backbone of an agentic society that is fair, sovereign, and resilient.

Agentic Intelligence: More Than Just Fancy Tools

Let's talk about what agents actually are – and what they aren't. Imagine I hand a company's financial statement to two readers: a junior analyst and a seasoned economist. Both might understand the numbers, but only one can extract strategic insight. Similarly, agents can read, analyze, and reason. But the quality of their actions depends entirely on the skills they are equipped with. These skills can be trained, acquired, or, most importantly, shared.

In public sector contexts, this presents an extraordinary opportunity. Why should every institution reinvent the same agent? Why can't the skills of a fraud-detection agent used in one department be transferred, securely and ethically, to another? Just as people share their expertise, we need infrastructure for sharing agentic capabilities across digital institutions. This is where organizations like the UN can help, by setting standards and guiding everyone through the lens of the Global Digital Compact initiative.

From 'Sovereign Cloud' to 'Sovereign AI Platforms'

Right now, a lot of the talk is about keeping data inside national borders. But in the world of agents, that is not enough. What really matters is where and how models are trained, how they are managed, and how we keep them in check. We need Sovereign AI Platforms – akin to the way HR departments manage employees: verifying credentials, ensuring alignment, monitoring performance, and enabling collaboration.
Companies such as Cloudera are developing the scaffolding for such platforms: secure hybrid AI environments, open-source data pipelines, governance-first orchestration layers, and modular LLM serving infrastructure that respects national compliance frameworks. But no company can do this alone. This is a global mission.

Open by Design, Governed by Default

Governments around the world are already realising that private AI cannot be built on public cloud monopolies. Digital identity and agent oversight need to be open and transparent, not hidden, ad hoc, or opaque. So the future must be open by design – in code, in data, and in protocols – and governed by default: from digital IDs that authenticate not only humans but also agents and their behavior, to full knowledge graphs that maintain shared institutional knowledge across systems, together with audit trails that document every decision, every inference, and every prompt.

This goes beyond technology. It involves creating a new kind of digital society that is designed to empower states, safeguard citizens, and align intelligence with democratic values.

The Path Forward

This transformation will not be easy. It will require bold policy, sustained investment, cross-border cooperation, and, above all, technical leadership grounded in values. But make no mistake: digital cooperation is not optional. It is the condition for sovereignty in an agentic world. Without it, we are left with silos, vendor lock-in, and algorithmic drift. With it, we build a future where intelligence, human or machine, serves the public good.

So let's move beyond the buzzwords. Let's build platforms, protocols, and public goods that are open, modular, and sovereign. Let's treat agents not just as tools, but as members of a digital society in need of governance, trust, and cooperation.
And maybe, when we look back at today from the vantage point of tomorrow, we'll remember this moment not as a crisis, but as the moment we chose to govern the future together. This opinion piece is authored by Sergio Gago Huerta, CTO at Cloudera.


Arabian Business
10 hours ago
AI's impact expands beyond underwriting in insurance sector: Report
Artificial intelligence (AI) continues to reshape the insurance sector, extending its influence beyond underwriting and risk profiling to other critical areas of the insurance value chain, according to a new survey by GlobalData.

Underwriting and risk profiling remain the areas most positively impacted by AI, with 45.8 per cent of industry professionals identifying them as the top beneficiaries. However, this represents a decline of nearly 10 percentage points since 2023, suggesting that insurers are increasingly applying AI in other functions. Claims management and customer service followed, with 20.3 per cent and 17.6 per cent of respondents, respectively, citing these areas as most influenced by AI. Customer service, in particular, has seen notable growth, increasing by 6.2 percentage points since the previous poll. Similarly, AI's role in product development more than tripled in recognition, rising from 1.9 per cent to 7.2 per cent.

Charlie Hutcherson, Associate Insurance Analyst at GlobalData, said insurers are now broadening their AI applications beyond underwriting, despite challenges such as regulatory hurdles, data quality, and fairness in risk models. He highlighted the increasing traction AI has gained in customer service, where automation enables faster triage, more accurate responses, and higher satisfaction rates. Hutcherson also pointed to the rising impact of AI in product development, reflecting insurers' growing focus on trend analysis, identifying coverage gaps, and accelerating speed to market.

He described the overall shift as a sign of a 'more mature and diversified approach,' with insurers recognising AI's transformative potential across multiple areas of their business. With rising competition, insurers face pressure to differentiate themselves by expanding AI capabilities not just in efficiency-driven processes but also in customer-facing and product innovation areas.
Hutcherson stressed the need for a holistic deployment of AI, balancing efficiency gains with fairness, transparency, and regulatory compliance. "Those who can strike this balance will be best positioned to build long-term trust and value," Hutcherson said.


The National
12 hours ago
Ozzy Osbourne AI tribute sparks 'digital resurrection' debate
Fans of Black Sabbath singer Ozzy Osbourne have criticised musician Tom Morello after he shared an AI-generated image of the rock star, who died this week at the age of 76. Osbourne bid farewell to fans earlier this month with a Black Sabbath reunion show in the British city of Birmingham. His death led to tributes from fans and musicians. They included Morello's post, which sparked anger among X users.

The backlash over the stylised image – which included deceased rock stars Lemmy, Randy Rhoads and Ronnie James Dio – centred on what many saw as an exploitative and unsettling trend, with users questioning the ethics of sharing such visuals so soon after Osbourne's death. It is the latest flashpoint in a growing debate: when does using AI to recreate someone's likeness cross the line from tribute to invasion of privacy? While the tools behind these hyper-realistic images are evolving rapidly, the ethical frameworks and legal protections have not yet caught up.

Deepfakes and grief in digital age

Using AI to recreate the dead or the dying, sometimes referred to as "grief tech" or "digital resurrection", is becoming increasingly common, from fan-made tributes of celebrities to "griefbots" that simulate the voice or personality of a lost loved one. In one example of grief tech, Canadian Joshua Barbeau used Project December, a GPT-3-based chatbot created by Jason Rohrer, to recreate conversations with his fiancee in September 2020, eight years after her death. The chatbot's responses were so convincing that she "said" things like: "Of course it is me. Who else could it be? I am the girl that you are madly in love with."

Mental health experts warn that such recreations can profoundly affect the grieving process.
"The predictable and comforting responses of AI griefbots can create unrealistic expectations for emotional support, which could impact a person's ability to build healthy relationships in the future," said Carolyn Yaffe, a cognitive behaviour therapist at Medcare Camali Clinic in Dubai. "Some people find comfort and a sense of connection through them. In contrast, others might face negative effects, like prolonged denial, emotional pain, or even feelings of paranoia or psychosis."

Interacting with AI likenesses can blur the lines between memory and reality, potentially distorting a person's emotional recovery, Ms Yaffe said. "These tools may delay acceptance and create a space where people stay connected to digital surrogates instead of moving on," she added. "Grief doesn't fit into neat algorithms."

Lack of legal safeguards

There is limited legal protection against these practices. In the Middle East, specific laws around AI-generated likenesses are still emerging. Countries including the UAE and Saudi Arabia address deepfakes under broader laws related to cyber crimes, defamation, or personal data protection. But there are still no clear regulations dealing with posthumous image rights or the AI-based recreation of people. Most laws focus on intent to harm, rather than on consent or digital legacy after death.

In the UK, for example, there are no posthumous personality or image rights. Some states in the US, including California and New York, have begun to introduce limited protections, while others do not offer any. In China, draft legislation has begun to address AI deepfakes. Denmark, however, has been a pioneer on the issue, proposing a law that would grant people copyright-like control over their image, voice and likeness.
The legislation, expected to pass this year, would allow Danish people to demand the removal of unauthorised deepfake content and seek civil damages, even posthumously, marking the first time such protections would be implemented in Europe. "Copyright does not protect someone's appearance or voice," said Andres Guadamuz, a reader in intellectual property law at the University of Sussex. "We urgently need to reform image and personality rights to address unauthorised AI depictions, particularly for vulnerable individuals, including the deceased or critically ill, where dignity, consent, and misuse risks are paramount."

Consent, culture and control

Ethical concerns about recreating the image or voice of someone who is critically ill or dead go beyond legal frameworks. Arda Awais, co-founder of UK-based digital rights collective Identity 2.0, believes that, even when AI tributes are carried out with good intentions, they carry significant risks. "Even with consent from the deceased, there could be ways a likeness is used which might not be 100 per cent in line with someone's wishes, too. Or how its use evolves," Ms Awais said. She added that a one-size-fits-all approach may not be practical across different cultures, emphasising the need for more inclusive and diverse conversations when establishing ethical standards.

While some families or individuals may welcome AI tributes as a means to preserve memories, others may view them as exploitative or harmful, particularly when they involve celebrities, whose images are frequently recycled without their permission. "Grief is such a personal experience," Ms Yaffe said. "For some, griefbots might provide a moment of relief. But they should be seen as a bridge, not the final destination." Experts warn that AI should never replace the emotional labour of mourning or the human connections that aid the healing process.
"AI-generated responses can completely miss the point, not because the technology is harmful, but because it lacks the essential quality that grief requires – humanity," Ms Yaffe said.