
Latest news with #techleaders

Superintelligent AI is coming and Saudi Arabia is ready

Arab News

4 days ago



When people hear the term 'artificial intelligence,' they typically think of chatbots and digital assistants. But what's coming next could significantly impact the digital economy in the Middle East and beyond. What we are referring to is superintelligent AI. And if global tech leaders are right, it could arrive in fewer than five years. But what does that involve? How is it different from today's AI? And what are the implications for a region focused on leading in technology and innovation?

Most people know AI through generative tools like ChatGPT, Gemini, and DALL-E — systems that can write, code, and produce art. While powerful, these tools are best suited to narrow tasks and rely on patterns found in existing data. The new challenge is to create artificial general intelligence, or AGI — AI that thinks and acts like a human across a wide range of tasks. In short, AGI could learn new subjects, solve unfamiliar problems creatively, and adapt its behavior much like a human mind.

Artificial superintelligence, or ASI, would go even further. It would outperform the most intelligent humans in virtually every domain, from science and economics to emotional intelligence. Not just faster or smarter, but capable of things humans can't yet do. The foundations are already in place: faster computers, improved neural systems, and multi-agent reasoning systems.

The Middle East is increasingly gearing up for the change — with Saudi Arabia at the forefront. In the Kingdom, the focus has shifted from simply using AI to developing and managing homegrown AI systems. Earlier this month, Saudi Arabia launched Humain, a new initiative backed by the Public Investment Fund. The project has ambitious goals: to build robust AI infrastructure, develop local cloud solutions, and create a powerful multimodal language model in Arabic.

Because superintelligence will require adapting to local contexts, respecting cultural values, and maintaining control over data and systems, Saudi Arabia aims not only to use AI, but to shape it as a platform for future generations. Humain will be powered by more than 18,000 Blackwell GPUs from Nvidia. AMD and Microsoft will help fund research on AI training systems and chip architecture, while Amazon Web Services plans to invest $5 billion to build an AI Zone in the Kingdom. These partnerships are more than transactions — they are building blocks for long-term technological strength.

As the world prepares for the emergence of superintelligence, we'll need more computing power, deeper government coordination, and stronger cross-border collaboration. Saudi Arabia is making its move now, ahead of the curve.

But what will superintelligent AI mean for the broader Middle East economy? It could accelerate four major transformations, starting with more intelligent governance and rapid infrastructure development. Such systems could analyze countless policies in real time and improve sectors such as traffic management, public health, and economic planning. This kind of capability could help Saudi Arabia achieve its Vision 2030 goals more quickly and accurately.

Superintelligent AI will also unlock personalized learning. Imagine AI tutors that adapt to each student's learning style, cultural context, and emotional state. With superintelligence, it's possible to deliver large-scale, individualized education, building a generation of skilled experts across fields.

The scientific potential is even greater. In areas like medicine, clean energy, and materials science, AI could enable breakthroughs, whether in drug discovery, hydrogen technologies, or advanced materials. These applications align closely with Saudi Arabia's growing investments in biotechnology and sustainable energy.

New industries will also emerge. With superintelligent systems, we could see autonomous legal platforms, AI-designed cities, and travel driven by emotional experiences. NEOM may serve as a testing ground for many of these innovations.

Regional leadership in AI governance must also grow. The future is not guaranteed to be positive. Superintelligence is unlike any tool humanity has ever created. Without clear rules and alignment, it could harm economies, displace jobs, or deepen inequality. This is why governance, alignment, and ethics must evolve in parallel with technological progress. The region is well placed to lead not only in adoption but in shaping the frameworks around it. As Saudi minister Abdullah Al-Swaha recently said: 'Instead of only following standards, we should help create them.'

In the end, readiness provides the edge. Superintelligent AI is approaching quickly. The nations that invest early, think boldly, and manage wisely will have a real opportunity to leap ahead in this century. Saudi Arabia is demonstrating what it means to think ahead. From building sovereign AI systems to securing large-scale infrastructure deals, it is laying the foundation for a future where prosperity is driven not by oil or labor, but by intelligence. If superintelligence emerges by 2028, the Middle East will not simply be a witness — it will be a leader.

• Yousef Khalili is the global chief transformation officer and CEO for the Middle East and Africa at Quant, a company developing advanced digital employee technology aimed at redefining the future of customer experience.

For Clues On AI's Impact On Jobs, Watch Today's Tech Jobs

Forbes

7 days ago



We know artificial intelligence – particularly generative and agentic AI – is reshaping jobs. But the exact impact is still a great unknown. The effects we're already seeing in tech jobs – many of which are at the forefront of the generative and agentic AI revolution – may provide clues to where things are going: a crucible for the AI-shaped job market of the near future.

For starters, there doesn't appear to be evidence that AI is sweeping away jobs. There's even some evidence that it may help increase, rather than reduce, jobs, particularly for technology occupations. There has been no noticeable impact on graduates starting out in the job market, and there's even been growth in white-collar jobs, an analysis published in The Economist shows. The researchers cite the relative immaturity of AI development – only 10% of companies use AI at an enterprise scale – and its primary role as a productivity platform.

In addition, looking more closely at tech roles, roughly seven in 10 technology leaders (69%) surveyed by one major analyst firm indicate they're planning to increase headcounts – at least within technology areas – to build genAI capabilities.

Technology jobs are the first category being reshaped by AI, a recent study out of the Federal Reserve Bank of Atlanta confirmed. The category of 'computer and mathematical occupations' saw demand for AI skills grow from 2% of postings in 2010 to 12% in 2024. Other occupational groups, including architecture and engineering; business and financial operations; and management, are also seeing increasing proportions of AI within their job descriptions. AI and genAI 'are already changing the set of skills employers are demanding from the workforce,' the Fed study suggested, with the percentage of job postings requiring AI-related skills increasing steadily. Demand for AI skills is rising not just in computer and mathematical occupations but in a broader set of occupations, which the researchers attribute to AI's increasing technical capability to perform more tasks.

Industry observers point to technology roles as examples of how jobs are evolving into hybrid mixes of human, genAI, and agentic AI-led tasks. Notably, the latest evolution of AI – agents – is poised to take on more tasks within a range of jobs. AI agents 'can take a goal, break it into subtasks, and work on finding the best solution for these tasks individually,' said Andreas Welsch, founder and chief AI strategist at Intelligence Briefing. 'Agents have access to additional information, tools, and resources – for example, code repositories, APIs, or websites. They can take on specialized roles such as an architect, software engineer or QA tester, and work on tasks within the typical scope of that role.'

This doesn't mean AI will pick up tasks and business will go on as usual. 'Firstly, it is a complete paradigm change in how we use and interact with software systems,' said Chris Burchett, senior vice president for genAI at Blue Yonder. 'Secondly, it is evolving at an unprecedented pace never before seen.'

To break in and thrive in such a world, Burchett advises 'not to wait. You have to get started using the technology immediately. Second, you must have staying power to evolve with the changes because that is the only way to keep up and learn the unique capabilities AI unlocks. Third, you need an abstraction layer that gives you the technical agility to move across different models, frameworks and providers with minimal rework.'

At the same time, the role of AI has limits. AI might initially perform at a junior coder's level 'but still requires human input and oversight,' Welsch pointed out. 'This means that human software developers will still need to define the project, its objectives and personas, and the expected behavior of an application. Users will need to acquire this knowledge as well as learn how to communicate with agentic AI systems to derive the most relevant results quickly.'

While large language models have been trained on historic data and are able to generate code, 'this code is not always the most efficient implementation of a solution,' he added. Importantly, 'just because the AI-generated code is functional doesn't automatically make it secure. Additional tools or humans in the loop are needed to conduct security reviews of the generated code to mitigate any loopholes.'

AI in general 'has the opportunity to amplify – not eliminate – human talent,' said Gajen Kandiah, AI and enterprise transformation leader and former president and COO of Hitachi Digital. 'This is not about whether AI replaces developers. It is about how the role of developers – and the systems they create – is being redefined. The truth, as with most meaningful shifts, sits in the nuance. We will not see engineers vanish. Instead, they'll evolve into AI trainers, strategic integrators, and problem-solvers.'

One thing is clear, Kandiah continued. 'The best developers will not be those who write the most lines of code – but those who design and deliver the most impact by partnering with intelligent systems.' This applies to all workers as well, as the ability to work with AI to create new approaches to problems and opportunities will be a necessity in the months and years ahead.

Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican

CNN

20-06-2025



Pope Leo XIV says tech companies developing artificial intelligence should abide by an 'ethical criterion' that respects human dignity. AI must take 'into account the well-being of the human person not only materially, but also intellectually and spiritually,' the pope said in a message sent Friday to a gathering on AI attended by Vatican officials and Silicon Valley executives.

'No generation has ever had such quick access to the amount of information now available through AI,' he said. But 'access to data — however extensive — must not be confused with intelligence.' He also expressed concern about AI's impact on children's 'intellectual and neurological development,' writing that 'society's well-being depends upon their being given the ability to develop their God-given gifts and capabilities.'

That statement from the pope came on the second day of a two-day meeting for tech leaders in Rome to discuss the societal and ethical implications of artificial intelligence. The second annual Rome Conference on AI was attended by representatives from AI leaders including Google, OpenAI, Anthropic, IBM, Meta and Palantir, along with academics from Harvard and Stanford and representatives of the Holy See.

The event comes at a somewhat fraught moment for AI, with the rapidly advancing technology promising to improve worker productivity, accelerate research and eradicate disease, but also threatening to take human jobs, produce misinformation, worsen the climate crisis and create even more powerful weapons and surveillance capabilities. Some tech leaders have pushed back against regulations intended to ensure that AI is used responsibly, which they say could hinder innovation and global competition.

'In some cases, AI has been used in positive and indeed noble ways to promote greater equality, but there is likewise the possibility of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression,' Leo said in his Friday statement.

Although it doesn't have any direct regulatory power, the Vatican has been increasingly vocal on AI policy, seeking to use its influence to push for ethical technological developments. In 2020, the Vatican hosted an event where tech leaders, EU regulators and the late Pope Francis discussed 'human-centric' AI, which resulted in the Rome Call for AI Ethics, a document outlining ethical considerations for the development of AI algorithms. IBM, Microsoft and Qualcomm were among the signatories who agreed to abide by the document's principles.

Two years later, Francis called for an international treaty to regulate the use of AI and prevent a 'technological dictatorship' from emerging. In that statement — which came months after an AI-generated image of Francis in a puffy coat went viral — he raised concerns about AI weapons and surveillance systems, as well as election interference and growing inequality. In 2024, he became the first pope to participate in the G7 summit, laying out the ethical framework for the development of AI that he hoped to get big tech companies and governments on board with.

When Pope Leo XIV became leader of the Catholic Church last month, he signaled that his papacy would follow in Francis' footsteps on topics of church reform and engaging with AI as a top challenge for working people and 'human dignity.' The new pontiff chose to name himself after Pope Leo XIII, who led the church during the industrial revolution and issued a landmark teaching document supporting workers' rights to a fair wage and to form trade unions. With AI posing a revolution similar to that of the 19th century, Leo has suggested that the church's social teaching — which offers a framework for engaging with politics and business — be applied to new tech advancements.

'In our own day, the church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor,' Leo said in that May address.

The Friday event, which took place inside the Vatican's apostolic palace, included a roundtable discussion on AI ethics and governance. Among those present from the Vatican side were Archbishop Vincenzo Paglia, who has engaged with business leaders on AI, and Archbishop Edgar Peña Parra, who holds the position of 'sostituto' (substitute) in the Vatican, a papal chief of staff equivalent.

Earlier this week, Leo referenced AI during a speech to Italian bishops, talking about 'challenges' that 'call into question' the respect for human dignity. 'Artificial intelligence, biotechnologies, data economy and social media are profoundly transforming our perception and our experience of life,' he told them. 'In this scenario, human dignity risks becoming diminished or forgotten, substituted by functions, automatism, simulations. But the person is not a system of algorithms: he or she is a creature, relationship, mystery.'

A key issue at Friday's event was AI governance: how the companies building AI should balance their need to generate profit and their responsibilities to shareholders with the imperative not to create harm in the world. That conversation is especially pressing at a moment when the United States is on the brink of kneecapping the enforcement of much of the limited regulation on AI that exists, with a provision in President Donald Trump's proposed agenda bill that would prohibit the enforcement of state laws on AI for 10 years.

In his statement, Leo called on tech leaders to acknowledge and respect 'what is uniquely characteristic of the human person' as they seek to develop an ethical framework for AI development.

