
C-suite overconfidence in AI could prove bad for business, says survey
In an ever-changing geopolitical and economic climate, the key to business success appears to be the ability to adapt.
Add to that the fast-paced world of tech and AI, and if you're not ready for change, you'll get left behind.
'AI is [such] an evolving ecosystem that companies are thinking through introducing a level of agility in their decision-making to give them the capacity to take on what AI is going to bring our way over the next while, without really, completely knowing what the answer is,' Jad Shimaly, global managing partner of client service at EY, told Euronews.
But as companies prepare for increased AI adoption and innovation, are their consumers happy with the risks they're taking?
What does responsible AI look like?
The true definition of responsible AI appears to be somewhat up for debate, and that could have serious business consequences, according to EY's latest Responsible AI Pulse survey.
'There seems to be a decent gap between C-suite expectations and understanding of what responsible AI [is] and what the risk of AI is, and what customer and consumer expectations are,' Jad explained.
The survey highlighted that, in organisations that have already fully integrated AI, many C-suite leaders 'have misplaced confidence in the strength of their responsible AI practices and their alignment with consumer concerns'.
This could erode consumer trust and blunt a company's competitive edge, a risk that is only set to grow as technologies such as agentic AI become more prevalent.
'CEOs stand out as an exception — showing greater concern around responsible AI, a viewpoint that's more closely aligned with consumer sentiment,' the report noted, however.
'One in five CEOs think that they have AI risk under control, while a third of their C-suite think that they have AI risks under control,' Jad told Euronews.
'So the CEOs seem to be a bit less comfortable that AI risk is being fully understood and mitigated than their C-suite.'
This difference in perception between CEOs and their senior colleagues is potentially linked to lower awareness levels or lower perceived accountability.
'It may also exhibit an imperfect understanding of the true potential of AI,' the survey added.
Jad was confident, however, that as regulations around responsible AI become clearer and more harmonised across the world, consumers will feel more assured that risks are being sufficiently mitigated.
Perception vs. Reality
Topics that incited different levels of concern between C-suite individuals and consumers included AI-generated misinformation, the use of AI to manipulate individuals, and AI's impact on vulnerable segments of society.
Neither side was hugely concerned by the idea of job losses, the topic on which the two groups aligned most closely.
The survey also found that companies still in the process of integrating AI were much more closely aligned with the level of concern of their consumers, compared to those who had already fully integrated AI.
'Just over half (51%) of [C-suite] in this group believe they're well aligned — compared with 71% of [C-suite] in organisations where AI is already fully integrated across the business,' the survey stated.
Watch the video above to see more from the interview with EY's Jad Shimaly.

Related Articles


Euronews · 2 hours ago
GenAI job postings rise across Europe: Which countries lead the way?
Recent figures reveal a sharp rise in the share of job postings mentioning generative AI (GenAI) over the past two years across Europe, North America, and Australia. 'Nearly every job will be impacted by AI (artificial intelligence) at some point,' said Pawel Adrjan, Director of Economic Research at Indeed. In major European economies, the share of GenAI-related job postings more than doubled in the 12 months to March 2025, according to the global hiring platform Indeed.

What are GenAI jobs?
GenAI jobs refer to roles involving the development, implementation, or oversight of generative artificial intelligence technologies. This could include positions building GenAI features, or roles leveraging this tech to create more efficient processes such as reviewing data, summarising reports, or drafting written or creative materials.

Ireland is leading the way in Europe by a significant margin when it comes to these kinds of jobs. Indeed data shows that, as of 31 March 2025, more than 0.7% of Irish job postings include terms related to GenAI. This is an increase of 204% on the previous year, meaning the share more than tripled in just 12 months. The figure was only 0.02% in the same period in 2023, highlighting a tremendous rise over the past two years.

For comparison, job openings in Ireland for chefs currently represent 1.1% of total postings, while opportunities for lorry drivers and bartenders represent 0.8% and 0.6% respectively. These figures highlight Ireland's position at the forefront of digital innovation in the European labour market.

How has Ireland become a hub for GenAI jobs?
'Ireland's leading presence in GenAI job postings reflects the country's well-established technology sector and its role as a European base for many global firms,' Pawel Adrjan told Euronews Business. 'With a high concentration of tech employers, including major multinationals and a number of start-ups, it's natural we would see a proportionate increase in GenAI roles there too,' he added.

Globally recognised names such as Alphabet, Amazon, Apple, Meta, IBM, Intel, Microsoft, Oracle, Salesforce, and Tencent, among many others, have established significant European operations in Ireland. Adrjan also noted that the steady growth in AI-related roles reflects Ireland's focus on industries like software, financial services, and life sciences, which are increasingly integrating AI tools into their operations.

GenAI job postings surge in Germany, the UK, and France
Several major EU and international markets, including Germany, France, Australia, the US, the UK, and Canada, lag behind Ireland in incorporating GenAI into job roles. In each of these countries, the share of job postings mentioning GenAI remains at roughly 0.3% or below as of late March 2025. However, the share has risen by around 100% or more in these European countries over the past year, highlighting how the job market is evolving, even if still well behind Ireland's 204% increase.

The UK has the highest share of GenAI-related job postings among the three largest European economies, at 0.33% as of 31 March 2025, up 120% from 0.15% the previous year. Germany follows with 0.23% (a 109% annual increase), and France with 0.21% (a 91% increase).

Which jobs most commonly mention GenAI?
GenAI jobs appear across a range of categories. Among the top occupations in Ireland where job postings mention GenAI, mathematics leads by a wide margin. As of March 2025, 14.7% of advertised roles in mathematics referenced GenAI, significantly higher than any other category.
This was followed by software development (4.9%), media & communications (3.9%), architecture (2.4%), and scientific research & development (2.1%). Other fields showing notable GenAI activity include industrial engineering (1.8%), legal (1.7%), marketing (1.6%), medical information (1.5%), and production & manufacturing (0.9%).

Human intelligence remains a strong requirement
Pawel Adrjan explained that in many developed markets, ageing populations are contributing to labour shortages and widening skills gaps. As a result, employers face growing competition for talent and are increasingly turning to skills-first hiring approaches, including the use of AI to expand and enhance their workforce.

While nearly every job will be impacted by AI at some point, Adrjan emphasised that human intelligence remains a key requirement. 'We know that GenAI tools are an excellent resource to enhance efficiencies, but they are currently limited in comparison to human expertise,' he said.

To what extent can GenAI replace jobs?
Joint research by Indeed and the World Economic Forum earlier this year showed that humans will remain an essential part of the global workforce as AI continues to evolve. Indeed analysed over 2,800 work-related skills to assess GenAI's potential to substitute for employees. The findings show that around two-thirds (69%) of these skills are unlikely to be replaced by GenAI, underscoring the continued importance of human expertise in the workplace.

The chart above ranks skills by GenAI's capacity to replace or substitute for them, from 'very low capacity' (hard to replace) to 'high capacity' (easy to replace). AI and Big Data, as well as reading, writing, and mathematics, sit on the 'high capacity' side of the scale; sensory-processing abilities, along with empathy and active listening, sit on the 'very low capacity' side.


France 24 · 6 hours ago
AI is learning to lie, scheme, and threaten its creators
In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models: AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment", appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'
For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules
Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."
Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability", an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.

© 2025 AFP


Euronews · 2 days ago
EU Commission to call on companies to sign AI Code
The European Commission will next week stage a workshop in an effort try to convince companies to sign the Code of Practice on general-purpose AI (GPAI) before it enters into force on 2 August, according to a document seen by Euronews. The Code of Practice on GPAI, a voluntary set of rules, aims to help providers of AI models, such as ChatGPT and Gemini, comply with the EU's AI Act. The final version of the Code was set to come out early May but has been delayed. The workshop, organised by the Commission's AI Office, will discuss the final code of practice, as well as 'benefits of signing the Code', according to the internal document. In September 2024 the Commission appointed thirteen experts to draft the rules, using plenary sessions and workshops to gather feedback. The process has been criticised throughout, by tech giants as well as publishers and rights-holders concerned that the rules violate the EU's Copyright laws. The US government's Mission to the EU sent a letter to the EU executive pushing back against the Code in April, claiming that it stifles innovation. In addition, Meta's global policy chief, Joel Kaplan, said in February that it will not sign the Code because it took issue with the then latest version. An EU official told Euronews in May, that US companies 'are very proactive' and there was sense that 'they are pulling back because of a change in the administration', following the trade tensions between the US and EU. Euronews reported last month that US tech giants Amazon, IBM, Google, Meta, Microsoft and OpenAI have called upon the EU executive to keep its Code 'as simple as possible', to avoid redundant reporting and unnecessary administrative burdens'. A spokesperson for the European Commission previously said the Code will appear before early August, when the rules on GPAI tools enter into force. The Commission will assess companies' intentions to sign the code, and carry out an adequacy assessment with the member states. The EU executive can then decide to formalise the Code through an implementing act. The AI Act – which regulates AI tools according to the risks they pose to society – entered into force gradually last year, however, some provisions will only apply in 2027.