AI Use Is Outpacing Policy and Governance, ISACA Finds


Business Wire · 2 days ago

LONDON--(BUSINESS WIRE)--Nearly three out of four European IT and cybersecurity professionals say staff are already using generative AI at work – up ten points in a year – but just under a third of organisations have put formal policies in place, according to new ISACA research.
The use of AI is becoming more prevalent in the workplace, making it best practice to regulate its use. Yet fewer than a third (31%) of organisations have a formal, comprehensive AI policy in place, highlighting the gap between how widely AI is used and how closely it is governed.
Policies work on two fronts: enhancing activity and protecting businesses
AI is already making a positive impact – for example, over half (56%) of respondents say it has boosted organisational productivity, and 71% report efficiency gains and time savings. Looking ahead, 62% are optimistic that AI will positively impact their organisation in the next year.
Yet that same speed and scale make the technology a magnet for bad actors. Almost two-thirds (63%) are extremely or very concerned that generative AI could be turned against them, while 71% expect deepfakes to grow sharper and more widespread in the year ahead. Despite that, only 18% of organisations are putting money into deepfake-detection tools—a significant security gap. This disconnect leaves businesses exposed at a time when AI-powered threats are evolving fast.
AI has significant promise, but without clear policies and training to mitigate risks, it becomes a potential liability. Robust, role-specific guidelines are needed to help businesses safely harness AI's potential.
'With the EU AI Act setting new standards for risk management and transparency, organisations need to move quickly from awareness to action,' says Chris Dimitriadis, ISACA's Chief Global Strategy Officer. 'AI threats, from misinformation to deepfakes, are advancing rapidly, yet most organisations have not invested in the tools or training to counter them. Closing this risk-action gap isn't just about compliance – it's critical to safeguarding innovation and maintaining trust in the digital economy.'
Education is the way to get the best from AI
But policies are only as effective as the people who understand them – and can confidently put them into practice.
As AI continues to evolve, professionals need to upskill and gain new qualifications: 42% believe they will need to increase their AI skills and knowledge within the next six months to retain their job or advance their career – up eight percentage points on last year. Most (89%) recognise that this will be needed within the next two years.
For more on the 2025 AI pulse poll, visit www.isaca.org/ai-pulse-poll. For ISACA resources on AI, including free content guides as well as training courses and certifications on AI audit and AI security management, visit www.isaca.org/ai.
Notes to Editors
All figures are based on fieldwork conducted by ISACA between 28 March and 14 April 2025, amongst a total of 561 business and IT professionals in Europe. In total, ISACA surveyed more than 3,200 business and IT professionals worldwide.
About ISACA
ISACA® (www.isaca.org) has empowered its community of 185,000+ members with the knowledge, credentials, training and network they need to thrive in fields like information security, governance, assurance, risk management, data privacy and emerging tech. With a presence in more than 190 countries and with nearly 230 chapters worldwide, ISACA offers resources tailored to every stage of members' careers.


Related Articles

Authors call on publishers to limit their use of AI

TechCrunch

an hour ago



In Brief An open letter from authors including Lauren Groff, Lev Grossman, R.F. Kuang, Dennis Lehane, and Gregory Maguire calls on book publishers to pledge to limit their use of AI tools, for example by committing to only hire human audiobook narrators. The letter argues that authors' work has been 'stolen' by AI companies: 'Rather than paying writers a small percentage of the money our work makes for them, someone else will be paid for a technology built on our unpaid labor.' Among other commitments, the authors call for publishers to 'make a pledge that they will never release books that were created by machine' and 'not replace their human staff with AI tools or degrade their positions into AI monitors.' While the initial letter was signed by an already impressive list of writers, NPR reports that another 1,100 signatures were added in the 24 hours after it was initially published. Authors are also suing tech companies over using their books to train AI models, but federal judges dealt significant blows to those lawsuits earlier this week.

Meta hires four more OpenAI researchers, The Information reports

Yahoo

an hour ago



(Reuters) - Meta Platforms is hiring four more OpenAI artificial intelligence researchers, The Information reported on Saturday. The researchers, Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren, have each agreed to join, the report said, citing a person familiar with their hiring. Earlier this week, the Instagram parent hired Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, who were all working in OpenAI's Zurich office, the Wall Street Journal reported. Meta and ChatGPT maker OpenAI did not immediately respond to a Reuters request for comment. The company has recently been pushing to hire more researchers from OpenAI to join chief executive Mark Zuckerberg's superintelligence efforts. Reuters could not immediately verify the report.

OpenAI Loses Four Key Researchers to Meta

WIRED

an hour ago



Jun 28, 2025 4:16 PM

Mark Zuckerberg has been working to poach talent from rival labs for his new superintelligence team.

Four OpenAI researchers are leaving the company to go to Meta, two sources confirm to WIRED. Shengjia Zhao, Shuchao Bi, Jiahui Yu, and Hongyu Ren have joined Meta's superintelligence team. Their OpenAI Slack profiles have been deactivated. The Information first reported on the departures.

It's the latest in a series of aggressive moves by Mark Zuckerberg, who is racing to catch up to OpenAI, Anthropic and Google in building artificial general intelligence. Earlier this month, OpenAI CEO Sam Altman said that Meta has been making 'giant offers' to OpenAI staffers with '$100 million signing bonuses.' He added that 'none of our best people have decided to take them up on that.' A source at OpenAI confirmed the offers.

Hongyu Ren was OpenAI's post-training lead for the o3 and o4-mini models, along with the open source model that's set to be released this summer, sources say. Post-training is the process of refining a model after it has been trained on a primary dataset. Shengjia Zhao is highly skilled in deep learning research, according to another source. He joined OpenAI in the summer of 2022 and helped build the startup's GPT-4 model. Jiahui Yu did a stint at Google DeepMind before joining OpenAI in late 2023. Shuchao Bi was a manager of OpenAI's multimodal models.

The departures from OpenAI come shortly after the company lost three researchers from its Zurich office, the Wall Street Journal reported. OpenAI and Meta did not immediately respond to a request for comment.

This is a developing story. Please check back for updates.
