Systems Facing Increased Risk of AI-Enhanced Cyber Attacks, Warns NCSC

Epoch Times | 15-05-2025
By 2027, artificial intelligence tools will significantly increase the ability of malicious actors to find and exploit vulnerabilities in our systems, the National Cyber Security Centre (NCSC) has warned.
The NCSC, which is the UK's technical authority for cyber security and part of the GCHQ intelligence agency, made the warning in a newly published report.
It also warned that the window between a vulnerability being discovered and being exploited has already narrowed to just days, and that AI is expected to shorten it even further, leaving those working to keep networks secure with less time to respond.
The cyber security specialists said that malicious actors, such as cyber criminals and those acting on behalf of hostile states, are very likely already using AI to enhance their existing tactics to penetrate systems, including through victim reconnaissance, vulnerability research, and malware generation.
Critical Systems
If cyber security does not keep pace with these advancements, 'there is a realistic possibility of critical systems becoming more vulnerable to advanced threat actors by 2027,' the report said.
The NCSC says that, in order to protect themselves, organisations must implement advanced strategies to counter AI-driven attacks, including continuous monitoring and the use of AI-based defence systems.
Paul Chichester, NCSC director of operations, said: 'We know AI is transforming the cyber threat landscape, expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities.
'While these risks are real, AI also presents a powerful opportunity to enhance the UK's resilience and drive growth—making it essential for organisations to act.
'Organisations should implement strong cyber security practices across AI systems and their dependencies and ensure up-to-date defences are in place.'
Serious Organised Crime
The report's publication coincided with the first day of CyberUK, the annual security conference hosted by the NCSC.
In his keynote speech to the conference on Wednesday, Chancellor of the Duchy of Lancaster Pat McFadden called cyber attacks 'serious organised crime.'
He told business leaders and tech experts: 'The purpose is to damage and extort good businesses. It's the digital version of an old-fashioned shakedown. Either straight theft or a protection racket where your business will be safe as long as you pay the gangsters.
'And what we've seen over the past couple of weeks should serve as a wake-up call for everyone - for government and the public sector, for businesses and organisations up and down the country, as if we needed one, that cyber security is not a luxury - it's an absolute necessity.'
Chancellor of the Duchy of Lancaster Pat McFadden delivers a keynote speech to the CyberUK conference at the Central Convention Complex in Manchester, England, on May 7, 2025.
Ryan Jenkinson/PA Wire
His remarks come after major British retailers Marks & Spencer, the Co-op, and Harrods all experienced serious cyber incidents.
The minister announced that the government would be investing an extra £7 million in the Laboratory for AI Security Research, which was launched by the Labour administration in November and comprises experts from organisations including Oxford University, the Alan Turing Institute, and the Department for Science, Innovation and Technology (DSIT).
DSIT said a further £8 million will be given to Ukraine for its cyber defences and £1.1 million will go to the Moldovan government to 'protect the country's upcoming Parliamentary Election.'
China Becoming a 'Cyber Superpower'
In his speech, the minister specifically highlighted China as a key point of discussion, saying, 'we need to be clear-eyed about the challenge posed' by the nation.
He said: 'It is well on its way to becoming a cyber superpower. It has the sophistication. The scale. And the seriousness.
'It's one of the world leaders in AI; as the world's second largest economy, it's deeply embedded in global supply chains and markets.'
'Disengagement economically from China is not an option. Neither's naivety,' McFadden said.
'Our approach should be to engage constructively and consistently with China where it is in the UK's economic interests, but also to be clear that we will robustly defend our own cyberspace,' he said.
His comments come after British intelligence services have
The NCSC's annual review published in December
Last year, the then-Conservative government