
Microsoft SharePoint zero-day flaw prompts urgent global response
The flaw, catalogued as CVE-2025-53770, was revealed last week after several cyber security organisations, including Microsoft and Google's Threat Intelligence Group, published emergency advisories.
Microsoft has clarified that the vulnerability affects only on-premises versions of SharePoint. SharePoint Online, the cloud-based variant included in Microsoft 365, is not impacted by this zero-day flaw.
The urgency of the threat became clear after Eye Security researchers published findings highlighting "active, large-scale exploitation" of the flaw, which they linked to a set of vulnerabilities dubbed "ToolShell." Attackers who successfully exploit CVE-2025-53770 can access sensitive MachineKey configuration details on vulnerable servers, including the validationKey and decryptionKey. These critical parameters can then be used to craft specially designed requests that enable unauthenticated remote code execution, effectively giving attackers full control over the targeted servers.
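To see why leaked MachineKey material is so dangerous, it helps to recall how ASP.NET trusts inbound state: payloads such as ViewState are accepted only because they carry a MAC computed with the secret validationKey. The sketch below is illustrative only, not the actual exploit — it uses a made-up key and plain HMAC-SHA256 rather than SharePoint's real serialisation pipeline — but it demonstrates the core problem: anyone holding the key can sign arbitrary data that the server will treat as genuine.

```python
import hmac, hashlib, base64

# Hypothetical leaked key, standing in for a stolen ASP.NET validationKey.
leaked_validation_key = bytes.fromhex("deadbeef" * 8)

def sign_payload(payload: bytes, key: bytes) -> str:
    """Append a keyed MAC, analogous to ASP.NET's ViewState integrity check."""
    mac = hmac.new(key, payload, hashlib.sha256).digest()
    return base64.b64encode(payload + mac).decode()

def server_accepts(blob: str, key: bytes) -> bool:
    """Server-side check: recompute the MAC and compare in constant time."""
    raw = base64.b64decode(blob)
    payload, mac = raw[:-32], raw[-32:]
    return hmac.compare_digest(mac, hmac.new(key, payload, hashlib.sha256).digest())

# With the stolen key, an attacker-forged payload passes the integrity check.
forged = sign_payload(b"attacker-controlled serialized object", leaked_validation_key)
print(server_accepts(forged, leaked_validation_key))  # → True
```

Because the MAC validates, the server goes on to deserialise the attacker's payload, which is the step that yields remote code execution in the real attack — and why Microsoft's guidance includes rotating machine keys, not just patching.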
Late-breaking fixes for SharePoint Server 2019 and SharePoint Subscription Edition have been made available, with a patch for SharePoint Server 2016 expected to follow. Organisations are being urged to conduct incident response investigations, apply available patches, and closely review Microsoft's temporary mitigation instructions to limit exposure.
In recent reports, the scope and impact of the exploit have become clearer. More than 100 servers across at least 60 global organisations, including critical infrastructure such as the US National Nuclear Security Administration, have reportedly been breached via the vulnerability. Cyber security analysts have attributed the campaign to Chinese state-linked groups, among them Linen Typhoon, Violet Typhoon, and Storm-2603. These groups are said to have used stolen credentials to establish persistent access, potentially enabling ongoing espionage even after patches are applied.
According to Charles Carmakal, CTO of Mandiant Consulting at Google Cloud, attackers are using the vulnerability to install webshells - malicious scripts that provide ongoing unauthorised access - and to exfiltrate cryptographic secrets from compromised servers. This presents a substantial risk to organisations, as it allows persistent, unauthenticated access by malicious actors.
"If your organisation has on-premises Microsoft SharePoint exposed to the internet, you have an immediate action to take," Carmakal said.
He stressed that mitigation steps must be implemented without delay and that patches should be applied as they become available. "This isn't an 'apply the patch and you're done' situation. Organisations need to assume compromise, investigate for any evidence of prior intrusion, and take appropriate remediation actions."
Satnam Narang, Senior Staff Research Engineer at Tenable, warned of the widespread consequences, stating: "The active exploitation of the SharePoint zero-day vulnerability over the weekend will have far-reaching consequences for those organisations that were affected. Attackers were able to exploit the flaw to steal MachineKey configuration details, which could be used to gain unauthenticated remote code execution."
Narang added that early signs of compromise could include the presence of a file named spinstall0.aspx, although it might carry a different extension in some cases.
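A first-pass sweep for that indicator can be scripted. The sketch below is a hypothetical example, not official detection guidance: the search root and the assumption that the dropped file keeps the base name "spinstall0" (with any extension, per Narang's caveat) are illustrative choices, and real hunts should also cover web logs and other paths named in vendor advisories.

```python
import os

# Assumed indicator: a file whose base name is "spinstall0", any extension.
SUSPECT_BASENAME = "spinstall0"
# Assumed search root for a default SharePoint install; adjust to your layout.
SEARCH_ROOTS = [r"C:\Program Files\Common Files\microsoft shared\Web Server Extensions"]

def find_indicators(roots):
    """Walk each root and collect files matching the base name, ignoring extension."""
    hits = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[0].lower() == SUSPECT_BASENAME:
                    hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    for path in find_indicators(SEARCH_ROOTS):
        print("possible indicator of compromise:", path)
```

A hit should be treated as a trigger for full incident response rather than simple file deletion, in line with Carmakal's warning that organisations must assume compromise.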
Bob Huber, Chief Security Officer and President of Public Sector at Tenable, commented: "The recent breach of multiple governments' systems […] is yet another urgent reminder of the stakes we're facing. This isn't just about a single flaw, but how sophisticated actors exploit these openings for long-term gain."
Huber noted that because Microsoft's identity stack is so deeply embedded in government and corporate environments, a breach in SharePoint can create "a massive single point of failure." He argued for a more proactive, preventative approach to cyber security, emphasising the need for exposure management platforms that provide unified oversight across complex infrastructures.
For now, the coordinated response by vendors, security firms, and government agencies continues, as organisations track for signs of compromise and await further guidance on long-term remediation. The incident serves as a stark reminder of the intricate cyber threats faced by modern institutions, and the pressing need for rigorous, ongoing defence strategies against ever-evolving adversaries.