
Snowflake Unveils Comprehensive Product Innovations To Empower Enterprises To Achieve Full Potential Through Data And AI
Snowflake Standard Warehouse - Generation 2 and Snowflake Adaptive Compute deliver faster analytics performance to accelerate customer insights, without driving up costs
Snowflake Intelligence allows business users to harness AI data agents to analyse, understand, and act on structured and unstructured data
Snowflake Cortex AISQL embeds generative AI directly into customers' queries, empowering teams to analyse all types of data and build flexible AI pipelines with familiar SQL syntax
With Cortex Knowledge Extensions, enterprises can enrich their AI apps and agents with real-time news and content from trusted third-party providers
Snowflake (NYSE: SNOW), the AI Data Cloud company, today announced several product innovations at its annual user conference, Snowflake Summit 2025, designed to revolutionise how enterprises manage, analyse, and activate their data in the AI era. These announcements span data engineering, compute performance, analytics, and agentic AI capabilities, all aimed at helping organisations break down data silos and bridge the gap between enterprise data and business action — without sacrificing control, simplicity, or governance.
'Today's announcements underscore the rapid pace of innovation at Snowflake in our drive to empower every enterprise to unlock its full potential through data and AI,' said Theo Hourmouzis, Senior Vice President, ANZ and ASEAN, Snowflake. 'Organisations across A/NZ are looking to take their AI projects to the next level – from testing, to production, to ultimately providing business value. Today's innovations are focused on providing them with the easiest, most connected, and most trusted data platform to do so.'
Snowflake Openflow Unlocks Full Data Interoperability, Accelerating Data Movement for AI Innovation
Snowflake unveiled Snowflake Openflow, a multi-modal data ingestion service that allows users to connect to virtually any data source and drive value from any data architecture. Now generally available on AWS, Openflow eliminates fragmented data stacks and manual labor by unifying various types of data and formats, enabling customers to rapidly deploy AI-powered innovations.
Snowflake Openflow embraces open standards, so organisations can bring data integrations into a single, unified platform without vendor lock-in and with full support for architecture interoperability. Powered by Apache NiFi™[1], an Apache Software Foundation project built to automate the flow of data between systems, Snowflake Openflow enables data engineers to build custom connectors in minutes and run them seamlessly on Snowflake's managed platform.
With Snowflake Openflow, users can harness their data across the entire end-to-end data lifecycle, while adapting to evolving data standards and business demands. Hundreds of ready-to-use connectors and processors simplify and rapidly accelerate data integration from a broad range of data sources including Box, Google Ads, Microsoft Dataverse, Microsoft SharePoint, Oracle, Proofpoint, ServiceNow, Workday, Zendesk, and more, to a wide array of destinations including cloud object stores and messaging platforms, not just Snowflake.
Snowflake Unveils Next Wave of Compute Innovations For Faster, More Efficient Warehouses and AI-Driven Data Governance
Snowflake announced the next evolution of compute innovations that deliver faster performance, enhanced usability, and stronger price-performance value — raising the bar for modern data infrastructure. This includes Standard Warehouse – Generation 2 (Gen2) (now generally available), an enhanced version of Snowflake's virtual Standard Warehouse with next-generation hardware and further improvements, delivering 2.1x[2] faster analytics performance than its predecessor and 1.9x faster performance than Managed Spark.
Snowflake also introduced Snowflake Adaptive Compute (now in private preview), a new compute service that lowers the burden of resource management by maximising efficiency through automatic resource sizing and sharing. Warehouses created using Adaptive Compute, known as Adaptive Warehouses, accelerate performance for users without driving up costs, ultimately redefining data management in the evolving AI landscape.
Snowflake Intelligence and Data Science Agent Deliver The Next Frontier of Data Agents for Enterprise AI and ML
Snowflake announced Snowflake Intelligence (public preview soon), which enables technical and non-technical users alike to ask natural language questions and instantly uncover actionable insights from both structured tables and unstructured documents. Snowflake Intelligence is powered by state-of-the-art large language models from Anthropic and OpenAI, running inside the secure Snowflake perimeter, with Cortex Agents (public preview) under the hood — all delivered through an intuitive, no-code interface that helps provide transparency and explainability.
Snowflake also unveiled Data Science Agent (private preview soon), an agentic companion that boosts data scientists' productivity by automating routine ML model development tasks. Data Science Agent uses Anthropic's Claude to break down problems associated with ML workflows into distinct steps, such as data analysis, data preparation, feature engineering, and training.
Today, more than 5,200[3] customers, including BlackRock, Luminate, and Penske Logistics, are using Snowflake Cortex AI to transform their businesses.
Snowflake Introduces Cortex AISQL and SnowConvert AI: Analytics Rebuilt for the AI Era
Snowflake announced major innovations that expand on Snowflake Cortex AI, Snowflake's suite of enterprise-grade AI capabilities, empowering global organisations to modernise their data analytics for today's AI landscape. This includes SnowConvert AI, an agentic automation solution that accelerates migrations from legacy platforms to Snowflake. With SnowConvert AI, data professionals can modernise their data infrastructure faster, more cost-effectively, and with less manual effort.
Once data lands in Snowflake, Cortex AISQL (now in public preview) brings generative AI directly into customers' query engines, enabling teams to extract insights across multi-modal data and build flexible AI pipelines using SQL — all while providing best-in-class performance and cost efficiency.
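The announcement does not include sample syntax, but the pattern it describes — generative AI invoked inline from a SQL query — can be sketched roughly as below using Snowflake's existing Cortex functions, SNOWFLAKE.CORTEX.SENTIMENT and SNOWFLAKE.CORTEX.COMPLETE, called through the Python connector. The support_tickets table, column names, connection details, and model choice are illustrative placeholders, and the newly announced AISQL functions may expose a different interface.

```python
# Rough sketch only: invoking Snowflake Cortex LLM functions inline from SQL.
# Assumes a hypothetical SUPPORT_TICKETS(ticket_id, body) table and an account
# with Cortex enabled; connection details and the model name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="***",            # placeholder
    warehouse="ANALYTICS_WH",  # placeholder
)

sql = """
SELECT
    ticket_id,
    SNOWFLAKE.CORTEX.SENTIMENT(body) AS sentiment_score,
    SNOWFLAKE.CORTEX.COMPLETE(
        'claude-3-5-sonnet',  -- must be a model enabled for the account
        'Summarise this support ticket in one sentence: ' || body
    ) AS summary
FROM support_tickets
LIMIT 10;
"""

try:
    for ticket_id, sentiment_score, summary in conn.cursor().execute(sql):
        print(ticket_id, sentiment_score, summary)
finally:
    conn.close()
```

The point of the pattern is that the model call sits in the projection like any other SQL function, so the governance, warehouses, and access controls that apply to the rest of the query also apply to the AI step.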
Snowflake Marketplace Adds Agentic Products and AI-Ready Data from Leading News, Research, and Market Data Providers
Snowflake announced new agentic products on Snowflake Marketplace that accelerate agentic AI adoption across the enterprise. This includes Cortex Knowledge Extensions (generally available soon) on Snowflake Marketplace, which enables enterprises to enrich their AI apps and agents with proprietary unstructured data from third-party providers — all while allowing providers to protect their intellectual property and ensure proper attribution. Users can tap into a selection of business articles and content from The Associated Press, helping them further enhance the usefulness of results in their AI systems.
In addition, Snowflake unveiled sharing of Semantic Models (now in private preview), which allows users to easily integrate AI-ready structured data within their Snowflake Cortex AI apps and agents — whether from internal teams or third-party providers like CARTO, CB Insights, Cotality™ powered by Bobsled, Deutsche Börse, IPinfo, and truestar.
Learn More:
Check out all the innovations and announcements coming out of Snowflake Summit 2025 on Snowflake's Newsroom.
Stay on top of the latest news and announcements from Snowflake on LinkedIn and X, and follow along at #SnowflakeSummit.
About Snowflake
Snowflake is the platform for the AI era, making it easy for enterprises to innovate faster and get more value from data. More than 11,000 companies around the globe, including hundreds of the world's largest, use Snowflake's AI Data Cloud to build, use, and share data, apps and AI. With Snowflake, data and AI are transformative for everyone. Learn more at snowflake.com (NYSE: SNOW).
Related Articles


Techday NZ | 3 days ago
Devart launches dbForge 2025.1 with AI assistant & new UI
Devart has unveiled dbForge 2025.1, presenting new features including an AI assistant, updated user interface, and expanded connectivity options across its database development tool suite.

The new dbForge 2025.1 update introduces the dbForge AI Assistant, offering support for database professionals working with SQL Server, MySQL, Oracle, and PostgreSQL. With this release, Devart aims to address the evolving needs of those developing, managing, and optimising databases.

dbForge AI Assistant is described as an AI-powered co-pilot designed to enhance SQL development. Its features include context-aware query generation, natural language conversion to SQL, query optimisation, and advanced troubleshooting for a broad spectrum of database engines. The new functionality seeks to allow users, regardless of their expertise, to interact with databases via natural language while maintaining precise output for complex queries.

Oleksii Honcharov, Head of Engineering at Devart, commented on the introduction of the AI Assistant and the wider scope of the update: "dbForge 2025.1 isn't just another version, it's a grand step forward for our entire product line. We believe technology should work for you, and with our AI-powered assistant, working with data is a breeze. This means fewer bottlenecks, high-quality answers, and smarter decisions across the board."

The AI Assistant is available across the dbForge product family, providing users with several functionalities aimed at accelerating and simplifying SQL development. Amongst its capabilities, the assistant can:

- Generate context-aware SQL queries by analysing database metadata, without accessing actual data.
- Convert plain language commands directly into syntactically correct SQL queries, making database access more inclusive for non-experts or those seeking code-free interaction.
- Optimise existing SQL code with performance suggestions designed for quicker and more efficient queries.
- Offer explanations that deconstruct complex SQL statements into more comprehensible terms.
- Troubleshoot problematic SQL queries, diagnosing issues and suggesting or implementing fixes.
- Conduct error analysis with recommendations that account for the unique requirements and dialects of SQL Server, MySQL, MariaDB, Oracle, and PostgreSQL.
- Support an interactive chat interface that provides conversational assistance for SQL, general database topics, and dbForge products, removing reliance on separate documentation.

The AI Assistant is fully integrated into all editions of dbForge products, including the free Express versions. New users receive a free 14-day trial period for the AI functions.

UI and UX upgrades

Enhancements to user interface and experience form another key aspect of dbForge 2025.1. The update features a redesigned interface that aims to be cleaner and more intuitive, improving navigation, panel management, and the discoverability of tools. These adjustments arise in response to feedback from users, with the goal of streamlining workflow and reducing potential friction within the application environment.

Expanded analysis and compatibility

Further improvements have been made to static code analysis capabilities in dbForge Studio for SQL Server and dbForge SQL Complete. The built-in T-SQL Analyzer receives an expanded set of rules, offering more customisation and deeper insight during code reviews. Aligning with evolving cloud integration requirements, dbForge 2025.1 introduces support for dedicated and serverless SQL pools within Azure Synapse Analytics. This enhancement is intended to improve data integration and management workflows for teams who rely on Microsoft's analytics service.

Additionally, Devart has updated its support for PostgreSQL tools, which are now compatible with the recently released PostgreSQL 18. This ensures that users are able to leverage new database features as soon as they become available.

Product availability

dbForge 2025.1 is available for download or update across all core product lines, including dbForge Edge, the Studio series for SQL Server, MySQL, Oracle, and PostgreSQL, as well as dbForge SQL Complete and SQL Tools. Existing users can check for updates within their installed product, while new users may download and trial the full suite of dbForge tools.


Techday NZ | 4 days ago
Ventia adopts AI platform to speed up major infrastructure bids
Ventia has implemented an AI-powered platform, developed in partnership with DXC Technology, to streamline its bid writing process for major infrastructure contracts across Australia and New Zealand.

Automation of bid writing

The new platform, known as Tendia, automates the search, collation, and drafting of early-stage bid content – a process previously measured in days, now reduced to minutes. The solution is designed to help Ventia's teams prepare responses more quickly and accurately for complex, high-value tenders within its extensive operations.

Tendia was developed by DXC's data, AI, and cloud experts, utilising Amazon Web Services (AWS) technologies including Amazon Bedrock and Kendra. The system is trained on Ventia's historical submissions to ensure that its outputs are relevant and accurate for current tender requirements. More than 10,000 AWS-certified professionals at DXC have contributed technical and security support to ensure the solution is viable at scale and enterprise-ready. The implementation of Tendia is seen as a practical demonstration of generative AI's expanding role beyond pilot projects, addressing complicated, document-heavy business processes at enterprise scale.

Operational benefits for Ventia

Ventia, one of the largest infrastructure service providers in the region, previously faced significant time and resource challenges preparing major tenders. The company has access to a workforce of more than 35,000 people operating across over 400 sites. Addressing these pressures was a primary driver for developing an AI-powered solution that could assist its teams in focusing on higher-value work within the bidding process.

"Working with DXC, we've been able to improve the speed and quality of our bid development process. Tendia enables our teams to focus on higher-value work, deliver more accurate proposals, and respond faster to complex, multi-million-dollar tenders. This project marks the first phase of Ventia's broader AI adoption strategy to improve how we support clients and deliver services across the business."

Ventia's General Manager for Strategy, Digital & Corporate Affairs, Em Hogan, pointed to these advantages, noting that the initiative is part of a wider programme to extend AI adoption across the organisation and its services.

Technical background and partnership

DXC's data, AI, and cloud teams worked closely with Ventia throughout the project, integrating AWS services such as Amazon Fargate, Kendra, and Cognito to deliver the Tendia solution. These components enable rapid, context-aware content generation and secure access for teams across different business units and geographies.

"This collaboration shows how AI can support business-critical operations – within the public sector," said Seelan Nayagam, President, Asia Pacific, Middle East & Africa, DXC Technology. "We have drawn on our global scale and cross-industry AI experience to help Ventia turn an initial concept into an enterprise-ready solution. With over 10,000 AWS skilled resources and more than 15,000 experts trained through DXC's AI Academy and AI-Xcelerate programs, we're delighted to be supporting Ventia as it extends AI applications across more parts of its business," said Nayagam.

DXC emphasised that its partnership with Ventia demonstrates how technology and global expertise can be applied to overcome barriers to generative AI use within critical business functions. The company's Consulting & Engineering Services team has a remit to operate and optimise mission-critical systems, including the co-creation and delivery of solutions based on automation and AI technologies.

Productivity and security considerations

The deployment of Tendia comes against a backdrop of growing demand for efficiency and accuracy in high-stakes processes such as infrastructure tenders. By automating the early stages of bid development, Ventia expects its staff to be able to dedicate more time to the strategic aspects of crafting proposals tailored to client needs and sector requirements. Tendia's support for compliance and data security is grounded in DXC's scale and AWS certifications, providing additional assurance for both technology stakeholders and business users.

Both organisations have indicated that the platform's introduction represents only the initial stage in broader AI integration efforts across Ventia's operations, with further developments and expansions expected in the future.
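The article describes Tendia only at a high level, but the retrieve-then-draft pattern it outlines (an index of historical submissions queried with Amazon Kendra, drafting with a model hosted on Amazon Bedrock) can be sketched roughly as follows. The index ID, model ID, region, and prompt are illustrative placeholders; this is a generic sketch of the described architecture, not DXC's or Ventia's actual implementation.

```python
# Illustrative sketch of the retrieve-then-draft pattern described above:
# pull relevant passages from a Kendra index of past submissions, then ask a
# Bedrock-hosted model to draft a first-pass response grounded in them.
# Index ID, model ID, and region are placeholders, not Tendia's real values.
import boto3

REGION = "ap-southeast-2"  # placeholder region
kendra = boto3.client("kendra", region_name=REGION)
bedrock = boto3.client("bedrock-runtime", region_name=REGION)

question = "Describe our safety management approach for rail maintenance contracts."

# 1. Retrieve the most relevant passages from the index of historical bids.
retrieved = kendra.retrieve(
    IndexId="REPLACE-WITH-KENDRA-INDEX-ID",  # placeholder
    QueryText=question,
    PageSize=5,
)
context = "\n\n".join(item["Content"] for item in retrieved["ResultItems"])

# 2. Draft a grounded first-pass answer with a Bedrock model via the Converse API.
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[{
        "role": "user",
        "content": [{
            "text": (
                "Using only the context below, draft a tender response section.\n\n"
                f"Context:\n{context}\n\nQuestion: {question}"
            )
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```

A human bid writer would still review and rework the draft; the time saving the article describes comes from automating the search and collation step, not from removing people from the process.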


Techday NZ | 6 days ago
When trusted tools go rogue: The return of the 'Confused Deputy Problem'
A decades-old cybersecurity vulnerability is staging a dangerous comeback, and this time it involves modern tools and has far-reaching consequences. Known as the 'Confused Deputy Problem', this flaw sees trusted software - such as administrative tools, privileged scripts, or even AI agents - being manipulated to misuse their powers on behalf of less-privileged applications or users. And in today's rapidly evolving threat landscape, the consequences are more severe than ever.

From compiler quirk to enterprise crisis

The confused deputy problem isn't new. First described by computer scientist Norm Hardy in 1988, it referred to a case where a compiler (legitimately empowered to write to billing files) was tricked by less-privileged applications into overwriting those sensitive files. The applications themselves didn't have the necessary access, but the compiler acted on their behalf, unwittingly executing their intent.

Fast forward to today, and this fundamental breakdown of privilege separation is now playing out in some of the most advanced enterprise systems, including those that rely on artificial intelligence, automation, and cloud-native infrastructure. In most modern enterprises, trusted systems or processes - like automation scripts, CI/CD pipelines, and privileged service accounts - are the deputies. These programs are entrusted with elevated access because they serve as conduits to essential business functions. However, if they lack mechanisms to evaluate the context of commands and honour least privilege when performing functions, they can be exploited just as easily as Hardy's compiler.

The problem becomes even more alarming when applied to agentic AI: tools that act independently to complete tasks using delegated authority. If these AI agents are manipulated into making requests or executing operations they weren't intended to, they become confused deputies on a much larger scale.

Real-world risks

The confused deputy issue surfaces in multiple ways across enterprise IT today. These include:

- Sudo misuse: Scripts with superuser privileges can be hijacked by untrusted inputs, elevating user privilege without directly attacking the OS.
- CI/CD exploits: Shared service accounts in development pipelines can be coerced into leaking secrets or deploying malicious artifacts, especially in the absence of role isolation and context validation.
- Cloud token abuse: In AWS or Azure environments, services can inadvertently use their assumed roles to fulfil malicious requests initiated by compromised peers, turning secure microservices into agents of privilege escalation.

Why the problem persists

Despite increasing awareness and tooling, the confused deputy problem persists largely because enterprises have not fully embraced the principle of least privilege. That is, systems, applications, and users continue to have more access than they need. What's more, the explosion of machine identities, such as automated services, scripts, bots, and now AI agents, has made it far harder to track privilege boundaries. Machines now communicate with other machines more frequently than humans do, and without adequate oversight, these interactions become fertile ground for exploitation.

Reimagining Privileged Access Management

To confront this resurgent threat, businesses must rethink their approach to Privileged Access Management (PAM). It's no longer enough to store secrets or manage user credentials. Modern PAM must be dynamic, context-aware, and tightly integrated into every aspect of the IT ecosystem. Key strategies to consider include:

- Command validation and filtering: Systems should whitelist commands, sanitise inputs, and block privilege escalation via indirect parameters (a minimal illustration appears at the end of this article).
- Context-aware decisions: Access should be evaluated based on behavioural context and not just identity. Why is a session being initiated? What other systems has the user accessed? What's the broader pattern?
- Segregation of duties: Different roles and accounts should be used for automation, deployment, and debugging. A single account with broad entitlements poses a massive risk if compromised.
- Real-time monitoring and forensics: PAM solutions must include session recording, keystroke logging, and audit trails to detect both deliberate abuse and accidental misuse.

AI's double-edged sword

Agentic AI represents both the future and the frontier of the confused deputy problem. These systems are capable of incredible operational gains, but their autonomous nature makes them ripe for exploitation. A prompt, parameter, or request that seems benign on the surface can trigger actions that cause significant harm or data leakage, especially if the agent can't distinguish between valid commands and malicious manipulation. This isn't just a technical flaw but a governance challenge. Enterprises must ensure that, as they embrace AI and automation, they do so with controls that prioritise intent verification, privilege minimisation, and oversight.

A strategic imperative

The confused deputy problem is no longer a relic of early computing. It's a central challenge for modern digital security. As organisations deploy more intelligent and powerful tools, they must recognise that privilege without perspective is an attack vector in its own right. To prevent trusted systems from becoming dangerous liabilities, enterprises need to enforce least privilege not just as a policy, but as a design principle embedded in every layer of infrastructure, automation, and AI deployment.
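To make the 'command validation and filtering' recommendation concrete, here is a minimal, hypothetical sketch of a privileged deputy that allowlists commands and screens arguments before running anything on a caller's behalf. The allowlist, checks, and audit line are illustrative only; production PAM tooling layers on far more (context evaluation, session recording, secrets management).

```python
# Minimal, illustrative sketch of a "deputy" that validates a request before
# acting with its elevated privileges. The allowlist and checks are examples
# only; real PAM controls are far more extensive.
import shlex
import subprocess

# Hypothetical allowlist: command -> flags the deputy will accept for it.
ALLOWED_COMMANDS = {
    "systemctl": set(),               # no flags; positional args only, e.g. 'status nginx'
    "journalctl": {"-u", "--since"},  # read-only log access for a named unit
}

SHELL_METACHARACTERS = set(";|&$`><")


def run_on_behalf_of(caller: str, request: str) -> str:
    """Run an allowlisted command for `caller`, refusing anything else."""
    tokens = shlex.split(request)
    if not tokens:
        raise PermissionError(f"{caller}: empty request")

    command, args = tokens[0], tokens[1:]

    # 1. The command itself must be on the allowlist.
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"{caller}: command '{command}' is not permitted")

    # 2. Reject anything that could smuggle extra commands past the deputy.
    if SHELL_METACHARACTERS & set(request):
        raise PermissionError(f"{caller}: request contains shell metacharacters")

    # 3. Any flag must be explicitly permitted for this command.
    for arg in args:
        if arg.startswith("-") and arg.split("=", 1)[0] not in ALLOWED_COMMANDS[command]:
            raise PermissionError(f"{caller}: flag '{arg}' is not permitted")

    # Never hand the request to a shell; run the tokens directly and audit the call.
    print(f"AUDIT: {caller} -> {tokens}")
    return subprocess.run(tokens, capture_output=True, text=True, check=True).stdout


# Example: the deputy lends its privileges for a recognised, read-only request...
print(run_on_behalf_of("ci-pipeline", "systemctl status nginx"))
# ...but refuses a request that tries to ride along on those privileges.
# run_on_behalf_of("ci-pipeline", "systemctl status nginx; rm -rf /")  # PermissionError
```

In Hardy's terms, the deputy refuses to lend its authority to any request it cannot fully account for, which is the same intent-verification discipline the article argues should extend to AI agents.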