
Dell unveils new AI server range & managed services with NVIDIA
The company unveiled advancements to the Dell AI Factory with NVIDIA, offering updated infrastructure, full-stack enterprise AI solutions, and managed services for organisations seeking to scale their AI operations.
Among the newly announced offerings are the air-cooled Dell PowerEdge XE9780 and XE9785 servers, which simplify integration into existing enterprise data centres. Complementing these are the liquid-cooled Dell PowerEdge XE9780L and XE9785L models, designed specifically to accelerate rack-scale deployment. The new server range supports up to 192 NVIDIA Blackwell Ultra GPUs with direct-to-chip liquid cooling and can be customised with up to 256 NVIDIA Blackwell Ultra GPUs per Dell IR7000 rack.
These servers, the next generation after Dell's PowerEdge XE9680, can deliver up to four times faster large language model training with the 8-way NVIDIA HGX B300. The Dell PowerEdge XE9712, featuring the NVIDIA GB300 NVL72, is highlighted for its rack-scale efficiency in training, up to 50 times more AI reasoning inference output, and a five-fold improvement in throughput. Dell also introduced PowerCool technology to improve power efficiency within these platforms.
The expansion of the server portfolio includes the Dell PowerEdge XE7745, which will be available with NVIDIA RTX Pro 6000 Blackwell Server Edition GPUs in July 2025. This platform, supported within the NVIDIA Enterprise AI Factory validated design, supports up to eight GPUs in a 4U chassis and targets physical and agentic AI use cases such as robotics, digital twins, and multi-modal AI applications.
Dell has also announced plans to support the NVIDIA Vera CPU and the NVIDIA Vera Rubin platform, with a new PowerEdge XE server planned for use within Dell Integrated Rack Scalable Systems.
To address connectivity and networking demands, Dell expanded its offering with the PowerSwitch SN5600 and SN2201 Ethernet switches, both part of the NVIDIA Spectrum-X Ethernet networking platform, as well as the NVIDIA Quantum-X800 InfiniBand switches. These switches deliver up to 800 gigabits per second of throughput and are supported by Dell's ProSupport and Deployment Services.
The Dell AI Factory with NVIDIA solutions are built to support the NVIDIA Enterprise AI Factory validated design, combining Dell and NVIDIA compute, networking, and storage with NVIDIA AI Enterprise software to deliver a fully integrated AI solution for enterprises.
Another focus is on enhancing data management for AI applications. With improvements to the Dell AI Data Platform, the company states that applications can benefit from always-on access to high-quality data. Dell ObjectScale now supports large-scale AI deployments, aiming to reduce cost and data centre footprint with a denser, software-defined system. Integrations with NVIDIA BlueField-3 and Spectrum-4 networking components aim to boost performance and scalability.
A new high-performance solution leveraging Dell PowerScale, Dell Project Lightning, and PowerEdge XE servers has been introduced, utilising KV cache and NVIDIA's NIXL libraries to support large-scale distributed inference workloads. Additionally, Dell ObjectScale will support S3 over RDMA, which the company claims can deliver up to 230% higher throughput, up to 80% lower latency, and a 98% reduction in CPU load compared with traditional S3, enabling improved GPU utilisation.
Dell has announced an integrated offering that incorporates the NVIDIA AI Data Platform, targeted at accelerating curated insights from data and agentic AI applications and tools.
In terms of software, the NVIDIA AI Enterprise platform is available directly from Dell and includes NVIDIA NIM, NVIDIA NeMo microservices, NVIDIA Blueprints, NVIDIA NeMo Retriever for RAG, and NVIDIA Llama Nemotron reasoning models. Dell said this enables the development of agentic workflows and helps organisations shorten the time to achieve AI outcomes.
To simplify deployment and management, Dell will offer Red Hat OpenShift support on the Dell AI Factory with NVIDIA. The company has also launched Dell Managed Services for the AI Factory, providing management across the entire NVIDIA AI solutions stack, including ongoing monitoring, reporting, version upgrades, and patching.
Michael Dell, Chairman and Chief Executive Officer at Dell Technologies, said, "We're on a mission to bring AI to millions of customers around the world. Our job is to make AI more accessible. With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale."
Jensen Huang, Founder and Chief Executive Officer at NVIDIA, said, "AI factories are the infrastructure of modern industry, generating intelligence to power work across healthcare, finance and manufacturing. With Dell Technologies, we're offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge."
The new solutions and managed services will become available across 2025 in line with server platform rollouts and future NVIDIA integration support.
Related Articles


Techday NZ, 3 days ago
Linux Foundation adopts AGNTCY to standardise agentic AI
The Linux Foundation has announced that it is welcoming the AGNTCY project, an open source initiative aimed at standardising foundational infrastructure for open multi-agent artificial intelligence (AI) systems. AGNTCY delivers core components required for discovery, secure messaging, and cross-platform collaboration among AI agents that originate from different companies and frameworks. The project has the backing of industry players including Cisco, Dell Technologies, Google Cloud, Oracle, and Red Hat, all of whom have joined as formative members under the Linux Foundation's open governance.

Originally released as open source by Cisco in March 2025 with collaboration from LangChain and Galileo, AGNTCY now includes support from over 75 companies. Its infrastructure forms the basis for the so-called 'Internet of Agents': an environment where AI agents from diverse origins are able to communicate, collaborate, and be discovered regardless of vendor or execution environment.

The increasing adoption of AI agents across industries has led to concerns about fragmentation and the formation of closed silos, constraining agents' ability to communicate across platforms securely and efficiently. AGNTCY's infrastructure aims to address these issues by standardising secure identity, robust messaging, and comprehensive observability, allowing organisations and developers to manage AI agents with improved transparency, performance, and trust.

Compatibility is a focus for AGNTCY, which is interoperable with the Agent2Agent (A2A) project, also part of the Linux Foundation, as well as Anthropic's Model Context Protocol (MCP). The project supports agent discovery through AGNTCY directories, enables observable environments using AGNTCY's software development kits (SDKs), and utilises the Secure Low Latency Interactive Messaging (SLIM) protocol for secure message transport.
"The AGNTCY project lays groundwork for secure, interoperable collaboration among autonomous agents," said Jim Zemlin, executive director of the Linux Foundation. "We are pleased to welcome the AGNTCY project to the Linux Foundation to ensure its infrastructure remains open, neutral, and community-driven."

The AGNTCY project's infrastructure offers several key functions for multi-agent environments. Agent discovery is facilitated using the Open Agent Schema Framework (OASF), allowing agents to identify and understand each other's capabilities. Agent identity is supported via cryptographically verifiable processes to ensure secure activity across organisational boundaries. The agent messaging component supports various communication modes, including human-in-the-loop and quantum-safe options via the SLIM protocol. Observability functionalities provide evaluation and debugging across complex, multi-vendor workflows.

"Building the foundational infrastructure for the Internet of Agents requires community ownership, not vendor control," said Vijoy Pandey, general manager and senior vice president of Outshift by Cisco. "The Linux Foundation ensures this critical infrastructure remains neutral and accessible to everyone building multi-agent systems."

The project is underpinned by real-world applications, including AI-driven continuous integration and deployment pipelines, multi-agent IT operations, and the automation of telecom networks. This underlines the diversity of use cases benefitting from AGNTCY's open source approach.

Various leaders and members have shared their perspectives on the announcement:

"Interoperability is central to Dell's agentic AI vision. The ability of agents to work together empowers enterprises to reap the full value of AI. Additionally, interworking technologies must accommodate agents wherever they are deployed, whether in public clouds, private data centres, the edge or on devices. Dell is working hand-in-hand with industry leaders to establish open standards for agentic interoperability. Being a formative member of the Linux Foundation's AGNTCY project is one such step towards fulfilling the promise of agentic AI." – John Roese, global CTO and chief AI officer, Dell Technologies.

"We've been building AGNTCY's evaluation and observability components from day one because reliable agents cannot scale without purpose-built monitoring. Moving all components of AGNTCY to the Linux Foundation ensures these tools serve the entire ecosystem, not just our customers. As a founding member of AGNTCY, we're eager to see neutral governance accelerate adoption of standards we know enterprises need for production agent deployments." – Yash Sheth, co-founder, Galileo.

"Open, community-driven standards are essential for creating a diverse, interoperable agentic AI ecosystem. We're pleased that Cisco is moving AGNTCY to the Linux Foundation, where it will be neutrally governed alongside the Agent2Agent protocol to advance powerful, collaborative agent systems for the industry." – Rao Surapaneni, vice president, business applications platform, Google Cloud.

"Enterprise customers need agent infrastructure they can trust for mission-critical workloads. We welcome AGNTCY's move to the Linux Foundation and are proud to be a formative member of this project. A tight control over data security and governance helps discovery, identity, and observability components work reliably across the entire enterprise technology stack, not just specific vendor ecosystems." – Roger Barga, senior vice president, AI & ML, Oracle Cloud Infrastructure.

"Our customers and partners, as well as the open source communities we work with, are actively exploring agentic capabilities to bring the inferencing benefits of vLLM and llm-d to their applications. Red Hat welcomes AGNTCY's move to the Linux Foundation and we look forward to working with the community to help bring open, agnostic governance to the agentic AI ecosystem." – Steve Watt, vice president and distinguished engineer, Office of the CTO, Red Hat.


Techday NZ, 3 days ago
In the 'Golden Age' of tech stocks, how do we use tech itself to assess risk and evaluate the markets?
The tech-heavy Nasdaq told the story in 2024, with Palantir up 340.5%, Nvidia up 171.2% and Broadcom up 107.7%. The latter two are among the so-called BATMMAAN stocks whose success has led some worried commentators to point out the concentration risk now present in US share markets: according to Reuters, the top 10 stocks in the S&P 500 hit 37.3% of the value of the entire index in mid-July, just shy of the record 38% set in January this year.

In the current Golden Age of the tech sector, the emergent AI and analytics tools created by top-performing companies are proving to be some of the best virtual assistants for evaluating stocks in the tumultuous 2020s, generally when used in combination with traditional analytical techniques. Here's how:

1. Going beyond the basics. We have access to enormous amounts of research and data about every tradable stock, but traditional statistics, like revenue or P/E ratio, don't always tell the whole story with tech companies, especially those rapidly reinvesting for growth. Instead, one can look at:
- Price-to-sales (P/S) ratio: especially useful for high-growth, pre-profit tech firms.
- Free cash flow (FCF) growth: indicates whether a company is capable of self-funding continued innovation.
- R&D expense growth: is the business consistently investing in future products and features?
- Scale and market cap: is the company large enough to weather market challenges?
- SG&A (selling, general, and administrative expenses) to revenue: offers clues about efficiency in scaling operations.

2. Using technical analysis for trends. The rise of quantitative trading and algorithmic strategies means technical analysis is an important supplementary lens for active traders, though not a substitute for deep research. Traders look at markers such as:
- Volatility metrics: identifying periods where momentum or reversals are likely.
- Advanced charting: using visual tools to spot levels of investor support or resistance.
- Options signals: changes in implied volatility and put-to-call ratios, for both the indexes and individual stocks.

3. Leveraging quantitative and AI tools. The next generation of evaluation involves AI and big data. These tools filter vast amounts of information from financial reports, market sentiment, news, and web analytics. Some of my preferred research platforms and AI-driven tools include:
- Tiger Trade app's AI-powered chatbot TigerAI: its features allow investors to research stocks, summarise key insights from earnings calls and releases, and extract pertinent company news and sentiment analysis based on the questions asked, all within seconds. TigerAI is accessed through the Tiger Trade app, so everything is in one place.
- Perplexity: an AI-powered research co-pilot that synthesises web results and provides live monitoring, trend analysis, and Q&A.
- ChatGPT: the biggest name brand in LLMs to date, a conversational AI for brainstorming and quick synthesis, and a good tool for testing investment ideas and pulling data summaries.
- AlphaSense: offers AI search across business and financial filings and news; users can deep-dive for company and sector insights.
- Google Gemini: multimodal AI (text and images) for competitive research; users can scan public information fast.

4. Developing valuation frameworks. Valuing tech stocks is both an art and a science, and getting it right or wrong can make a big difference in ROI terms for traders and clients. Key techniques include:
- Discounted cash flow (DCF): projects future value but is highly sensitive to assumptions.
- Relative valuation: compares companies' multiples within the sector.
- Premium for growth: sometimes justified if a company is truly dominant or highly innovative.

5. Making qualitative assessments. Without context, numbers can be misleading – and in an age of massive data volume, investors need to figure out which context is actually relevant.
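The screening ratios in point 1 and the DCF technique in point 4 reduce to a few lines of arithmetic. A minimal Python sketch, using invented figures for a hypothetical company; the cash-flow projections, growth rate, and discount rates below are illustrative assumptions only, and small changes to them move the DCF answer substantially, which is exactly the sensitivity noted above:

```python
def price_to_sales(market_cap: float, revenue: float) -> float:
    """P/S ratio: market capitalisation divided by trailing revenue."""
    return market_cap / revenue

def fcf_growth(fcf_now: float, fcf_prior: float) -> float:
    """Year-over-year free cash flow growth, as a fraction."""
    return (fcf_now - fcf_prior) / fcf_prior

def sga_to_revenue(sga: float, revenue: float) -> float:
    """SG&A as a share of revenue; lower generally means leaner scaling."""
    return sga / revenue

def discounted_cash_flow(cash_flows, rate, terminal_growth=0.0):
    """Present value of projected cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    # Terminal value: final cash flow grown one year, capitalised, discounted back
    terminal = cash_flows[-1] * (1 + terminal_growth) / (rate - terminal_growth)
    return pv + terminal / (1 + rate) ** len(cash_flows)

# Hypothetical high-growth tech firm (all figures in USD millions)
print(f"P/S ratio:    {price_to_sales(120_000, 4_000):.1f}")  # 30.0
print(f"FCF growth:   {fcf_growth(900, 600):.0%}")            # 50%
print(f"SG&A/revenue: {sga_to_revenue(1_400, 4_000):.0%}")    # 35%

# DCF sensitivity: five years of projected FCF, then steady 2% growth forever
flows = [100, 120, 140, 160, 180]
print(f"Value at 10% discount: {discounted_cash_flow(flows, 0.10, 0.02):,.0f}")
print(f"Value at 12% discount: {discounted_cash_flow(flows, 0.12, 0.02):,.0f}")
```

In this toy example, a two-point move in the discount rate cuts the DCF value by roughly a fifth, illustrating why the framework above flags DCF as highly sensitive to assumptions.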
One can evaluate:
- Leadership quality: track record, vision, and ability to execute.
- Innovation pipeline: new products and services, and IP protection.
- Industry ecosystem position: is the business a vital cog in a rising sector like AI, cloud computing, or cybersecurity?
- ESG practices: environmental, social, and governance disclosures, especially around climate responsibility, are highly relevant. For companies involved in AI, the conversation around the vast energy consumption of data centres is becoming increasingly heated.

6. Finding practical uses for AI in research. AI can change the scope of intense periods such as earnings season. It can be used in reporting and analysis in a few ways:
- Hourly news alerts: using Perplexity or AlphaSense for customisable updates on specific tech companies.
- Rapid data summarisation: with ChatGPT, one can parse lengthy earnings calls or filings quickly.
- Scenario analysis: running "what if" scenarios via AI, such as how a new product might reshape a market, the expected effects of tariffs on a given sector, or what headwinds a new regulation could create.
- Monitoring social trends: AI tools aggregate social media sentiment and web traffic, offering another layer of insight into a company's traction.
- Idea validation: when considering a trend or hypothesis, cross-examine it using multiple AI platforms to find the weak points.

7. Remembering the risks. AI can give the impression that there is a final right answer to everything, but any tool can only digest the data it is designed to process, and like any tool it is only as good as the person using it. Given AI's complexity and known pitfalls (such as "hallucinations"), the risk of relying on its output is for the user to bear; it is not a substitute for professional advice, and only experienced users of AI should use it for financial analysis.
It is well known that AI can be, and often is, wrong in its analysis, and every AI finding needs to be verified and double-checked. No research method guarantees anything, and the risks include:
- Extreme volatility: tech stocks can swing wildly, and AI cannot tell you for sure when or by how much; it can be a predictor, but not a perfect one.
- Disruption risk: share market leaders today can lag tomorrow if innovation slows.
- Overvaluation: high hopes can lead to painful corrections, which can be sudden or extreme.
- Regulatory changes: new rules on data or antitrust can shift the landscape overnight.
- Behavioural bias: even seasoned investors can be swayed by hype or groupthink.

There are investors who think the current Golden Age of tech is another bubble, and the only question is when it will burst, not if.

Sources: business/autos-transportation/us-stock-market-concentration-risks-come-fore-megacaps-report-earnings-2025-07-23/

Disclaimer: This article is presented by Tiger Fintech (NZ) Limited and is for information only. It does not constitute financial advice. Investing involves risk. You should always seek professional financial advice before making any investment decisions.

NZ Herald, 22-07-2025
Anxious parents face tough choices on AI, from concern at what it might do to fear of their kids missing out
For Marc Watkins, a professor at the University of Mississippi who focuses on AI in teaching, 'we've already gone too far' to shield children from AI past a certain age. Yet some parents are still trying to remain gatekeepers to the technology. 'In my circle of friends and family, I'm the only one exploring AI with my child,' remarked Melissa Franklin, mother of a 7-year-old boy and a law student in Kentucky. 'I don't understand the technology behind AI,' she said, 'but I know it's inevitable, and I'd rather give my son a head start than leave him overwhelmed.'

'Benefits and risks'

The path is all the more difficult for parents given the lack of scientific research on AI's effects on users. Several parents cite a study published in June by MIT, showing that brain activity and memory were more stimulated in individuals not using generative AI than in those who had access to it. 'I'm afraid it will become a shortcut,' explained a father-of-three who preferred to remain anonymous. 'After this MIT study, I want them to use it only to deepen their knowledge.'

This caution shapes many parents' approaches. Tal prefers to wait before letting his sons use AI tools. Melissa Franklin only allows her son to use AI with her supervision, to find information 'we can't find in a book, through Google, or on YouTube'. For her, children must be encouraged to 'think for themselves', with or without AI.

But one father, a computer engineer with a 15-year-old, doesn't believe kids will learn AI skills from their parents anyway. 'That would be like claiming that kids learn how to use TikTok from their parents,' he said. It's usually 'the other way around'.

Watkins, himself a father, says he is 'very concerned' about the new forms that generative AI is taking, but considers it necessary to read about the subject and 'have in-depth conversations about it with our children'. 'They're going to use artificial intelligence,' he said, 'so I want them to know the potential benefits and risks.'
The chief executive of AI chip giant Nvidia, Jensen Huang, often speaks of AI as 'the greatest equalisation force that we have ever known', democratising learning and knowledge. But Watkins fears a different reality: 'Parents will view this as a technology that will be used if you can afford it, to get your kid ahead of everyone else'.

The computer scientist father readily acknowledged this disparity, saying: 'My son has an advantage because he has two parents with PhDs in computer science'. 'But that's 90% due to the fact that we are more affluent than average' – not their AI knowledge. 'That does have some pretty big implications,' Watkins said.

- Agence France-Presse