
Get Ready For AI On Everything, Everywhere All At Once

Forbes

08-04-2025


As AI proliferates across machines, organizations must carefully choose their operating environments. Trusted advisers can help.

The prevailing theme at this year's NVIDIA GTC conference was that AI will run virtually everywhere. If NVIDIA CEO Jensen Huang's latest epic keynote proves prophetic, every machine is a potential AI node possessing ever-evolving intelligence. The future is here; it's just distributed across many, many machines, from computers large and small to cars and robots. AI also informs digital twins, in which software representations of complex physical systems dot our organizational matrices.

We have the technology to make AI better than it was. Fueled by data, AI will be better, faster and more intelligent. AI nodes will continue to run on GPUs, high-speed networking and connective software tissue, along with beefy servers and vats of digital storage. These technologies are meticulously governed by command-and-control constructs across public and private clouds, on-premises and extending out to the edge on PCs.

Most organizations pursuing an AI strategy today are targeting generative AI powered by LLMs, whose applications generate content or ferret out information. These organizations constitute a growing enterprise AI market. At its core, enterprise AI is about applying AI technology to the most critical processes in your business, driving productivity where it matters most, from boosting employee productivity to augmenting customer experiences to grow revenue. When used strategically, targeted at the right areas in the right way, enterprise AI empowers organizations to refine what sets them apart and sharpen their competitive edge.

Imagine a bank crafting an LLM-fueled digital assistant that retrieves critical information for customers, potentially helping them decide how best to allocate their money. Or a healthcare organization using a prescriptive GenAI solution to draft notes on patient exams or provide helpful context to physicians during exams. Seventy-eight percent of organizations surveyed by Deloitte expect to increase their AI spending in the next fiscal year, with GenAI expanding its share of the overall AI budget.

When it comes to executing their AI strategies, organizations will make technology architecture decisions based on their AI use cases, as well as their experience and comfort level. Some may run GenAI models from public cloud providers; others will prefer running GenAI workloads on-premises to curb operational costs, which can spiral if not managed properly. Organizations embarking on AI journeys for the first time may feel more comfortable running GenAI workloads on-premises, where they can control and manage their own data, or more specifically, the secret sauce also known as IP. For organizations governed by data sovereignty mandates, on-premises may be the only option. Others requiring near real-time performance will look to the edge of networks, where latency is lower.

Today, many of these solutions are powered by servers in corporate datacenters, or even somewhere along the edge of the network. Yet even those network boundaries are expanding as more developers run LLMs locally on AI PCs and workstations. This would have been impossible even two years ago; soon it will be standard practice.

Ultimately, technology decisions must align with desired outcomes, and each organization must make its own deployment decisions based on its goals. With AI permeating every machine with silicon and circuits, organizations must choose the platform (or platforms) that provides the best scalability, security and business value for each use case. Deploying GenAI for the first time can be fraught with complexities; even the most robust organizations fear the unknown.

But don't fall prey to inertia. There's no better time to embrace enterprise AI and operate critical AI applications and services in your datacenter or at the edge, where you can control and monitor performance, security and other factors that help you best protect and serve your business.

Wherever organizations choose to operate their GenAI solutions, they should lean on trusted advisers for help. Advisers can guide your AI strategy, determine use cases and right-size infrastructure components to run your solutions optimally.

And remember: in a world where AI is running in everything, everywhere, all at once, data remains your most precious fuel. Organizations must shore up their data estates to take full advantage of GenAI. The right adviser will help you prepare your data to be consumed, from cleaning and classifying it to bringing it to bear on targeted use cases.

Is your organization ready to harness AI to boost productivity? Learn more about the Dell AI Factory.
