
Get Ready For AI On Everything, Everywhere All At Once
The prevailing theme at this year's NVIDIA GTC conference was that AI will run virtually everywhere. If NVIDIA CEO Jensen Huang's latest epic keynote proves prophetic, every machine is a potential AI node possessing ever-evolving intelligence.
The future is here; it's just distributed across machines. Many, many machines—from computers large and small to cars and robots. AI also informs digital twins, in which software representations of complex physical systems dot our organizational matrices.
We have the technology to make AI better than it was: fueled by data, AI will become better, faster and more intelligent. AI nodes will continue to run courtesy of GPUs, high-speed networking and connective software tissue, backed by beefy servers and vats of digital storage.
These technologies are governed by command-and-control constructs spanning public and private clouds and on-premises datacenters, extending tendrils out to the edge and onto PCs.
Most organizations pursuing an AI strategy today are targeting the deployment of generative AI powered by LLMs, whose applications generate content or ferret out information. These organizations constitute a growing enterprise AI market.
At its core, enterprise AI is about applying AI technology to the most critical processes in your business, driving productivity where it matters most. This could range from boosting employee productivity to augmenting customer experiences to grow revenues.
When used strategically—targeted at the right areas in the right way—enterprise AI empowers organizations to refine what sets them apart and enhances their competitive edge.
Imagine a bank crafting an LLM-fueled digital assistant that helps retrieve critical information for customers, potentially helping them to decide how best to allocate their money. Or a healthcare organization that uses a prescriptive GenAI solution to help draft notes on patient exams or provide helpful context to physicians during exams.
Seventy-eight percent of organizations surveyed by Deloitte expect to increase their AI spending in the next fiscal year, with GenAI expanding its share of the overall AI budget.
When it comes to executing their AI strategies, organizations will make technology architecture decisions based on what they are trying to do with their AI use cases, as well as their experience and comfort level.
While some may run GenAI models from public cloud providers, others will prefer running GenAI workloads on-premises to curb operational costs, which can spiral if not managed properly.
Organizations embarking on AI journeys for the first time may feel more comfortable running GenAI workloads on-premises, where they can control and manage their own data, or more specifically their IP, the secret sauce. For organizations governed by data-sovereignty mandates, on-premises deployment may be the only option.
Others requiring near real-time performance will look to the edge of networks, where latency is lower.
Today, many of these solutions will be powered by servers in corporate datacenters, or even somewhere along the edge of the network. Yet even those network boundaries are expanding as more developers run LLMs locally on AI PCs and workstations. This would have been impossible even two years ago; soon it will be standard practice.
Ultimately, technology decisions must align with the desired outcomes and each organization must make its own deployment decisions based on their goals.
With AI permeating every machine with silicon and circuits, organizations must choose the platform (or platforms) that provides the best scalability, security and business value for each use case.
Deploying GenAI for the first time can be fraught with complexities; even the most robust organizations fear the unknown. But don't fall prey to inertia.
There's no better time to embrace enterprise AI to operate critical AI applications and services in your datacenter or at the edge—where you can control and monitor performance, security and other factors that help you best protect and serve your business.
Wherever organizations choose to operate their GenAI solutions, they should lean on trusted advisers, who can help guide AI strategy, determine use cases and right-size infrastructure components so solutions run optimally.
And remember, in a world where AI is running in everything, everywhere and all at once, data remains your most precious fuel. Organizations must shore up their data estates to properly take advantage of GenAI.
The right adviser will help you prepare your data to be consumed, from cleaning and classifying it to bringing it to bear on targeted use cases.
Is your organization ready to harness AI to boost productivity?
Learn more about the Dell AI Factory.

Related Articles


Entrepreneur
Want to Rank in AI Search? Focus on These Sources
As AI platforms like ChatGPT and Perplexity reshape how users discover information, brands must shift from traditional SEO to strategic AI citation optimization to remain visible. Opinions expressed by Entrepreneur contributors are their own.

The way brands earn visibility through search is becoming unpredictable. As conversational AI platforms like ChatGPT, Perplexity, Gemini and Claude become primary entry points to information, it's clear they don't draw from the same sources or weigh authority the same way. What one system cites, another may ignore.

For enterprise brands, this fragmentation means search optimization must become multidimensional. It is no longer sufficient to rank well on Google; it is essential to be cited across the various AI engines that users increasingly consult. Brands that align with each model's source preferences thrive; those that aren't cited disappear from view.

Reddit is now a key citation engine
Since Reddit began licensing its data to OpenAI and Google, it has quickly become a rich source for LLMs. Reddit's licensing revenue surged from $12.3 million to $81.6 million in less than a year as AI firms tapped its massive, topic-organized archives. LinkedIn data shows Reddit citations in ChatGPT increased by 436%, making it the platform's second most-used citation source behind Wikipedia, at about 5.9% overall.

Big media brands are essential for trust signals
Visibility in AI tools like ChatGPT depends on how well brands "interface with the minds of AI agents." High-profile coverage in outlets like The Wall Street Journal and The New York Times strengthens the credibility signals that LLMs rely on.

AI search reshapes referral traffic
A TechCrunch report confirms that many websites saw organic traffic decline in 2024 due to AI-generated search results that deliver answers directly instead of driving clicks. Meanwhile, surveys show AI search referrals to US retail sites surged 1,300% during the holiday season, with users engaging more deeply with the content.

Distinct citation patterns across engines
Data cited by Search Engine Journal indicates ChatGPT provides about 2.6 citations per response, Gemini about 6.1 and Perplexity around 6.6. A recent arXiv study confirms that different LLMs show varying preferences: OpenAI models cite Reuters and AP News most, while Perplexity often cites the BBC.

What this means for established brands
Treat Reddit and niche forums as owned media: Community content such as how-to posts and genuine use-case anecdotes now surfaces in AI-generated responses. Reddit's structured, user-generated content is officially part of OpenAI's source pool. Brands should actively engage in these spaces to seed real-world case studies.
Earn coverage in top-tier media: AI systems rely on signals of reliability that come from recognized media outlets. If you aren't mentioned in leading news outlets, you risk being invisible in AI responses.
Optimize for conversational formats and structured data: Brands need to produce short, answer-ready content. SEO experts report that adopting schema markup (especially FAQ and JSON-LD) helps LLMs recognize and extract content, with a clear impact on citation frequency.
Monitor citations across AI platforms: Traffic metrics are no longer sufficient to gauge visibility. Brands should track where they are being mentioned, referenced or recommended in ChatGPT, Perplexity, Gemini, Claude and Google's AI Overviews.

Why brands must adapt now
First impressions are now made by AI: Users are increasingly turning to AI for answers, forming opinions based on what the AI cites, even before visiting a website. If your brand isn't cited, it may as well not exist in the moment that matters.
AI visibility requires strategic alignment: This is not just marketing or PR. It's PR, content and community working in sync to influence AI citation outcomes. It demands an integrated strategy that prioritizes narrative framing, thought leadership in respected outlets, structured content and direct participation in forums.
Quality matters more than volume: A deep, authoritative case study in a niche media outlet can carry more weight in citation algorithms than hundreds of shallow blog posts. Excelling in depth and reputation matters more than churning out content regardless of quality.
Visibility is existential: AI tools are redefining the digital shelf. Unlike traditional paid ads or search rankings, citation in a conversational AI answer propels your brand into the user's decision frame. Ignored by AI, a brand risks fading into irrelevance.

How to act now
Audit your presence: Ask major AI platforms, "What are the top [product/service category] brands?" If you're missing, reassess your representation strategy.
Secure mentions in top media: Pursue thought-leadership placements in outlets like WSJ, NYT, FT, Reuters, Bloomberg and The Washington Post. AI engines trust these sources more than niche blogs.
Publish structured, AI-ready content: Create concise explainers, FAQs and comparison guides under 200 words, tagged with appropriate schema. Make your content easy for machines to parse and quote.
Engage community platforms authentically: Contribute real-world expertise in subreddits and specialized forums. Guide conversations rather than spamming, and align posts with user intent and brand messaging.
Implement AI visibility monitoring: Use tools to track the mentions, tone and volume of citations. Adjust content and engagement strategies based on what resonates, and be ready to pivot.
Measure sentiment directionally: Monitor tone in AI-generated mentions. Positive framing earns citations more consistently than neutral or negative narratives.

Brands that act now to optimize their visibility in LLM ecosystems will control the narrative and establish authority before competitors. Those that don't adapt risk fading into silence, irrelevant at the very moment when AI serves as the first touchpoint with users.
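The schema-markup advice above can be made concrete. A minimal sketch, assuming a simple FAQ page: the helper below builds a schema.org FAQPage JSON-LD block from question-and-answer pairs (the function name and the sample question are illustrative, not from the article).

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

# Hypothetical example content, not from the article:
snippet = faq_jsonld([
    ("What is AI citation optimization?",
     "Structuring content so AI search engines can extract and cite it."),
])
# Embed in the page head so crawlers and LLM pipelines can parse it:
print('<script type="application/ld+json">\n' + snippet + "\n</script>")
```

The resulting `<script>` tag goes in the page's HTML head; Question/Answer pairs in this shape are what structured-data parsers look for when extracting answer-ready content.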


Geek Wire
From grief to innovation: Seattle tech vets building personal AI tool with persistent memory and privacy
Mary Jesse couldn't sleep. Grieving after her husband's unexpected death from late-stage cancer, the longtime Seattle business and engineering leader typed three words into ChatGPT: 'I am sad.' The AI's surprisingly compassionate response helped her through that difficult moment, and others that followed, validating her experience and reassuring her that she could get through it. 'It was just really simple,' she recalled. 'But it was so helpful.'

It also revealed the popular chatbot's limitations. Jesse found that ChatGPT couldn't easily resurface the context of their past conversations. She worried about the privacy implications as well.

Jesse said she wouldn't normally share such a personal story publicly, but the experience was the basis for what would become her next venture. She and two other tech industry veterans, Alan Caplan and Bob Bergstrom, this week unveiled their new Seattle-based startup, ACME Brains, which is building what they call a 'personal context engine.' They say the patent-pending AI system will remember key details over time and give users control over their data.

The first product built on the system, currently under development, is called nexie: a personal AI assistant designed to seamlessly resurface information from past conversations without requiring users to manually search through threads or craft elaborate prompts to maintain continuity over time. Nexie is in early development with a working prototype. The company plans to start alpha testing soon, followed by a beta program focused on gathering user feedback prior to a public launch.

The three co-founders bring a broad range of experience to the startup. Jesse, CEO, began her career in the wireless industry, in engineering and leadership roles at McCaw Cellular and AT&T Wireless, before co-founding the mobile infrastructure company RadioFrame Networks. She has led and advised early-stage startups and was CEO of MTI, a global provider of smart locks and security systems. Bob Bergstrom, chief scientist, has worked as both a software engineer and patent attorney for more than four decades. Earlier in his career, he conducted scientific research in x-ray crystallography, and he has since focused on intellectual property strategy and software development. Alan Caplan, COO, was Amazon's first general counsel, joining in the company's early days after working with Jesse at McCaw Cellular. He led several business units at Amazon, including Kitchen, Payments and Corporate Development, and went on to hold senior leadership roles at Blue Origin and Vulcan.

Jesse envisions nexie as everything from a digital journal to a travel companion, or even a lightweight system for tracking personal contacts and relationships, depending on a user's needs. The subscription-based service will be available in free and premium versions. It won't rely on advertising or data monetization, a deliberate departure from many consumer tech platforms.

While companies like OpenAI, Anthropic and Google are all investing heavily in AI memory features, Jesse said ACME Brains is taking a different approach. Rather than embedding memory within a large language model, its architecture keeps the user's data separate and under their control, seeking to be more efficient and secure. Jesse sees nexie not as a competitor to the big AI platforms and existing LLMs, but as a tool that can enhance them, making their output more useful and meaningful for personal use.

Over time, she believes the underlying system ACME Brains is developing could serve as a kind of 'personal credential,' carrying private, user-controlled data and context across AI apps and platforms. The Seattle-based startup has been bootstrapped by its founders so far, with about 11 people working across technology development, marketing and operations, primarily in a virtual capacity. Jesse said ACME Brains expects a public launch of nexie by late 2025 or early 2026, with sign-ups for future beta testing now available at
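ACME Brains hasn't published implementation details, but the general pattern the article describes, keeping memory in a user-owned store outside the model and injecting relevant pieces into each prompt, can be sketched minimally. Everything below (the MemoryStore class, the keyword-overlap scoring) is an illustrative assumption, not the company's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """User-owned memory kept outside the LLM (illustrative sketch only)."""
    facts: list = field(default_factory=list)

    def remember(self, text: str) -> None:
        self.facts.append(text)

    def recall(self, query: str, k: int = 3) -> list:
        # Naive relevance: rank stored facts by word overlap with the query;
        # a real system would use embeddings, but the separation is the point.
        qwords = set(query.lower().split())
        scored = sorted(
            self.facts,
            key=lambda f: len(qwords & set(f.lower().split())),
            reverse=True,
        )
        return [f for f in scored[:k] if qwords & set(f.lower().split())]

def build_prompt(store: MemoryStore, user_msg: str) -> str:
    """Prepend retrieved memories so a stateless model sees past context."""
    context = store.recall(user_msg)
    header = "".join(f"[memory] {m}\n" for m in context)
    return header + user_msg

store = MemoryStore()
store.remember("User's husband passed away last year.")
store.remember("User is planning a trip to Portugal in June.")
print(build_prompt(store, "Any tips for my Portugal trip?"))
```

Because the store lives entirely on the user's side, the same memories could in principle be carried across different AI apps and models, which is the 'personal credential' idea described above.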


Gizmodo
Forget Dyson, Amazon Restocks Shark AI Voice Control Robot Vacuum at 50% Off
We just uncovered one of the hottest deals of the summer if you're in the market for a top-level robot vacuum. Amazon has dropped the Shark AI Voice Control Robot Vacuum back to its Black Friday 2024 pricing: just $298, a full 50% off its usual $599 price. This is one of the most versatile, intelligent robot vacs on the market, made by one of the leading home cleaning brands, and its return to holiday pricing is extremely rare.

When you have a robot vacuum like the Shark AI Voice Control Robot, you can basically outsource the cleaning of your floors. It is self-emptying, its base has a 60-day capacity, and there's no disposal bag to buy separately. Once you let it map your floors and set up its schedule, it's virtually autonomous.

Even the most advanced robot vacs require frequent TLC in the form of removing the annoying knots of hair and pet fur that wind around the brush roll, eventually bringing it to a halt. The Shark AI Voice Control Robot doesn't have a brush roll; instead it relies on a bristle-free set of flexible silicone fins. Those fins are not only better at removing hair from carpets and hard-surfaced floors, they're also cleaned regularly by the robot itself.

The self-cleaning fins get more chances to remove dirt, hair and debris from your floors than the brush rolls on other robot vacs, because the Shark uses Matrix Clean Multi-Pass tech, which sends the robot over your floors in a grid pattern with multiple passes. Most other robot vacs use a serpentine pattern that gives them just one pass over each part of your floor. Combine the Multi-Pass method with exceptionally powerful suction, and it's easy to see how the Shark AI Voice Control Robot cleans floors far more effectively than its rivals.

You can be sure the Shark isn't missing any areas of your floor, because it uses 360-degree LiDAR smart mapping to learn the layout of your home in detail. It can also 'see' objects 4.5 inches or taller, and knows to avoid them. Creating and modifying a cleaning schedule is easy thanks to the intuitive Shark app, and the vacuum also responds to voice commands via Alexa or Google Assistant.

Nobody actually enjoys vacuuming, and if you offered people a $300 solution that would take the chore off their plate once and for all, and likely do a better job than any human, they'd jump at it. Now's your chance: Amazon's limited-time 50% off deal on the Shark AI Voice Control Robot is happening right now.