Brand First: How Retailers Win Hearts, Wallets & The Algorithm

Forbes · 16-07-2025
Showing up where your shoppers actually look has always been tough—and today it's tougher. Search isn't what it used to be; you can't rely on keywords, links, or banner noise to climb the results. Discovery now flows across search engines, social feeds, retail media networks, marketplaces, and AI assistants that stitch signals together in real time. In this AI‑curated path to purchase, retailers win by being smarter—not louder—through structured data, credible content, and a clear, consistent brand that shoppers trust and algorithms surface.
Generative AI enters the conversation
Generative AI (gen AI) is reimagining digital shopping by making product search conversational. Consumers used to look up a brand or product by typing brand names or details such as price to pull up results. Now they can simply ask for advice and recommendations.
Accenture's recently published Consumer Pulse Survey of more than 18,000 people across 14 countries found that approximately half of consumers have made a purchase decision with the support of gen AI, making it the fastest-growing source of buying advice in the past year. For active users, defined as people using gen AI tools at least weekly for personal or professional reasons, it is now the second-highest source of product recommendations after physical stores.
The survey also found that gen AI is no longer just a tool for speed and personalization; it is becoming a confidant and trusted advisor. More than one-third (36%) of active gen AI users now consider the technology 'a good friend,' a large proportion (93%) rely on it for personal development advice, and 1 in 10 call it their most trusted source for purchase decisions.
When consumers trust AI as they would a close friend, every interaction becomes an opportunity to deepen—or lose—that relationship.
Personalized recommendations
A great example of tapping AI to deepen consumer relationships is Noli, the AI-powered multi-brand beauty startup founded and backed by the L'Oréal Groupe. Noli, which stands for 'No One Like I,' is on a mission to empower every beauty consumer with their own intelligent, trusted advisor.

Noli is reinventing how people discover and shop for beauty products by addressing the number one pain point for beauty customers: too many options, and a shortage of unbiased advice in a market full of conflicting claims.
Noli cuts through the beauty noise with AI diagnostics trained on 1M+ skin data points and thousands of product formulations. It decodes each user's beauty profile and delivers confident product picks to their door. Co‑Founder & CEO Amos Susskind says, 'Beauty is full of choice, opinions, claims, noise, and emotional stakes, making it the perfect category for personalization and expert guidance.'
Master the large language model ecosystem
It is easy to see how large language models (LLMs) are fast becoming the new influencers. According to Accenture's survey, consumers are already using gen AI to inform purchase decisions, making it the fastest-growing source of recommendations.
To avoid being misrepresented, or excluded from consumer consideration entirely, retailers need to take an active role in the LLM ecosystem – a network of models, platforms, and data sources.
Winning now means Generative Engine Optimization (GEO) as well as classic SEO, because AI assistants don't crawl, index, and rank like traditional search engines; they synthesize, summarize, and represent your brand.
Feed them structured, high‑quality, rights‑cleared content that's current and consistent; refresh it often; tag it so context is clear; and monitor AI surfaces to correct drift. Do that, and your brand story shows up accurately—and gets recommended—across AI chats, agents, and shoppable answers.
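To make 'structured, tagged content' concrete, here is a minimal sketch in Python. The catalog record and helper function are hypothetical and the field names are illustrative; it emits schema.org-style Product markup as JSON-LD, one common way to give search engines and AI assistants unambiguous product context.

```python
import json

# A minimal sketch of publishing machine-readable product context.
# The catalog record and helper below are hypothetical; schema.org's
# Product/Offer vocabulary is one widely used way to tag such data.

def to_product_jsonld(record: dict) -> str:
    """Convert an internal catalog record into schema.org Product JSON-LD."""
    jsonld = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": record["name"],
        "description": record["description"],
        "sku": record["sku"],
        "brand": {"@type": "Brand", "name": record["brand"]},
        "offers": {
            "@type": "Offer",
            "price": str(record["price"]),
            "priceCurrency": record["currency"],
            "availability": "https://schema.org/InStock"
            if record["in_stock"] else "https://schema.org/OutOfStock",
        },
    }
    return json.dumps(jsonld, indent=2)

record = {
    "name": "Everyday Trail Jacket",
    "description": "Lightweight, recycled-fabric shell with a lifetime repair guarantee.",
    "sku": "TJ-1042",
    "brand": "ExampleBrand",
    "price": 129.00,
    "currency": "USD",
    "in_stock": True,
}
print(to_product_jsonld(record))
```

The output would typically be embedded in a page's ld+json script tag and kept in sync with the product feed; which fields a given assistant actually ingests will vary by platform.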
Prepare for Agentic Personal Shoppers
Then there's the newer member of the AI family, agentic AI, a technology that can act autonomously on behalf of consumers—making purchases without the traditional shopping journey. In fact, 75% of consumers told Accenture they are open to using a trusted AI-powered personal shopper that understands their needs.
When shoppers delegate decisions to agentic shopping agents, AI effectively becomes the buyer of record. Traditional retail media—banner ads, paid search slots, even your website—can be skipped as agents source the 'best fit' directly from data feeds, reviews, inventory, and price APIs. That's the threat: a frictionless race to the lowest acceptable price.
The opportunity? Make sure your brand carries machine‑readable reasons to choose you—quality signals, experience benefits, sustainability creds, fit/usage guidance, service guarantees—that matter to humans and to their agents. Give AI more to weigh than price, and you stay in the basket.
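As a rough illustration of that point, the sketch below shows how a shopping agent could rank offers once a feed exposes quality, sustainability, and service signals alongside price. The attribute names, weights, and offers are invented for the example and are not any real agent's logic.

```python
# A toy sketch of how a shopping agent might rank offers when a feed
# exposes more than price. Attribute names, weights, and offers are
# illustrative assumptions, not any vendor's actual scoring model.

OFFERS = [
    {"brand": "A", "price": 24.0, "rating": 4.1, "recycled": False, "warranty_years": 0},
    {"brand": "B", "price": 29.0, "rating": 4.7, "recycled": True,  "warranty_years": 2},
]

WEIGHTS = {"price": -0.5, "rating": 2.0, "recycled": 1.5, "warranty_years": 0.8}

def score(offer: dict) -> float:
    """Higher is better: cheaper helps, but quality and service signals also count."""
    return (WEIGHTS["price"] * offer["price"] / 10
            + WEIGHTS["rating"] * offer["rating"]
            + WEIGHTS["recycled"] * offer["recycled"]
            + WEIGHTS["warranty_years"] * offer["warranty_years"])

best = max(OFFERS, key=score)
print(f"Agent picks brand {best['brand']} (score {score(best):.2f})")
```

On price alone, the cheaper offer A wins; once ratings, recycled materials, and warranty carry weight, offer B is selected instead, which is the point of giving agents more than price to weigh.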
The trust imperative
Consumers are sceptical: 41% say AI content can feel inauthentic and 45% say it lacks a human touch (Accenture). Trust hinges on transparent, consent‑based data use—shoppers are used to tuning their own recommendation feeds and don't want their data repurposed in ways that surprise them.
Protect that trust with strong cybersecurity and data governance, and keep humans in the loop so AI reflects brand values and delivers the service customers expect. The payoff: AI experiences that are personal and trustworthy, sustained by ongoing investment in the tech, training, and teams that keep great retail brands real.
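One way to picture that combination of consent-based data use and human oversight is the brief sketch below. The profile fields, confidence threshold, and review queue are hypothetical placeholders, not a prescribed architecture.

```python
# A minimal sketch of consent-gated personalization with a human-in-the-loop
# fallback. Field names, threshold, and the review queue are hypothetical.

REVIEW_QUEUE: list[dict] = []

def recommend(profile: dict, draft_from_model: str, confidence: float) -> str:
    # Respect consent: never repurpose data the shopper did not opt in to share.
    if not profile.get("consented_to_personalization", False):
        return "Here are our most popular picks this week."
    # Keep humans in the loop when the model is unsure or off-brand.
    if confidence < 0.7:
        REVIEW_QUEUE.append({"customer": profile["id"], "draft": draft_from_model})
        return "A stylist will follow up with a tailored recommendation."
    return draft_from_model

print(recommend({"id": "c-1", "consented_to_personalization": True},
                "Based on your routine, try the gentle vitamin C serum.", 0.92))
```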
The time to act is now
The investments retailers and brands make today will determine whether they remain visible, relevant, and indispensable in an AI-driven world of tomorrow. While some retailers debate whether to embrace AI, others are already reshaping the industry by using AI to become more empathetic, more responsive, and more valuable to consumers than ever before. But they'll do so while maintaining the human touch—be that the store associate or customer service agent—that makes brands memorable and meaningful. The question is: will your brand be among them?

Related Articles

Meta is shelling out big bucks to get ahead in AI. Here's who it's hiring
CNN · 19 minutes ago

Meta CEO Mark Zuckerberg is on a mission for his company to be the first to reach so-called artificial superintelligence — generally considered to mean AI that's better than all humans at all knowledge work. It's a nebulous and likely far-out concept that some analysts say may not immediately benefit the company's core business. Yet Zuckerberg is shelling out huge sums to build an all-star team of researchers and engineers to beat OpenAI and other rivals to it.

Zuckerberg's recruiting spree, which has reportedly included multimillion-dollar pay packages to lure top talent away from key rivals, has kicked off a talent race within the AI industry. Last month, OpenAI CEO Sam Altman claimed Meta was offering his employees $100 million signing bonuses to switch companies. And just this week, Google CEO Sundar Pichai was asked during an earnings call about his company's status in the AI talent war, a sign that Wall Street is now also invested in the competition.

The stakes are high for Zuckerberg — after Meta's pivot to the metaverse fell flat, he's reoriented the company around AI in hopes of being a leader in the next transformational technology wave. The company has invested billions in data centers and chips to power its AI ambitions that it's now under pressure to deliver on. Unlike other tech giants, Meta doesn't have a cloud computing business to generate immediate revenue from those infrastructure investments. And the company is coming from somewhat behind competitors, after reported delays in releasing the largest version of its new Llama 4 AI model.

'That's the Llama 4 lesson: You can have hundreds of thousands of (GPU chips), but if you don't have the right team developing the model, it doesn't matter,' said D.A. Davidson analyst Gil Luria.

But more than anything, Zuckerberg appears to be in a circle of Silicon Valley 'AI maximalists' who believe the technology will change everything about how we live and work. Becoming a leader in the space is essential to Meta and other companies whose leaders follow that line of thinking, Luria said. 'For our superintelligence effort, I'm focused on building the most elite and talent-dense team in the industry,' Zuckerberg said in a Threads post earlier this month.

Meta last month invested $14.3 billion in data labeling startup Scale AI. Scale founder and then-CEO Alexandr Wang joined the social media giant as part of the deal, along with several of Scale's other top employees. Wang is now leading the new Meta Superintelligence Lab, along with former GitHub CEO Nat Friedman. 'My job is to make amazing AI products that billions of people love to use,' Friedman said in an X post earlier this month. 'It won't happen overnight, but a few days in, I'm feeling confident that great things are ahead.'

And in recent weeks, Meta has attracted top researchers and engineers from the likes of OpenAI, Apple, Google and Anthropic. Multiple news outlets, including Bloomberg, Wired and The Verge, have reported that Meta has, in some cases, offered pay packages worth hundreds of millions of dollars to new AI hires. It's a sign of just how far Zuckerberg is willing to go in his quest to win the AI superintelligence race, although the Meta chief has pushed back on some of the reporting around the compensation figures.

It is with that mission that Meta's new team will be working to build superintelligence. Here are some of the most prominent recent hires to the team.
This list was compiled based on public statements, social media profiles and posts, and news reports, and may not be exhaustive. Meta declined to comment on this story.

Zuckerberg's drive to get ahead on AI may be rooted in part in his desire to own a foundational platform for the next major technology wave. Meta lost the race to control the operating systems for the mobile web era in the early 2000s and 2010s, which Apple and Google won. In recent years, he has not been shy about expressing his frustration with having to pay fees to app store operators and comply with their policies. Meta recently partnered with Amazon Web Services on a program to support startups that want to build on its Llama AI model, in an effort to make its technology essential to businesses emerging during the AI boom.

Although AI has benefitted Meta's core advertising business, some analysts question how Zuckerberg's quest for 'superintelligence' will benefit the company. Emarketer senior analyst Minda Smiley said she expects Meta executives to face tough questions during the company's earnings call next week about how its superintelligence ambitions 'align with the company's broader business roadmap.' 'Its attempts to directly compete with the likes of OpenAI … are proving to be more challenging for the company while costing it billions of dollars,' Smiley said.

But as its core business continues to grow rapidly, Meta has the money to spend to build its team and 'steal' from rivals, said CFRA Research analyst Angelo Zino. And, at least for now, investors seem to be here for it — the company's shares have risen around 20% since the start of this year.

And if Zuckerberg succeeds with his vision, it could propel Meta far beyond a social media company. 'I think Mark's in a manifest destiny point of his career,' said Zack Kass, an AI consultant and former OpenAI go-to-market lead. 'He always wants to point to Facebook groups as being this way that he is connecting the world … And if he can build superintelligence that cures cancer, he doesn't have to talk about Facebook groups anymore as being his like lasting legacy.'

Why Do Some AI Models Hide Information From Users?
Time Business News · an hour ago

In today's fast-evolving AI landscape, questions around transparency, safety, and ethical use of AI models are growing louder. One particularly puzzling question stands out: why do some AI models hide information from users? Building trust, maintaining compliance, and producing responsible innovation all depend on understanding this dynamic, which is not merely academic for an AI solutions or product engineering company. Drawing on in-depth research, professional experience, and the practical difficulties of large-scale AI deployment, this article examines the causes of this behavior.

AI is a powerful instrument. It can help with decision-making, task automation, content creation, and even conversation replication. But enormous power also carries a great deal of responsibility, and that responsibility at times includes intentionally withholding information from users. Consider the figures:

- According to OpenAI's 2023 Transparency Report, GPT-based models declined over 4.2 million requests for breaking safety rules, such as requests involving violence, hate speech, or self-harm.
- A Stanford study on large language models (LLMs) raised concerns about 'over-blocking' and its effect on user experience, finding that more than 12% of filtered queries were not intrinsically harmful but were caught by overly aggressive filters.
- Research from the AI Incident Database shows that in 2022 alone there were almost 30 cases where private, sensitive, or confidential information was inadvertently shared or made public by AI models.

At its core, the goal of any AI model, especially LLMs, is to assist, inform, and solve problems. But that doesn't always mean full transparency. AI models are trained on large-scale datasets drawn from books, websites, forums, and more, and this training data can contain harmful, misleading, or outright dangerous content. So AI models are designed to:

- Avoid sharing dangerous information, like how to build weapons or commit crimes.
- Reject offensive content, including hate speech or harassment.
- Protect privacy by refusing to share personal or sensitive data.
- Comply with ethical standards, avoiding controversial or harmful topics.

As an AI product engineering company, we often embed guardrails, automatic filters and safety protocols, into AI systems. They are not arbitrary; they are required to prevent misuse and follow rules. Expert insight: in projects where we developed NLP models for legal tech, we had to implement multi-tiered moderation systems that auto-redacted sensitive terms. This is not over-caution; it's compliance in action.

In AI, compliance is not optional. Companies building and deploying AI must align with local and international laws, including:

- GDPR and CCPA: privacy regulations requiring data protection.
- COPPA: preventing AI from sharing adult content with children.
- HIPAA: safeguarding health data in medical applications.

These legal boundaries shape how much an AI model can reveal. For example, a model trained for healthcare diagnostics cannot disclose medical information unless authorized. This is where AI solutions companies come in, designing systems that comply with complex regulatory environments.
Some users attempt to jailbreak AI models to make them say or do things they shouldn't. To counter this, models may:

- Refuse to answer certain prompts.
- Deny requests that seem manipulative.
- Mask internal logic to avoid reverse engineering.

As AI becomes more integrated into cybersecurity, finance, and policy applications, hiding certain operational details becomes a security feature, not a bug.

Although the intentions are usually good, there are consequences. Many users, including academic researchers, find that AI models:

- Avoid legitimate topics under the guise of safety.
- Respond vaguely, creating unproductive interactions.
- Fail to explain why an answer is withheld.

For educators or policymakers relying on AI for insight, this lack of transparency can create friction and reduce trust in the technology. Industry observation: in an AI-driven content analysis project for an edtech firm, over-filtering prevented the model from discussing important historical events. We had to fine-tune it carefully to balance educational value and safety.

If an AI model consistently refuses to respond to a certain type of question, users may begin to suspect:

- Bias in training data
- Censorship
- Opaque decision-making

This fuels skepticism about how the model is built, trained, and governed. For AI solutions companies, this is where transparent communication and explainable AI (XAI) become crucial.

So, how can we make AI more transparent while keeping users safe? Models should not just say, 'I can't answer that.' They should explain why, with context. For instance: 'This question may involve sensitive information related to personal identity. To protect user privacy, I've been trained to avoid this topic.' This builds trust and makes AI systems feel cooperative rather than authoritarian.

Instead of blanket bans, modern models use multi-level safety filters. Some emerging techniques include:

- SOFAI multi-agent architecture: different AI components manage safety, reasoning, and user intent independently.
- Adaptive filtering: considers the user's role (researcher vs. child) and intent.
- Deliberate reasoning engines: use ethical frameworks to decide what can be shared.

As an AI product engineering company, incorporating these layers is vital in product design, especially in domains like finance, defense, or education. AI developers and companies must also communicate:

- What data was used for training
- What filtering rules exist
- What users can (and cannot) expect

Transparency helps policymakers, educators, and researchers feel confident using AI tools in meaningful ways. Recent work, like DeepSeek's efficiency breakthrough, shows how rethinking distributed systems for AI can improve not just speed but transparency. DeepSeek used Mixture-of-Experts (MoE) architectures to cut down on pointless communication, which also means less noise in the model's decision-making path, making its logic easier to audit and interpret. Traditional systems often fail because they try to fit AI workloads into outdated paradigms. Future models should focus on:

- Asynchronous communication
- Hierarchical attention patterns
- Energy-efficient design

These changes improve not just performance but also trustworthiness and reliability, key to information transparency.
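As a simplified illustration of layered filtering and explanatory refusals, here is a short sketch. The category labels, user roles, and stub classifier are invented for the example; a production system would plug in trained moderation models and policy engines rather than this toy logic.

```python
# A simplified sketch of a layered safety filter that explains refusals and
# adapts to user role, in the spirit described above. Categories, roles, and
# the classifier stub are illustrative assumptions, not any vendor's API.

BLOCKED_FOR_EVERYONE = {"weapons_instructions", "self_harm"}
RESTRICTED_FOR_MINORS = {"adult_content"}

def classify(prompt: str) -> str:
    """Stub: a real system would call a trained moderation classifier here."""
    return "benign"

def answer(prompt: str, user_role: str = "adult") -> str:
    category = classify(prompt)
    if category in BLOCKED_FOR_EVERYONE:
        return (f"I can't help with that: it falls under '{category}', "
                "which I'm trained to decline for safety reasons.")
    if user_role == "minor" and category in RESTRICTED_FOR_MINORS:
        return "I can't share that with this account type; a guardian-verified account may have access."
    return "MODEL_RESPONSE_PLACEHOLDER"

print(answer("What's a good moisturizer?"))
```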
If you're in academia, policy, or industry, understanding the 'why' behind AI information hiding allows you to:

- Ask better questions
- Choose the right AI partner
- Design ethical systems
- Build user trust

As an AI solutions company, we integrate explainability, compliance, and ethical design into every AI project. Whether it's conversational agents, AI assistants, or complex analytics engines, we help organizations build models that are powerful, compliant, and responsible.

In conclusion, AI models hide information for safety, compliance, and security reasons. However, trust can only be established through transparency, clear explainability, and a strong commitment to ethical engineering. Whether you're building products, crafting policy, or doing research, understanding this behavior can help you make smarter decisions and leverage AI more effectively. If you're a policymaker, researcher, or business leader looking to harness responsible AI, partner with an AI product engineering company that prioritizes transparency, compliance, and performance. Get in touch with our AI solutions experts, and let's build smarter, safer AI together. Transform your ideas into intelligent, compliant AI solutions—today.

A new type of dealmaking is unnerving startup employees. Here are the questions to ask to make sure you don't get left out.
Business Insider · 2 hours ago

A new kind of dealmaking is sweeping Silicon Valley, forcing employees to be vigilant about how much trust they are willing to put in startup founders. Over the past two years, instead of acquiring AI startups outright, Big Tech companies have been licensing their technology or making deals for top talent, with startup employees sometimes getting divided into separate camps of haves and have-nots. Those with the most desirable AI skills reap a windfall while those who remain are shrouded in uncertainty.

That recently happened to Windsurf employees after the AI coding company was on the verge of being acquired by OpenAI for $3 billion, but was instead split in half. Google paid billions to hire Windsurf's CEO and top talent, and the hundreds of employees who remained were bought by another startup, Cognition. Unfortunately for startup employees, many investors expect these kinds of novel transactions to continue as the velocity of developments in AI makes companies unlikely to want to wait months or years for regulatory approval.

Candidates need to ask tough questions about the founder

Given traditional M&A has mostly gone out the window, it is more important than ever for startup employees to do their homework, advises Steve Brotman, managing partner at Alpha Partners. "In light of what we just saw with Windsurf, it's crucial to understand the ownership dynamics," Brotman said. "You don't want to be working 100-hour weeks only to realize your options are underwater or your exit upside is capped. And remember: companies that are transparent and deliberate about governance tend to be better long-term bets, both for your career and your equity." "Ask hard questions about runway, revenue, burn, and investor syndicate quality," Brotman continued. "Who's on the board? Are they structured for long-term growth or a quick flip?"

The most important thing candidates should assess is how much they trust the founder, according to Deedy Das, an investor at Menlo Ventures. "Nobody wants to talk about the fact that founders control almost everything that happens in a company, including how you get paid, when you get paid, how the equity vests, and when you can sell the equity," said Das. "It's everything, so having trust in your founder to do the right thing by the team is extremely important."

Just as investors would typically research a founder before writing a check to one of their many portfolio companies, prospective employees should ask around about founders whom they could be tied to for years, said Hari Raghavan, cofounder and CEO of Autograph. "They should be doing diligence on whether this is a standup person," said Raghavan. "Do your best to suss out, 'Are these guys going to take care of me?'" Raghavan suggests that founders should sign a written pledge agreeing to treat employees well in terms of stock options and exit scenarios. "These are things that any good founder should be doing, and the vast majority of good ones do, but I think even just establishing that set of rules is a good idea," he said.

Prospective employees should not be afraid to "interrogate" a founder on how they are thinking about an exit, according to Jake Saper, a general partner at Emergence Capital. "Ask founders how they would weigh staying independent, a classic acquisition, or a licensing deal that carves out key people," Saper said. "Their answer tells you a lot about the journey you're signing up for." Scrutinizing the fine print has also become more important, said Saper.
"Make sure offer letters and stock agreements spell out vesting acceleration, treatment of options, and retention bonuses if only 'substantially all' of the team moves," Saper said. "Those clauses mattered at Inflection and Windsurf, and they will matter again." In 2024, Microsoft hired the founder of Inflection AI, Mustafa Suleyman, and some of the startup's staff to help lead its AI efforts. In June, Meta paid $14 billion for a 49 percent stake in the data labeling company Scale AI and hired its founder, Alexandr Wang, to run its Superintelligence group. Meta also hired some of the startup's researchers. Last week, Scale AI laid off 14% of its workforce, or 200 employees, and revealed it is unprofitable. Finally, Saper says to take a hard look at the underlying business model of a startup to make sure it can last. "Startups with unique data feeds, embedded distribution or clear recurring revenue have leverage to stay independent," Saper said. "If a company's main asset is a brilliant but portable research team, you should assume Big Tech will come knocking."
