Experts reveal how AI Agents impact retail, shopping and customer loyalty

Techday NZ | 16-06-2025

AI agents are poised to become part of everyday life. Google's Gemini helps plan your week, while OpenAI's voice assistants manage tasks through natural conversation. A wave of startups and innovators is already building AI agent solutions for specific business needs using foundation models from leading providers.
Previously limited to their own training data, these models can now incorporate additional information and capabilities through dedicated APIs and developments like the Model Context Protocol (MCP), which create reliable connections to external sources.
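As a concrete illustration, here is a minimal sketch of how a loyalty platform might expose one capability to agents over MCP. It assumes the MCP Python SDK's FastMCP helper; the server name and the get_member_offers tool are hypothetical, not any vendor's real interface.
```python
# A minimal, hypothetical MCP server exposing one loyalty tool to AI agents.
# Assumes the MCP Python SDK (pip install mcp); all names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("loyalty-offers")

@mcp.tool()
def get_member_offers(member_id: str) -> list[dict]:
    """Return the personalised offers currently active for a loyalty member."""
    # A real server would query the loyalty platform here; this returns a stub.
    return [{
        "offer_id": "OFF-123",
        "description": "20% off fresh produce this week",
        "expires_at": "2025-06-30T23:59:59Z",
    }]

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an agent host can discover and call the tool
```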
AI agents are here - what does it mean for loyalty?
The marriage of AI and retail loyalty makes a lot of sense. Eagle Eye, for example, already has a powerful AI-driven personalisation engine and other predictive systems, which thrive on ingesting and processing data intelligently.
In addition to answering questions, AI agent helpers can make decisions, compare prices and steer people toward particular retailers. This stands to change how retailers reach customers.
Four ways AI agents will reshape loyalty:
1. The rise of the personal loyalty concierge: Most loyalty programs today require customer effort - browsing offers, tracking points, redeeming rewards. AI agents reverse this dynamic. Acting as personal concierges, they understand your preferences, track rewards across programs, and proactively suggest ways to maximise benefits while shopping.
2. Mass-market offers become financially unsustainable: Blanket promotions available to all customers will become even less viable in an agent-driven marketplace. AI agents are built to optimise for value, identifying and exploiting the most generous public offers. This cherry-picking erodes margins and will render mass offers increasingly unprofitable.
3. A new era of offer optimisation: AI agents necessitate a step-change in how offers are structured and delivered. Offers must be real-time, personalised, and API-accessible for easy evaluation by AI assistants. Loyalty programs will need to evolve to support dynamic offer issuance, individual targeting, and instant redemption (see the sketch after this list).
4. Trust and transparency become the currency: As AI agents mediate interactions, retailers won't just sell to customers; they'll negotiate with algorithms. Simplicity and genuine value will be rewarded, while complexity and trickery will be filtered out. Clear, fair loyalty programs will build the trust needed for this new landscape.
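To picture the third shift above, an offer becomes a structured payload that an agent can fetch and score without parsing marketing copy. The schema below is a hypothetical Python sketch, not any particular loyalty platform's format.
```python
# Hypothetical agent-readable offer: every field an agent needs to evaluate it
# (value, eligibility, expiry, redemption endpoint) is explicit and machine-readable.
from datetime import datetime, timezone

offer = {
    "offer_id": "OFF-456",
    "member_id": "M-789",            # individually targeted, not a mass-market blast
    "reward_type": "percent_off",
    "reward_value": 15,
    "eligible_skus": ["SKU-1042", "SKU-2318"],
    "expires_at": "2025-06-30T23:59:59Z",
    "redeem_url": "https://api.example-loyalty.test/v1/redeem/OFF-456",
}

def offer_is_live(o: dict) -> bool:
    """Agent-side check: only consider offers that can still be redeemed."""
    expiry = datetime.fromisoformat(o["expires_at"].replace("Z", "+00:00"))
    return expiry > datetime.now(timezone.utc)

print(offer_is_live(offer))  # True until the expiry timestamp passes
```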
AI Agents: Opportunities and challenges
Dr Jason Pallant, Senior Lecturer of Marketing at RMIT, says he is intrigued by AI agents because of how they may empower consumers to tailor their shopping experience, and how brands might leverage this further through loyalty.
"We know consumers now want, and even expect, tech and AI to help them navigate purchase decisions, particularly complex ones," he says. "AI agents could be a really effective way to do that, helping consumers leverage AI insights without needing prompting skills. The opportunity for brands that get it right could be highly personalised and engaging shopping assistants delivered at scale. That's the promise and potential at least."
Pallant also notes, however, that brands will need to rethink how they interact with customers versus agents to ensure both relationships are nurtured correctly.
"Consumers still desire human interaction, particularly for complex purchases, and this actually increases the more technology advances," he says. "Just look at complaints around chatbots that lock consumers in and won't let them talk to humans. More 'intelligent' agents might simulate that human interaction better but there's still a level of technology in the middle.
"That interaction can also create a 'black box' effect, particularly with agents, where it's not always clear to consumers where an answer has come from or why. Brands need to make sure they stay transparent throughout the process to maintain consumer trust."
As with all technologies, there are upsides and downsides, and brands must navigate both to maximise the good without forgetting their customers.
"While AI agents might increase engagement and personalisation at scale, they risk losing the human element and competitive advantage of the brand if not used strategically," Pallant says.
Digital connections and preparing for change
In the retail loyalty space, these connections will require a backend that can process loyalty transactions in real-time, deliver personalised offers at moments of decision, communicate seamlessly with AI systems through standardised protocols, and adapt rapidly as agent capabilities expand.
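As a rough sketch of what "API-accessible at moments of decision" could look like, the endpoint below uses FastAPI purely as a familiar example; the route and payload are assumptions rather than a description of an existing product.
```python
# Hypothetical real-time offers endpoint an AI agent could poll at decision time.
# FastAPI is used only as a familiar example framework (pip install fastapi uvicorn).
from fastapi import FastAPI

app = FastAPI()

@app.get("/v1/members/{member_id}/offers")
def member_offers(member_id: str) -> dict:
    # A production backend would compute personalised offers in real time here.
    return {
        "member_id": member_id,
        "offers": [
            {"offer_id": "OFF-123", "reward_type": "points_multiplier", "reward_value": 2},
        ],
    }
# Run with: uvicorn offers_api:app  (assuming this file is saved as offers_api.py)
```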
Remember, getting into a good position on AI isn't just about money. In the case of agentic AI, retailers will succeed if they understand how agents evaluate and present options to consumers, remember that behind the agents are humans who demand both efficiency and occasional acknowledgement, and design their loyalty experiences accordingly.


Related Articles

'Pretty damn average': Google's AI Overviews underwhelm

RNZ News | 2 hours ago

Photo: JAAP ARRIENS
Most searches online are done using Google. Traditionally, they've returned long lists of links to websites carrying relevant information. Depending on the topic, there can be thousands of entries to pick from or scroll through.
Last year Google started incorporating its Gemini AI tech into its searches. Google's Overviews now inserts Google's own summary of what it's scraped from the internet ahead of the usual list of links to sources in many searches. Some sources say Google's now working towards replacing the lists of links with its own AI-driven search summaries.
RNZ's Kathryn Ryan's not a fan. "Pretty damn average I have to say, for the most part," she said on Nine to Noon last Monday during a chat about AI upending the business of digital marketing.
But Kathryn Ryan is not the only one underwhelmed by Google's Overviews. Recently, online tech writers discovered you can trick it into treating made-up sayings as meaningful idioms in common usage. The Sydney Morning Herald's puzzle compiler David Astle - under the headline 'Idiom or Idiot?' - reckoned Google's AI wasn't about to take his job making cryptic crosswords anytime soon.
"There is a strange bit of human psychology which says that we expect a very high bar from machines in a way that we don't from humans," the BBC's head of technology forecasting Laura Ellis told Mediawatch last month. "But if you've got a machine making a mistake, where does that accountability fall? We've just not tested this out yet."
UK Sky News deputy political editor Sam Coates tried to make ChatGPT accountable after it made up an entire episode of his own politics podcast when he used it to help archive transcripts recently.
"AI had told a lie that it had got the transcript. And rather than back down it invented an entire fake episode without flagging that it's fake."
When challenged on this, the technology insisted Coates had created the episode himself. When ChatGPT can't find an answer or the right data to draw on, it can 'hallucinate' or just make up a misleading response.
"ChatGPT is gaslighting me. No such thing exists. It's all a complete fake," Coates spluttered.
After turning ChatGPT off and on again in 'conversation mode', it did eventually own up.
"It said: 'Look, you're absolutely right to challenge that. I can't remember the exact time that you uploaded.' And then: 'What I can confirm is that I did it and you're holding me to account,'" Coates told viewers.
He went on to challenge ChatGPT about its hallucinations getting worse.
"The technology is always improving, and newer versions tend to do a better job at staying accurate," ChatGPT replied.
But Coates - armed with data that suggested the opposite - asked ChatGPT for specific stats. The response: "According to recent internal tests from OpenAI, the newer models have shown higher hallucination rates. For instance, the model known as o3 had about a 33 percent hallucination rate, while the o4-mini model had around 48 percent."
"I get where you're coming from, and I'm sorry for the mixed messages. The performance of these models can vary."
When Coates aired his experience as a warning for journalists, some reacted with alarm.
"The hallucination rate of advanced models... is increasing. As journos, we really should avoid it," said Sunday Times writer and former BBC diplomatic editor Mark Urban.
But some tech experts accused Coates of misunderstanding and misusing the technology.
"The issues Sam runs into here will be familiar to experienced users, but it illustrates how weird and alien Large Language Model (LLM) behaviour can seem for the wider public," said Cambridge University AI ethicist Henry Shevlin. "We need to communicate that these are generative simulators rather than conventional programmes," he added.
Others were less accommodating on social media.
"All I am seeing here is somebody working in the media who believes they understand how technology works - but [he] doesn't - and highlighting the dangers of someone insufficiently trained in technology trying to use it."
"It's like Joey from Friends using the thesaurus function on Word."
Mark Honeychurch is a programmer and long-serving stalwart of the NZ Skeptics, a non-profit body promoting critical thinking and calling out pseudoscience. The Skeptics' website says they confront practices that exploit a lack of specialist knowledge among people. That's what many people use Google for - answers to things they don't know or don't understand.
Honeychurch described putting Overviews to the test in a recent edition of the Skeptics' podcast Yeah, Nah.
"The AI looked like it was bending over backwards to please people. It's trying to give an answer that it knows that the customer wants," Honeychurch told Mediawatch.
Honeychurch asked Google for the meaning of: 'Better a skeptic than two geese.'
"It's trying to do pattern-matching and come out with something plausible. It does this so much that when it sees something that looks like an idiom that it's never heard before, it sees a bunch of idioms that have been explained and it just follows that pattern."
"It told me a skeptic is handy to have around because they're always questioning - but two geese could be a handful and it's quite hard to deal with two geese."
"With some of them, it did give me a caveat that this doesn't appear to be a popular saying. Then it would launch straight into explaining it. Even if it doesn't make sense, it still gives it its best go because that's what it's meant to do."
In time, would AI and Google detect the recent articles pointing out this flaw - and learn from them?
"There's a whole bunch of base training where (AI) just gets fed data from the internet as base material. But on top of that, there's human feedback. They run it through a battery of tests and humans can basically mark the quality of answers. So you end up refining the model and making it better.
"By the time I tested this, it was warning me that a few of my fake idioms don't appear to be popular phrases. But then it would still launch into trying to explain it to me anyway, even though it wasn't real."
Things got more interesting - and alarming - when Honeychurch tested Google Overviews with real questions about religion, alternative medicine and skepticism.
"I asked why you shouldn't be a skeptic. I got a whole bunch of reasons that sounded plausible about losing all your friends and being the boring person at the party that's always ruining stories."
"When I asked it why you should be a skeptic, all I got was a message saying it cannot answer my question."
He also asked why one should be religious - and why not. And what reasons we should trust alternative medicines - and why we shouldn't.
"The skeptical, the rational, the scientific answer was the answer that Google's AI just refused to give."
"For the flip side of why I should be religious, I got a whole bunch of answers about community and a feeling of warmth and connecting to my spiritual dimension. I also got a whole bunch about how sometimes alternative medicine may have turned out to be true and so you can't just dismiss it."
"But we know why we shouldn't trust alternative medicine. It's alternative, so it's not been proven to work. There's a very easy answer."
But not one Overview was willing or able to give, it seems. Google does answer the neutral question 'Should I trust alternative medicine?' by saying there is "no simple answer" and "it's crucial to approach alternative medicine with caution and prioritise evidence-based conventional treatments."
So is Google trying not to upset people with answers that might concern them?
"I don't want to guess too much about that. It's not just Google but also OpenAI and other companies doing human feedback to try and make sure that it doesn't give horrific answers or say things that are objectionable. But it's always conflicting with the fact that this AI is just trained to give you that plausible answer. It's trying to match the pattern that you've given in the question."
Journalists use Google, just like anyone who's in a hurry and needs information quickly. Do journalists need to ensure they don't rely on the Overviews summary right at the top of the search page?
"Absolutely. This is AI use 101. If you're asking something of a technical question, you really need to be well enough versed in what you're asking that you can judge whether the answer is good or not."
Sign up for Ngā Pitopito Kōrero, a daily newsletter curated by our editors and delivered straight to your inbox every weekday.

Gigamon Launches AI Tools For Deep Observability

Scoop | 2 days ago

Multi-phase AI strategy delivers intelligent visibility and automation, sets a new standard for hybrid cloud security and management.
Gigamon, a leader in deep observability, today announced the first phase of its multi-year AI strategy, introducing foundational innovations designed to help organizations better secure and manage hybrid cloud infrastructure. The initial offerings include Gigamon AI Traffic Intelligence, which delivers real-time visibility into GenAI and LLM traffic across 17 leading engines to enable data-driven enforcement and policy governance, and GigaVUE Fabric Manager (FM) Copilot, a GenAI-powered assistant that simplifies onboarding, configuration, management, and troubleshooting of Gigamon deployments. By embedding AI into the Deep Observability Pipeline, Gigamon expands its value to customers by eliminating blind spots, strengthening governance, and enhancing operational efficiency across modern hybrid environments.
As GenAI workloads multiply, organizations face surging data volumes, expanding attack surfaces, and growing security risks. One of the most fundamental challenges is simply knowing which AI services are in use. In the 2025 Hybrid Cloud Security Survey of over 1,000 global Security and IT leaders, one in three reported that network traffic has more than doubled due to AI workloads, while 55 percent said their tools are failing to detect modern threats. In response, 88 percent now consider deep observability - combining network-derived telemetry with log data - essential for securing and scaling AI deployments across hybrid cloud infrastructure.
"As GenAI use matures in organizations, we're focused on both AI for security and security for AI," said Michael Dickman, chief product officer at Gigamon. "It has never been more true that you cannot secure what you cannot see, making complete visibility into AI traffic and workloads, including shadow AI usage, critical for today's Security and IT teams. That is why we're embedding AI directly into the Deep Observability Pipeline to help customers strengthen cybersecurity with practical, easy-to-implement capabilities that keep pace with the speed and complexity of AI adoption."
Complete Visibility into AI and GenAI Network Traffic: A New Standard for Cybersecurity
The Gigamon Deep Observability Pipeline efficiently delivers actionable network-derived telemetry, including packets, flows, and application metadata, directly to cloud, security, and observability tools, bringing the complete picture into focus. With the new AI Traffic Intelligence capability, organizations gain real-time visibility into GenAI and LLM activity from 17 leading engines, including ChatGPT, Gemini, and DeepSeek. The capability also allows user-defined targeting of additional LLMs beyond the pre-defined set, extending flexibility and reach. For ease of integration, this intelligence is agentless and applies even to encrypted data in motion, surfacing shadow AI usage and enabling more effective, policy-driven governance.
AI Traffic Intelligence enables organizations to:
- Gain real-time insights into GenAI and LLM traffic across public, private, virtual, and container environments
- Identify shadow AI, or unsanctioned AI usage, to reduce risk and improve oversight
- Track usage patterns to inform governance and manage AI-related costs
- Empower Security and IT teams with trusted, network-derived telemetry to drive informed decisions
"Gigamon has established itself as a trusted source of granular network data, providing comprehensive visibility across highly complex, distributed environments," said Bob Laliberte, principal analyst at theCUBE Research. "As AI increases the complexity and volume of network traffic, clear visibility into GenAI activity has become critical. Gigamon is well-positioned to meet these emerging challenges by delivering the requisite insights to monitor AI usage, regain control, and take decisive action."
"AI is accelerating digital transformation, but it's also introducing security risks and data challenges across hybrid cloud infrastructure," said Chris Konrad, vice president, Global Cyber at World Wide Technology (WWT). "By integrating AI into its Deep Observability Pipeline, Gigamon delivers the complete visibility and insights our customers need to detect threats, govern GenAI use, and strengthen cybersecurity best practices. At WWT, we're proud to partner with Gigamon to shape the future of hybrid cloud security by delivering the deep observability customers require."
GigaVUE-FM Copilot Simplifies Deployment and Day-to-Day Operations
Gigamon also introduced GigaVUE-FM Copilot, a GenAI-powered assistant designed to help organizations onboard, configure, manage, and troubleshoot their Gigamon environments with greater speed and accuracy. Embedded directly within GigaVUE-FM, the Copilot enables Security and IT teams to reduce time to insight, simplify complex workflows, and improve productivity. Through a natural language interface, it securely connects users to an internal knowledge base and LLM built from technical documentation, deployment guides, and release notes, delivering fast, context-aware answers. This capability empowers Security, IT, and DevOps teams to resolve issues independently, whether or not they are power users, and reduces reliance on Tier 3 support resources.
With GigaVUE-FM Copilot, organizations can:
- Simplify configuration and management using GenAI-assisted support
- Accelerate onboarding and feature discovery to improve readiness
- Instantly search documentation to troubleshoot and apply best practices
- Reduce Tier 3 support escalations by enabling broader self-service
- Improve operational efficiency across teams and environments
Availability and Roadmap
The AI Traffic Intelligence capability is available now for all GigaVUE Cloud Suite customers. GigaVUE-FM Copilot is in early access for select customers, with general availability in 2H25. Additional AI-powered innovations are underway as part of the multi-phase strategy and will be spotlighted at the Gigamon Visualyze Bootcamp, the company's virtual customer conference taking place Sept. 9-11. For more information

Post-pandemic hiring finds footing as AI transforms tasks, not jobs

Techday NZ | 2 days ago

The global labour market is emerging from several years of upheaval, but economists at LinkedIn and OpenAI say a new force - generative AI - is beginning to reshape the nature of work in ways that are uneven across sectors.
Speaking at the OpenAI Forum, Dr Karin Kimbrough, Chief Economist at LinkedIn, said, "Right now, I would say the labour market's actually reflecting a lot of macro cyclical effects," rather than widespread AI-driven displacement. She noted the hiring boom of 2021-2022 has since cooled, with companies largely holding onto talent and new hiring remaining cautious. "The labour market's looking for normalcy," Kimbrough said. "There's more competition than there was before, but not so much that people can't find a job."
However, this overall stabilisation masks significant disparities. "If you are in what is considered like the knowledge worker space, it might be a lot more competitive for you right now to find the role that you want," she explained. By contrast, "The world is your oyster if you work in [retail, healthcare, or construction] industries."
Kimbrough attributes these trends to structural factors and post-pandemic realignments, not solely to AI. "We are seeing just a huge demand and increase in roles [in healthcare]. So we're seeing the number of roles open up - it's not just that people are looking for a role and are able to find one."
While job loss due to AI remains a common fear, Kimbrough emphasised that, at this stage, "It's not elimination so much that we're seeing. It's more just people are adjusting, desperately trying to upskill so they can stay relevant." What is being displaced, she explained, are specific tasks within roles. "It's how they're spending their time in that role that is changing." Ronnie Chatterji, Chief Economist at OpenAI, agreed, saying that workers are "rotating from certain tasks to other tasks" rather than seeing their roles eliminated entirely.
LinkedIn's data supports this, with increased emphasis on "AI literacy" - the ability to use AI tools effectively. "One of the fastest growing in-demand skills on the platform by employers is conflict resolution," Kimbrough added. "They're rising in tandem with the demand for AI literacy." Surprisingly, it's workers in more disrupted sectors - such as communications and marketing - who are most aggressively updating their profiles to signal AI capabilities. "They need to signal it because the last thing they want to do is be fighting over a job either because an AI could already do half of it or because someone else can do the other half... with AI."
For early-career professionals, the post-pandemic hiring slowdown has been particularly acute. "It's been materially harder for new grads in the last year or so to find a role," Kimbrough said. "Employers are looking at their talent roles being full. Again, no one has quit." However, historical data offers some reassurance. "What we found... was that people catch up," she said. "In the first couple of years of their career, maybe they don't progress as fast, but eventually they catch up." She advised graduates to focus on agility, a willingness to learn, and human skills like communication and collaboration. "Even more important [than AI literacy] is communication skills, collaboration skills... leadership."
Kimbrough also spoke about international labour markets, pointing to India as particularly dynamic: "A very young, well-educated, and very dynamic, large domestic economy." Yet she highlighted the challenge of making AI tools locally relevant, citing "accessibility of the digital divide and the relevance" as critical barriers to adoption.
Adoption also varies widely by sector. Education, for example, lags significantly behind industries like finance or healthcare in hiring AI-literate talent. "Even so, we see them growing at really rapid rates. It's just not at the same rate."
Kimbrough revealed a shift in hiring strategies, noting that employers have reduced backfilling of vacated roles. Instead, they are prioritising new roles, particularly those related to AI. "If you are working in the AI space... you are just in a position of choice," she said, listing roles such as AI engineer, consultant, and researcher as the fastest-growing on LinkedIn. However, she noted that such opportunity is not uniform: "If you are in other roles... it's been a little bit more challenging."
The evolving job landscape also means career paths are becoming less linear. "It's far more organic, it is far more skills-based," Kimbrough said. "AI is so powerful, it's going to allow you to pivot into many more options for roles." Even at the executive level, LinkedIn data shows leaders now emerge from more diverse backgrounds. "So it's okay to have a broader base," she said. "If the thing you have your heart set on isn't working, go do something else and just get started."
The discussion painted a nuanced picture: a stabilising labour market shaped more by macroeconomic tides than AI disruption, at least for now. But as AI adoption deepens, agility, skills signalling, and adaptability may well define success across all sectors.
