
The future of marketing: a conversation with Dario Debarbieri and Raj Iyer
AI has radically changed the game for marketers. How is the marketing environment changing, and what does the future hold?
Dario Debarbieri: For marketers, the first lesson of the global pivot to artificial intelligence is that data is more vital than ever. AI is only as good as the data that fuels it, which makes data quality and data management critical, in marketing as elsewhere. And to manage data effectively requires platforms that are agile, flexible and resilient—platforms that can adapt without breaking, that are solid yet versatile. At a business level, these capabilities will mean the difference between success and failure.
To thrive in a world of pervasive AI, your data and your platform must be ready for a future of living marketing systems. A brand in the future will evolve like a living organism—because in an AI-powered world, nothing is static. The perception of your brand will depend on the data that fuels those AI tools and frameworks—dynamically, even metabolically.
That means your marketing systems must ensure consistency in the data captured by AI tools and systems in order to influence the output they generate. You must be massively consistent in the way you manage your data, for your customers as well as your brand.
Hyper-personalisation—an AI superpower for the next-gen marketer—also depends directly on the data that fuels it. The customer experience lifecycle will be transformed by an infusion of data-driven context and insights—allowing AI-generated, dynamically crafted and curated messaging, stories and transactions. In every instance, data will be the lifeblood of these systems.
What does all this mean for human marketers?
DD: None of this makes humans obsolete. The marketer of tomorrow will pivot to providing the human insights that make campaigns more effective, the ethics and brand culture that inform them and the emotional intelligence that brings them fully to life. This new marketer will be a complete storyteller—while being, in equal measure, a marketing system designer.
What will that future marketer need?
DD: New tools—better, faster, more focused and flexible—and a clear knowledge of how they work and how to use them. Without that, marketers simply won't survive. This creative industrial revolution will feature a fusion of technologies that will shape the profile of every marketer: a convergence of GenAI, data, quantum-powered market simulations and blockchain-enabled brand trust—enabling secure, transparent transactions à la Web3.
Turning to the present, where are we with marketing technology today? What are our current capabilities?
Raj Iyer: The web browser made the internet accessible to everyone back in 1993-94, but from a business perspective, email was the first killer app. Marketers saw instantly that email gave them a powerful tool to reach a very large audience. Unica capitalised on that trend and redefined marketing technology in the process, making catalogue marketing electronic and creating the martech space as we know it today. Thus was born the idea of segmentation at scale, using email to run enormous campaigns cost-effectively.
Today, AI too is a killer tool for marketing—but at an exponentially higher level.
Until now, marketing has been all about the attention economy, relentlessly seeking better ways to rack up eyeballs and clicks. In fact, the whole idea of a marketing funnel came from the math of converting that attention, stage by stage, into sales.
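That funnel math can be made concrete: each stage converts some fraction of the previous one, and the end-to-end rate is the product of the stage rates. A minimal sketch, with purely hypothetical numbers:

```python
# Illustrative funnel arithmetic; the stage names and counts are hypothetical,
# not figures from the conversation.
stages = [
    ("impressions", 100_000),
    ("clicks", 4_000),
    ("signups", 400),
    ("purchases", 60),
]

def conversion_rates(stages):
    """Conversion rate between each pair of consecutive funnel stages."""
    return {
        f"{prev}->{cur}": n / prev_n
        for (prev, prev_n), (cur, n) in zip(stages, stages[1:])
    }

rates = conversion_rates(stages)
overall = stages[-1][1] / stages[0][1]  # end-to-end conversion
```

With these numbers, 4% of impressions become clicks, 10% of clicks become signups, 15% of signups become purchases, and the overall rate is the product of those fractions.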
But as channels, digital technologies and social media proliferate, people are inundated with messages. The result is attention fatigue—and plummeting engagement, which worsens as AI-based content adds to the clamour for users' attention.
So how can we respond to this dilemma?
RI: As marketers, the goal is to build trust with customers and earn loyalty by delivering value—in short, a trust economy. The attention economy isn't going away, but if you can build trust and loyalty with customers, attention is a given.
So, how can we get there? In our view, the bridge to trust is the intelligence economy—an infusion of intelligence and intention that cuts through noise and distraction with data-driven context and deep insights into customers' wants and needs. Whereas digital experiences are often generic and irrelevant, brands can use the power of intent to deliver memorable experiences that build trust and strengthen relationships.
To achieve that, we need platforms and tools that can capture that intention, turn relevant data into useful insights and make marketing workflows manageable through intelligent orchestration. AI, data, quantum, blockchain – all these technologies belong to this intelligence economy. This is what we mean by Digital+.
What capabilities is the intelligence economy built on?
RI: We've touched on the what of the intelligence economy: the deep customer knowledge and data-driven insights that establish intent. Next is the how: the dance of engagement, the contextual customer journeys—guided by HCL Unica+ and the technology stack that powers it. Then comes the why: the memorable feelings and experiences that make customers choose you over competitors.
Now, these elements—insight, engagement, and experience—rely, in turn, on other key capabilities, one of which pertains to technology architecture. Tech architects today face a false choice between monolithic and microservices approaches. What the intelligence economy demands, however, is a composable architecture based on packaged business capabilities (PBCs)—modular software components that perform specific business functions and can be seamlessly integrated and removed.
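As a hedged illustration of the PBC idea, here is a minimal registry in which modular capabilities can be plugged in and removed independently. The names (`CapabilityRegistry`, `loyalty_points`) are hypothetical, not an HCL Unica+ API:

```python
# Sketch of a composable architecture: packaged business capabilities (PBCs)
# registered behind a common interface, so each can be integrated or removed
# without touching the rest of the system. Names are illustrative assumptions.
from typing import Callable, Dict

class CapabilityRegistry:
    """Holds modular business capabilities keyed by name."""

    def __init__(self) -> None:
        self._pbcs: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        self._pbcs[name] = handler

    def remove(self, name: str) -> None:
        self._pbcs.pop(name, None)

    def run(self, name: str, payload: dict) -> dict:
        return self._pbcs[name](payload)

registry = CapabilityRegistry()
# Plug in one capability: award loyalty points based on order total.
registry.register("loyalty_points", lambda order: {**order, "points": int(order["total"] // 10)})
result = registry.run("loyalty_points", {"total": 125.0})  # {'total': 125.0, 'points': 12}
# Swap the capability out again without affecting anything else.
registry.remove("loyalty_points")
```

The design point is the seam: each PBC does one business function behind a uniform call signature, which is what makes the composition, and the removal, seamless.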
Meanwhile, AI feeds into all three of these elements—feeding into insight, for example, with auto-segmentation in the customer data platform; engagement, by dynamically matching channels to customers; and experience, by crafting hyper-personalised journeys based on deep customer knowledge.
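To make the auto-segmentation idea concrete, here is a deliberately simplified, rule-based stand-in for what a customer data platform's AI would do with learned clusters. The recency/frequency/monetary thresholds and segment names are illustrative assumptions:

```python
# Hedged sketch: segmenting customers on recency/frequency/monetary (RFM) signals.
# A real CDP would learn segments from data; the fixed thresholds here are
# purely illustrative.
def segment(recency_days: int, frequency: int, monetary: float) -> str:
    if frequency >= 10 and monetary >= 1000:
        return "champion"      # frequent, high-value buyers
    if recency_days > 180:
        return "at_risk"       # haven't purchased in six months
    return "growing"           # everyone else, still developing

customers = {
    "c1": (12, 14, 2500.0),
    "c2": (400, 2, 90.0),
    "c3": (30, 3, 150.0),
}
segments = {cid: segment(*rfm) for cid, rfm in customers.items()}
```

Each segment can then be matched to channels and messaging, which is where the engagement and experience layers pick up.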
DD: The marketing platform of the future has other core attributes—data and AI, analytics, hyper-personalisation, security and scalability are all 'must-haves' when implementing the martech of the future.
The key takeaway for today's marketers, though, is simple: the platforms, tools and technologies of the intelligence economy are available now—and we're here to make that easy. The future of marketing may lie ahead, but the Digital+ future is now.
Raj Iyer
Raj Iyer is the Executive Vice President and Portfolio General Manager at HCLSoftware, where he leads several product lines and played a key role in scaling the business from a few hundred million in revenue to $1.5 billion. With prior leadership roles at DXC and IBM and experience at three Silicon Valley start-ups, Raj brings deep expertise in enterprise software, AI/ML and global business strategy.
Dario Debarbieri
Dario, who studied Law and Economics at the Universidad de Buenos Aires, has more than 20 years of global leadership experience across technology and financial services, including roles as CEO, CMO and VP at firms such as IBM and Enterprise Outsourcing. Since August 2022, he has led Marketing at HCLSoftware, bringing deep expertise in AI, cloud, data, CX and emerging technologies, with a strong record of driving performance and international brand growth.

Related Articles


Forbes, 23 minutes ago
Engineering Excellence In The Age Of AI
Abhi Shimpi is the Vice President of Software Engineering in a Financial Services organization.

As engineering leaders, many of us are racing to integrate GenAI into our development life cycles. The tools are powerful and the potential is massive, but amid all the buzz about velocity and automation, I believe we're overlooking a critical element: engineering excellence. If we don't start reshaping our engineering culture for AI, AI will reshape it for us, and it might not be in our favor. I don't mean just a technical shift, but also a cultural one. If we lose sight of the foundational practices that make engineering sustainable, secure and scalable, then we're moving forward recklessly.

What Engineering Excellence Used To Mean

Before GenAI entered the scene, engineering excellence had a clear definition. We talked about code quality, test automation, secure development practices, peer reviews, resiliency, architecture rigor and continuous delivery. We had internal maturity models to measure and reinforce those principles. Those models gave teams an understanding of what 'good' looked like and how to build clean, maintainable and trustworthy software at scale. It was about process and discipline. We created feedback loops, fostered coaching and mentorship, and made space for design thinking and technical judgment. Now GenAI is rewriting the rules, and we need to make sure we don't allow it to erase those fundamentals along the way.

Speed Without Discipline

AI has transformed the developer experience. Tools like GitHub Copilot, Google Gemini and Microsoft Copilot can generate code for entire functions or workflows in seconds. Non-technical users can build apps using natural language prompts. In theory, this is empowerment. In practice, it's often chaos. I've seen firsthand how easy it is to bypass core engineering principles in the rush to adopt GenAI and ship faster. A developer asks Copilot for a script, drops it into a PowerApps app and deploys.
No design review, no security scan, no consideration of how security is handled or data is managed. It works, but it doesn't scale. It creates anti-patterns that violate the architectural standards we've spent years putting in place. And it's not just developers: citizen developers (those with minimal technical training) are building and deploying internal applications without understanding the implications. What kind of data are they handling? What access are they exposing? What guardrails are missing? And it's happening across industries. The real risk isn't that GenAI makes mistakes; it's that we stop asking questions.

FOMO Is Not A Strategy

Let's be honest: a lot of organizations are embracing GenAI out of fear of missing out. Once the floodgates opened, everyone rushed in. The intent was good, but the pace? Unsustainable. There's nothing wrong with moving fast if you're moving with intention, but if you don't know what you're measuring, you're just reacting. And when you prioritize output over outcome, you miss the real opportunity. This is why I keep emphasizing outcome over output. GenAI can help you generate more code. That doesn't mean it's better code. We need to slow down just enough to ask: Does this solution create long-term value? Is it secure? Is it explainable? Is it maintainable?

Rebuilding Development Culture For AI

Embedding AI into our workflows is not enough. We have to embed engineering judgment alongside it. That means reinvesting in the things that made us strong in the first place: coaching, mentorship, engineering excellence and craftsmanship. Peer reviews still matter, clean architecture still matters, release and maintenance discipline still matters, and code design is not optional. In one example from my experience, developers unfamiliar with a programming language were able to deliver time-sensitive solutions faster using GenAI tools.
We layered in strong governance: design reviews, peer oversight, security assessment and architectural alignment. Without those guardrails, the same project could have introduced serious risks. AI doesn't eliminate the need for engineering culture; it amplifies the consequences of not having one.

Redefining Maturity For An AI-First World

We used to measure engineering maturity using KPIs like velocity, defect rates, time to market and code coverage. Those still matter, but they're no longer enough. Now we need to measure how efficiently and responsibly we're using AI. That includes measuring aspects such as:

• How much human oversight is required?
• Are AI outputs explainable?
• Are they aligned with our architectural patterns?
• Do we trust the AI engine's recommendations? And if not, why?

If we allow AI to review our code, we must also define a trust framework. What is the trust score? What patterns is the AI referencing? Do those patterns match what we've codified as best practice? Which LLM should be used? The maturity model must evolve and be assessed continuously. Otherwise, we're shooting in the dark.

Psychological Safety And Performance In A Machine-Driven World

There's another piece to this puzzle: psychological safety. When we're using AI, safety is about trust in systems. We need to build environments where developers feel safe questioning AI outputs, rejecting them when necessary and adding human judgment. Blind faith in GenAI is just as dangerous as blind rejection. At the same time, we need to hold teams accountable for performance and outcomes. The tools may change, but excellence still requires clarity, consistency and commitment.

What Good Looks Like

So, what does success look like?
From our experience, it includes:

• Less rework
• Fewer defects
• Lower tech debt
• Faster and more efficient onboarding, even for junior engineers
• Enhanced developer productivity and satisfaction

In the example I shared earlier, we saw measurable gains using GenAI: faster delivery, broader developer capacity and successful outcomes even when teams were new to the tech stack. But those benefits only came after we added extra oversight to ensure architectural compliance and secure development practices. Over time, that governance load decreased because the cultural foundation was strong. That's the path forward: short-term governance for long-term gain.

Shape Or Be Shaped

The real test of GenAI is cultural. Tools will continue to evolve. But if we fail to adapt our engineering practices and mindsets, those tools will define our future for us. The future is about moving with purpose. If we can redefine our maturity models, enforce meaningful guardrails and keep engineering excellence at the center, AI will be a powerful ally. If we don't, it will become a force we no longer control. And by then, it might be too late.
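The trust-framework questions raised above (oversight, explainability, pattern alignment) could be prototyped as a simple weighted scorecard. This is a hedged sketch: the dimensions, weights and threshold are illustrative assumptions, not an established standard or any vendor's metric:

```python
# Illustrative trust score for AI-generated code review output.
# Dimensions, weights and the acceptance threshold are assumptions for the sketch.
WEIGHTS = {
    "human_oversight": 0.3,    # 1.0 = output needed no human correction
    "explainability": 0.3,     # can the suggestion be justified to a reviewer?
    "pattern_alignment": 0.4,  # does it match codified architectural patterns?
}

def trust_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

review = {"human_oversight": 0.5, "explainability": 0.8, "pattern_alignment": 1.0}
score = trust_score(review)
accepted = score >= 0.75  # example gate before AI review output is trusted
```

The value of even a crude scorecard is that it forces the team to name the dimensions being traded off, which is exactly what a maturity model assessed continuously needs.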
Yahoo, an hour ago
AI proliferation in healthcare shines light on HIPAA shortcomings
The use of artificial intelligence (AI) and generative AI (GenAI) in the healthcare space is skyrocketing. GlobalData analysis reveals that the AI market in healthcare is projected to reach a valuation of around $19bn by 2027.

While the White House recently unveiled plans to 'remove barriers to American leadership' with an AI action plan, for now, entrants into the healthcare space providing AI tools to healthcare providers (HCPs) must comply with the US's Health Insurance Portability and Accountability Act (HIPAA), a regulation from 1996 that outlines rules around protecting patient healthcare data.

Aaron T. Maguregui, partner at law firm Foley & Lardner, told Medical Device Network: 'HIPAA was intended to scale with time and with technology. What I don't think HIPAA ever contemplated was the fact that AI would be able to essentially take in data from multiple sources, match it together, and create the potential for the reidentification of data that was never intended to be used for reidentification.'

Technology has far outpaced regulation, and while Maguregui does not view HIPAA as being incompatible 'in and of itself', he states that it needs updating to account for the growing technology and compute power that exists, and how data is now being used to train AI.

'An AI vendor that provides a service to an HCP that is regulated by HIPAA is a subcontractor, and their role in healthcare is very regulated, and this becomes a somewhat limiting force for AI vendors trying to innovate and move the needle with their product, because their permitted usage and disclosures of the data as regulated by HIPAA is very restrictive,' Maguregui explained. 'It's restricted to the services that the vendor has agreed to provide, so any additional innovation, including, for example, additional training provisions the vendor may need, usually requires the HCP's, and sometimes patients', consent.'
Navigating HIPAA for HCPs and vendors

Maguregui advises clients to start with a privacy impact assessment and bake in data governance from day one. 'On the provider side, it's important to know the types of data you have, who you're sharing data with, and what your responsibilities with respect to that data are,' Maguregui said. 'With virtual health exploding, and clinical intake going virtual, there are chatbots and workflows that are collecting data and information almost constantly, and it is important to understand whether information is regulated by HIPAA or by state law.'

Awareness of these factors is especially important for HCPs that want to engage an AI vendor: they must be able to tell the vendor what it needs to comply with, because it will be the same regulation that the HCP itself has to comply with.

Maguregui continued: 'In some cases, from an AI vendor's perspective, this may seem a bit unfair, because they have to rely on another party's assertion that they are complying with all of the laws they are required to comply with. The vendor then has to figure out whether they can comply with the relevant regulation and provide their service in compliance with the law and legally use the data at hand for purposes that are going to make their product better.'

The direction of HIPAA regulation

According to Maguregui, if the US cannot get on board with a single piece of federal privacy legislation, then HIPAA should be expanded to cover the other entities that interact with health information. 'We have a desegregated regime in the US where the Federal Trade Commission (FTC) tries to regulate when HIPAA does not regulate, and that leads to more confusion and results in uncertainty for vendors and HCPs alike in understanding what their roles and obligations are,' Maguregui said.
'My wish for HIPAA would be to expand and update it, to understand where technology has gone, where compute has gone, and to improve the ability for innovation, the ability for vendors to have better access to data that will help them create better products, and to ultimately improve the patient and provider experience, and healthcare overall.'

"AI proliferation in healthcare shines light on HIPAA shortcomings" was originally created and published by Medical Device Network, a GlobalData owned brand.
Yahoo, an hour ago
Federal Reserve economists aren't sold that AI will actually make workers more productive, saying it could be a one-off invention like the light bulb
A new Federal Reserve Board staff paper concludes that generative artificial intelligence (genAI) holds significant promise for boosting U.S. productivity, but cautions that its widespread economic impact will depend on how quickly and thoroughly firms integrate the technology.

Titled 'Generative AI at the Crossroads: Light Bulb, Dynamo, or Microscope?', the paper, authored by Martin Neil Baily, David M. Byrne, Aidan T. Kane, and Paul E. Soto, explores whether genAI represents a fleeting innovation or a groundbreaking force akin to past general-purpose technologies (GPTs) such as electricity and the internet.

The Fed economists ultimately conclude their 'modal forecast is for a noteworthy contribution of genAI to the level of labor productivity', but caution they see a wide range of plausible outcomes, both in terms of its total contribution to making workers more productive and how quickly that could happen. To return to the light-bulb metaphor, they write that 'some inventions, such as the light bulb, temporarily raise productivity growth as adoption spreads, but the effect fades when the market is saturated; that is, the level of output per hour is permanently higher but the growth rate is not.' Here's why they regard it as an open question whether genAI may end up being a fancy tech version of the light bulb.

GenAI: a tool and a catalyst

According to the authors, genAI combines traits of GPTs—those that trigger cascades of innovation across sectors and continue improving over time—with features of 'inventions of methods of invention' (IMIs), which make research and development (R&D) more efficient. The authors do see potential for genAI to be a GPT like the electric dynamo, which continually sparked new business models and efficiencies, or an IMI like the compound microscope, which revolutionized scientific discovery.
The Fed economists did caution that it is early in the technology's development, even as they wrote that 'the case that generative AI is a general-purpose technology is compelling, supported by the impressive record of knock-on innovation and ongoing core innovation.' Since OpenAI launched ChatGPT in late 2022, the authors said genAI has demonstrated remarkable capabilities, from matching human performance on complex tasks to transforming frontline work in writing, coding, and customer service. That said, the authors said they're finding scant evidence about how many companies are actually using the technology.

Limited but growing adoption

Despite such promise, the paper stresses that most gains are so far concentrated in large corporations and digital-native industries. Surveys indicate high genAI adoption among big firms and technology-centric sectors, while small businesses and other functions lag behind. Data from job postings shows only modest growth in demand for explicit AI skills since 2017.

'The main hurdle is diffusion,' the authors write, referring to the process by which a new technology is integrated into widespread use. They note that typical productivity booms from GPTs like computers and electricity took decades to unfold as businesses restructured, invested, and developed complementary innovations. 'The share of jobs requiring AI skills is low and has moved up only modestly, suggesting that firms are taking a cautious approach,' they write. 'The ultimate test of whether genAI is a GPT will be the profitability of genAI use at scale in a business environment and such stories are hard to come by at present.'

They note that many individuals are using the technology, 'perhaps unbeknownst to their employers,' and they speculate that future use of the technology may become so routine and 'unremarkable' that companies and workers no longer know how much it's being used.
Knock-on and complementary technologies

The report details how genAI is already driving a wave of product and process innovation. In healthcare, AI-powered tools draft medical notes and assist with radiology. Finance firms use genAI for compliance, underwriting, and portfolio management. The energy sector uses it to optimize grid operations, and information technology is seeing multiple uses, with programmers using GitHub Copilot completing tasks 56% faster. Call center operators using conversational AI saw a 14% productivity boost as well.

Meanwhile, ongoing advances in hardware, notably rapid improvements in the chips known as graphics processing units, or GPUs, suggest genAI's underlying engine is still accelerating. Patent filings related to AI technologies have surged since 2018, coinciding with the rise of the Transformer architecture—a backbone of today's large language models.

'Green shoots' in research and development

The paper also finds genAI increasingly acting as an IMI, enhancing observation, analysis, communication, and organization in scientific research. Scientists now use genAI to analyze data, draft research papers, and even automate parts of the discovery process, though questions remain about the quality and originality of AI-generated output. The authors highlight growing references to AI in R&D initiatives, both in patent data and corporate earnings calls, as further evidence that genAI is gaining a foothold in the innovation ecosystem.

Cautious optimism—and open questions

While the prospects for a genAI-driven productivity surge are promising, the authors warn against expecting overnight transformation. The process will require significant complementary investments, organizational change, and reliable access to computational and electric power infrastructure. They also emphasize the risks of investing blindly in speculative trends—a lesson from past tech booms.
'GenAI's contribution to productivity growth will depend on the speed with which that level is attained, and historically, the process for integrating revolutionary technologies into the economy is a protracted one,' the report concludes. Despite these uncertainties, the authors believe genAI's dual role—as a transformative platform and as a method for accelerating invention—bodes well for long-term economic growth if barriers to widespread adoption can be overcome. Still, what if it's just another light bulb?

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.
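The paper's light-bulb distinction, a one-off gain in the level of output versus a sustained rise in the growth rate, can be made concrete with a toy productivity series. The numbers below are illustrative assumptions, not the paper's estimates:

```python
# Toy comparison: "light bulb" (one-time level shift, same growth rate)
# vs. "dynamo" (the growth rate itself rises). Figures are illustrative only.
years = range(10)
baseline = [100 * 1.01 ** t for t in years]    # 1% trend productivity growth
light_bulb = [x * 1.05 for x in baseline]      # permanent 5% level gain, growth unchanged
dynamo = [100 * 1.02 ** t for t in years]      # growth rate doubles to 2%

def growth(series):
    """Year-over-year growth rate at the end of the series."""
    return series[-1] / series[-2] - 1
```

The light-bulb path ends higher than the baseline but still grows at 1%, while the dynamo path compounds past it, which is exactly why the authors say the level gain and the growth-rate question have to be judged separately.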