Will AI agents become the go-to for content creation, customer interactions?


Campaign ME, 27-02-2025
AI has become a staple in every marketing and communications conversation, streamlining tasks and enhancing efficiency. From content creation to customer interactions, AI capabilities continue to expand.
However, alongside its benefits come concerns — privacy risks, content copyright issues, and instances of AI missteps have sparked debate.
With both the opportunities and challenges in mind, we asked a range of industry experts whether we will see a greater move towards AI agents for tasks such as content creation and customer interactions.
Kartik Aiyar
Head of Creative, Tuesday Communications
The answer is layered. On one hand, the appeal of AI agents is undeniable. They excel at tasks requiring speed, precision and the ability to process massive amounts of data. Content generation tools such as ChatGPT can now produce blog posts in a fraction of a second. Similarly, in customer service, AI-driven chatbots are revolutionising brand-audience engagement by providing instant, round-the-clock support.
However, these advancements come with challenges. For instance, when creating campaign visuals, I sometimes question whether the AI is synthesising its output from existing work that's already out there. Privacy concerns are another pressing issue, as AI tools often rely on user data to deliver personalised outputs. The more accurate the data fed into these systems, the better the results – but this raises important questions about how much information we should input and how it's being handled.
Moreover, while AI tools are impressive, they cannot function without human intervention and ingenuity. They lack the human touch – the intuition, empathy, cultural awareness and contextual understanding that are critical in content creation and customer interactions. Quite often, I've seen AI misinterpret context or generate outputs that miss the mark entirely.
So, will we see a greater shift toward AI agents for these tasks? Absolutely. The efficiency and scalability that AI offers are undeniable, and its adoption will only grow as the tools become more advanced and accessible. However, this doesn't mean humans will be replaced. Instead, I envision a collaborative future where AI acts as an aid, handling repetitive, time-intensive tasks and freeing us to focus on strategy, creativity and emotional connection – the elements AI still can't replicate, at least for now.
Ahmed Noureldin
Head of Sales – Dubai, BackLite Media
AI is transforming marketing and communications, as seen in recent campaigns like Coca-Cola's AI-generated initiatives and AI-driven virtual assistants for customer satisfaction. AI will play a significant role in advancing marketing, making innovations like hyper-personalisation and sentiment analysis possible. However, we must also address ethical concerns related to privacy and copyright. AI is fundamentally a tool, and its impact depends on how humans use it – like a knife that can either aid food preparation or cause harm. To navigate these ethical dilemmas, companies should adopt ethical AI practices, governance must implement clear regulations, and audiences should hold companies accountable for unethical behaviour. While ethical challenges are important, I believe AI's benefits far outweigh its downsides. Instead of deterring us, these challenges should motivate us to use AI responsibly and creatively.
Bachir Zeidan
Head of Digital Media Services, BPN MENA
We're already there, with the shift towards AI agents for content creation and customer interactions rapidly accelerating. Businesses are increasingly adopting AI for efficiency, scalability and personalisation. Companies such as OpenAI with ChatGPT, Writesonic and Copy.ai are already assisting in generating blog posts and content copy. In customer service, AI tools such as Zendesk's chatbots and Interactions LLC's virtual assistants are handling inquiries, reducing wait times and costs.
As AI continues to advance, we move closer to the concept of the singularity, or artificial general intelligence (AGI), when human intelligence converges with machine intelligence. This progression will drive adoption across industries, delivering seamless user experiences and transforming the way businesses operate.
Jamal Almawed
Founder and Managing Director, Gambit Communications
Unfortunately, yes, but that doesn't mean it's the right way forward. Artificial intelligence is undeniably useful for enhancing customer segmentation and targeting, programmatic advertising, search optimisation, chatbot effectiveness and data-driven personalisation of content. However, it remains far from being a viable replacement for humans in content creation and customer service – areas that demand a deep understanding of tonality and nuance.
The reality is that many agencies view AI as a significant cost-saving tool and are adopting it regardless. They may be in for a wake-up call in 2025.
Mark Gomis Abeysinghe
Content Manager, MCH Global
AI is reshaping the future of marketing and communications, delivering innovative solutions in an industry driven by efficiency and personalisation. Its ability to manage large-scale content creation, analyse trends, predict behaviours and deliver tailored outcomes has made it an indispensable tool for modern marketers.
While concerns around privacy and ethics remain valid, ongoing advancements in AI regulation and accountability are steadily addressing these challenges.
In creative industries, AI doesn't replace human talent but serves as a powerful ally, enhancing productivity, streamlining processes and enabling teams to focus on strategic, high-impact tasks. The future lies in hybrid models, where humans and AI collaborate to achieve creativity and authenticity in meaningful ways.
Benjamin Thomas
Creative Director, JWI
There are two questions here, and they require two very different answers.
AI for customer interactions? That's an easy yes. It's fast, cost-effective, and keeps customers satisfied with near-instant responses. While not every reply will be perfect, speed and scale ultimately win out.
Content creation, however, is a different story. AI is clever, but it's not original. Great content requires a human spark – the wit, empathy, and imagination needed to connect with real people. Platforms shift, trends evolve and audiences become savvier by the day. AI can assist, but it won't lead the charge. To inspire or spark something new, human creativity remains essential.
Ibrahim Hasan
Head of McCann Content Studios, MENA
AI is a powerful tool, but the real danger lies in how we, as marketers, approach it. Automation enhances efficiency, but it cannot replace the ownership of our creative process. If we rely too heavily on AI and stop holding ourselves accountable for originality, we risk losing the essence of creativity. That's not just dangerous, it's also negligent.
AI should enhance our work, not diminish the spark that makes it uniquely human. The key is balance: using AI to accelerate and amplify while ensuring it doesn't take over. Creativity thrives on ownership, and that's something no algorithm can replicate.

Related Articles

Future is 'Agentic' and already unfolding
Tahawul Tech

Let me take you on a journey, not into some far-off sci-fi future, but into a tomorrow that's just around the corner. Imagine walking into your workplace and finding that some of your 'colleagues' are no longer human. They're not robots in the traditional sense, but autonomous software agents trained on vast datasets, equipped with decision-making power, and capable of performing economic, civic, and operational tasks at scale. These agents write policies, monitor supply chains, process health records, generate news, and even govern our digital interactions. This isn't a scene from a movie. A tectonic shift is heading our way, one that will transform how we work, how governments function, and even how communities operate. In this world, digital public infrastructure (DPI) will not be a convenience. It will be a lifeline.

This shift is already progressing in the heart of the Middle East. Ambitious projects like NEOM in Saudi Arabia are exploring how agentic AI can be woven into the fabric of urban life. They aim to build an ecosystem of autonomous agents that redefines how cities are developed and managed.

Sovereignty in the Age of Agents

We like to say, 'Everyone has data.' But the real question is: Where is it? Who controls it? Who governs access to it? In a world run by agents, these are not purely technical questions but ones of power, accountability, and autonomy. A sovereign nation that cannot locate, trust, or manage its data risks losing control. A government that cannot verify what its own agents have learned, or with whom they are communicating, is no longer governing.

To survive and thrive in this new ecosystem, DPI must evolve into Digital Shoring: a foundation for sovereign, trusted, and open environments built on four pillars:

Open Data – because open data depends on trust. It requires clear data lineage, verified provenance, and accountable governance. Knowing where your data came from and where it's going is essential for any system that relies on it.
Open Source Software – because critical infrastructure built on black boxes is neither secure nor sovereign.
Open Standards – because without shared protocols, agents can't cooperate, institutions can't interoperate, and governments can't govern.
Open Skills – because the capacity to read a balance sheet, or audit a neural net, shouldn't belong to a privileged few.

This is the backbone of an agentic society that is fair, sovereign, and resilient.

Agentic Intelligence: More Than Just Fancy Tools

Let's talk about what agents actually are – and what they aren't. Imagine I hand a company's financial statement to two readers: a junior analyst and a seasoned economist. Both might understand the numbers, but only one can extract strategic insight. Similarly, agents can read, analyze, and reason. But the quality of their actions depends entirely on the skills they are equipped with. These skills can be trained, acquired, or, most importantly, shared.

In public sector contexts, this presents an extraordinary opportunity. Why should every institution reinvent the same agent? Why can't the skills of a fraud-detection agent used in one department be transferred, securely and ethically, to another? Just as people share their expertise, we need infrastructure for sharing agentic capabilities across digital institutions. This is where organizations like the UN can help, by setting the standards and helping everyone through the lens of the Global Digital Compact initiative.

From 'Sovereign Cloud' to 'Sovereign AI Platforms'

Right now, a lot of the talk is about keeping data inside national borders. But in the world of agents, that is just not enough. What really matters is where and how models are trained, how they are managed, and how we keep them in check. We need Sovereign AI Platforms – akin to the way HR departments manage employees: verifying credentials, ensuring alignment, monitoring performance, and enabling collaboration.

Companies such as Cloudera are developing the scaffolding for such platforms: secure hybrid AI environments, open-source data pipelines, governance-first orchestration layers, and modular LLM-serving infrastructure that respects national compliance frameworks. But no company can do this alone. This is a global mission.

Open by Design, Governed by Default

Governments around the world are already realising that private AI cannot be built on public cloud monopolies. Digital identity and agent oversight need to be open and transparent, not hidden, ad hoc, or opaque. So the future must be open by design – in code, in data, and in protocols – and governed by default: from digital IDs that authenticate not only humans but also agents and their behavior, to full knowledge graphs that maintain shared institutional knowledge across systems, together with audit trails that document every decision, every inference, every prompt. This goes beyond technology. It involves creating a new kind of digital society that is designed to empower states, safeguard citizens, and align intelligence with democratic values.

The Path Forward

This transformation will not be easy. It will require bold policy, sustained investment, cross-border cooperation, and, above all, technical leadership grounded in values. But make no mistake: digital cooperation is not optional. It is the condition for sovereignty in an agentic world. Without it, we are left with silos, vendor lock-in, and algorithmic drift. With it, we build a future where intelligence, human or machine, serves the public good.

So let's move beyond the buzzwords. Let's build platforms, protocols, and public goods that are open, modular, and sovereign. Let's treat agents not just as tools, but as members of a digital society in need of governance, trust, and cooperation.
And maybe, when we look back at today from the vantage point of tomorrow, we'll remember this moment not as a crisis, but as the moment we chose to govern the future together.

This opinion piece is authored by Sergio Gago Huerta, CTO at Cloudera.

AI's impact expands beyond underwriting in insurance sector: Report
Arabian Business

Artificial intelligence (AI) continues to reshape the insurance sector, extending its influence beyond underwriting and risk profiling to other critical areas of the insurance value chain, according to a new survey by GlobalData.

Underwriting and risk profiling remain the areas most positively impacted by AI, with 45.8 per cent of industry professionals identifying them as the top beneficiaries. However, this represents a decline of nearly 10 percentage points since 2023, suggesting that insurers are increasingly applying AI in other functions. Claims management and customer service followed, with 20.3 per cent and 17.6 per cent of respondents, respectively, citing these areas as most influenced by AI. Customer service, in particular, has seen notable growth, increasing by 6.2 percentage points since the previous poll. Similarly, AI's role in product development more than tripled in recognition, rising from 1.9 per cent to 7.2 per cent.

Charlie Hutcherson, Associate Insurance Analyst at GlobalData, said insurers are now broadening their AI applications beyond underwriting, despite challenges such as regulatory hurdles, data quality, and fairness in risk models. He highlighted the increasing traction AI has gained in customer service, where automation enables faster triage, more accurate responses, and higher satisfaction rates. Hutcherson also pointed to the rising impact of AI in product development, reflecting insurers' growing focus on trend analysis, identifying coverage gaps, and accelerating speed to market. He described the overall shift as a sign of a 'more mature and diversified approach', with insurers recognising AI's transformative potential across multiple areas of their business.

With rising competition, insurers face pressure to differentiate themselves by expanding AI capabilities not just in efficiency-driven processes but also in customer-facing and product innovation areas. Hutcherson stressed the need for a holistic deployment of AI, balancing efficiency gains with fairness, transparency, and regulatory compliance. 'Those who can strike this balance will be best positioned to build long-term trust and value,' Hutcherson said.

Ozzy Osbourne AI tribute sparks 'digital resurrection' debate
The National

Fans of Black Sabbath singer Ozzy Osbourne have criticised musician Tom Morello after he shared an AI-generated image of the rock star, who died this week at the age of 76. Osbourne bid farewell to fans earlier this month with a Black Sabbath reunion show in the British city of Birmingham. His death led to tributes from fans and musicians. They included Morello's post, which sparked anger among X users.

The backlash over the stylised image – which included deceased rock stars Lemmy, Randy Rhoads and Ronnie James Dio – centred on what many saw as an exploitative and unsettling trend, with users questioning the ethics of sharing such visuals so soon after Osbourne's death. It is the latest flashpoint in a growing debate: when does using AI to recreate someone's likeness cross the line from tribute to invasion of privacy? While the tools behind these hyper-realistic images are evolving rapidly, the ethical frameworks and legal protections have not yet caught up.

Deepfakes and grief in the digital age

Using AI to recreate the dead or the dying, sometimes referred to as "grief tech" or "digital resurrection", is becoming increasingly common, from fan-made tributes of celebrities to "griefbots" that simulate the voice or personality of a lost loved one. In one example of grief tech, in September 2020 Canadian Joshua Barbeau used Project December, a GPT-3-based chatbot created by Jason Rohrer, to recreate conversations with his dead fiancée, eight years after her death. The chatbot's responses were so convincing that she "said" things like: "Of course it is me. Who else could it be? I am the girl that you are madly in love with."

Mental health experts warn that such recreations can profoundly affect the grieving process. "The predictable and comforting responses of AI griefbots can create unrealistic expectations for emotional support, which could impact a person's ability to build healthy relationships in the future," said Carolyn Yaffe, a cognitive behaviour therapist at Medcare Camali Clinic in Dubai. "Some people find comfort and a sense of connection through them. In contrast, others might face negative effects, like prolonged denial, emotional pain, or even feelings of paranoia or psychosis."

Interacting with AI likenesses can blur the lines between memory and reality, potentially distorting a person's emotional recovery, Ms Yaffe said. "These tools may delay acceptance and create a space where people stay connected to digital surrogates instead of moving on," she added. "Grief doesn't fit into neat algorithms."

Lack of legal safeguards

There is limited legal protection against these practices. In the Middle East, specific laws around AI-generated likenesses are still emerging. Countries including the UAE and Saudi Arabia address deepfakes under broader laws related to cyber crimes, defamation, or personal data protection. But there are still no clear regulations dealing with posthumous image rights or the AI-based recreation of people. Most laws focus on intent to harm, rather than on consent or digital legacy after death.

In the UK, for example, there are no posthumous personality or image rights. Some states in the US, including California and New York, have begun to introduce limited protections, while others do not offer any. In China, draft legislation has begun to address AI deepfakes. Denmark, however, has been a pioneer on the issue, proposing a law that would grant people copyright-like control over their image, voice and likeness. The legislation, expected to pass this year, would allow Danish people to demand the removal of unauthorised deepfake content and seek civil damages, even posthumously, marking the first time such protections would be implemented in Europe.

"Copyright does not protect someone's appearance or voice," said Andres Guadamuz, a reader in intellectual property law at the University of Sussex. "We urgently need to reform image and personality rights to address unauthorised AI depictions, particularly for vulnerable individuals, including the deceased or critically ill, where dignity, consent, and misuse risks are paramount."

Consent, culture and control

Ethical concerns about recreating the image or voice of someone who is critically ill or dead go beyond legal frameworks. Arda Awais, co-founder of UK-based digital rights collective Identity 2.0, believes that, even when AI tributes are carried out with good intentions, they carry significant risks. "Even with consent from the deceased, there could be ways a likeness is used which might not be 100 per cent in line with someone's wishes, too. Or how its use evolves," Ms Awais said. She added that a one-size-fits-all approach may not be practical across different cultures, emphasising the need for more inclusive and diverse conversations when establishing ethical standards.

While some families or individuals may welcome AI tributes as a means to preserve memories, others may view them as exploitative or harmful, particularly when they involve celebrities, whose images are frequently recycled without their permission. "Grief is such a personal experience," Ms Yaffe said. "For some, griefbots might provide a moment of relief. But they should be seen as a bridge, not the final destination." Experts warn that AI should never replace the emotional labour of mourning or the human connections that aid the healing process.

"AI-generated responses can completely miss the point, not because the technology is harmful, but because it lacks the essential quality that grief requires – humanity," Ms Yaffe said.
