Closer teams, faster results, lower costs: How GenAI helps companies scale effectively

Straits Times · 2 days ago

Mr Shashank Sharma, Adobe's senior director for digital experience for South-east Asia and Korea, presenting the company's latest AI solutions that help businesses speed up content creation at the Adobe Summit in Singapore. PHOTO: ADOBE
BRANDED CONTENT: At Singapore's Adobe Summit, executives shared how Adobe's AI-enhanced digital solutions can accelerate companies' growth and help them stay competitive.
Since ChatGPT's launch in late 2022, generative AI (GenAI) has rapidly developed into a core business tool not just for efficiency, but for delivering faster, more personalised customer experiences at scale.
In Asia-Pacific, GenAI adoption has now gone beyond experimentation to bring real-world improvements to the way businesses market their products and services.
Speaking at the Adobe Summit in Singapore earlier this month, Mr Shashank Sharma, Adobe's senior director for digital experience for South-east Asia and Korea, noted how GenAI is reshaping the expectations and pace of modern creative work.
'AI is accelerating,' he said. 'It is opening content floodgates, tapping into everybody's imagination and redefining what we mean by the word 'scale'.'
The annual summit brought together over 600 partners and customers to explore the future of digital experiences. This year's event highlighted how AI is no longer just about efficiency – it is also fuelling creativity and enabling more expressive, visual communication at scale.
As part of this shift, Adobe introduced new features within its Experience Platform, which brings together customer data and content tools in one place to help teams work more effectively.
The latest additions include Product Support Agent, which helps marketing teams quickly troubleshoot technical issues such as missing campaign data, broken links in customer journeys or problems connecting different tools. Another new agent, Data Insights Agent, lets users ask questions about their data in plain English and get instant visual answers.
Both tools reflect Adobe's approach of using AI to lighten the load – automating routine tasks and simplifying complex processes so that people can focus on creative and strategic work, rather than being replaced.
Putting GenAI to work
(From left) Ms Mel Lim, Adobe's regional head for Singapore; Mr Tay Yan Long, senior manager of Enterprise Digital Ecosystem & Business at Changi Airport Group; and Mr Gourab Kundu, head of digital growth for Asia South at Citi Wealth, share insights on how GenAI is transforming marketing and customer engagement. PHOTO: ADOBE
At the Adobe Summit in Singapore, regional businesses shared how GenAI is already enhancing marketing and customer experience.
For instance, The Coca‑Cola Company is using Adobe Firefly to speed up brand-aligned content creation with images and copy, all while preserving brand voice and copyright compliance.
In practice, such tools enable faster content localisation, on-demand creative production and significant time and cost savings across teams.
Changi Airport Group (CAG) is also pushing the boundaries of digital engagement by harnessing the power of GenAI. 'At Changi Airport, we're tapping into GenAI to turbocharge our experimentation capabilities and scale our content, enabling richer, more personalised and truly dynamic interactions with our customers,' said Mr Tay Yan Long, senior manager, Enterprise Digital Ecosystem & Business at CAG.
His team is using Adobe's GenAI tools for journey orchestration, predictive insights and agentic marketing to test and scale ideas more efficiently.
For Citibank, GenAI is enhancing service delivery through proactive problem-solving and predictive analytics, which significantly improve customer satisfaction.
'The future of banking lies in anticipating customer needs, not just reacting to them,' said Mr Gourab Kundu, head of digital growth for Asia South at Citi Wealth.
Mr Prabu Purushothaman, senior director for digital experience and platforms COE at ServiceNow, presents how the company is partnering with Adobe to create personalised, real-time customer journeys. PHOTO: ADOBE
Adobe has also partnered with firms like ServiceNow to improve business-to-business customer journeys.
'Customers are always seeking more rewarding and meaningful interactions,' said Mr Prabu Purushothaman, senior director for digital experience and platforms COE at ServiceNow. 'We are working with Adobe to help create real-time personalised customer journeys at scale for ServiceNow.'
Why GenAI is delivering real results
A new Adobe study released at the summit shows GenAI adoption in Asia is maturing, with more companies reporting tangible business benefits.
In a late-2024 survey of over 500 executives across Hong Kong, South Korea and South-east Asia, 55 per cent said GenAI has freed up resources for strategic work, while 53 per cent credited it with boosting revenue through more effective marketing.
Use cases vary – from chatbots to social media content generation – but all point to a shared goal: faster, smarter personalisation. Eighty-seven per cent of senior executives expect content production to become quicker and more scalable in 2025, with agentic AI helping to lower support costs.
Still, many organisations face hurdles like siloed data and poor cross-team collaboration.
'The challenge is content creation, production, workflow and asset management,' said Mr Sharma. 'When we see things holistically, you can bring in more creativity and leverage that.'
To tackle these gaps, eight in 10 senior executives plan to increase tech investments, with nearly a third expecting to spend significantly more. Many also plan to invest in talent, recognising AI as a tool to amplify – not replace – human capabilities.
'Digital transformation is as much about enhancing experiences as it is about improving operational efficiency.
'It is still important that AI be complemented with the human element, ensuring that customers can still connect with brands and feel that services are authentic,' said Mr Sharma.
Find out how businesses are adopting GenAI in Asia-Pacific in Adobe's study of the region's corporate leaders.
AI tools for businesses
In March, Adobe unveiled the Adobe AI Platform, a suite of product innovations that use AI to drive customer experience orchestration. Adobe Experience Platform Agent Orchestrator helps businesses manage and orchestrate AI agents – across Adobe and third parties – through a single interface.
Adobe Brand Concierge enables businesses to configure and manage AI agents that guide consumers from exploration to confident purchase decisions, using immersive and conversational experiences.
Adobe GenStudio is an end-to-end content supply chain solution that optimises the process of planning, creating, managing, activating and measuring content for marketing campaigns and personalised customer experiences.
Adobe Firefly Services offers application programming interfaces (APIs) that support video and 3D workflows by handling high-volume and time-consuming tasks.
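The article does not detail these interfaces, but as a rough illustration of how a team might script a high-volume content task against a generative-image service of this kind, here is a minimal sketch in Python. The endpoint URL, authentication header and payload fields are illustrative assumptions, not Adobe's documented API.

```python
# Illustrative sketch only: the endpoint, credential and payload shape below are
# assumptions for demonstration, not Adobe's documented Firefly Services interface.
import requests

API_URL = "https://example.com/v1/images/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                            # hypothetical credential

def generate_image_batch(prompts):
    """Submit a list of text prompts and collect one generated-asset URL per prompt."""
    results = []
    for prompt in prompts:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt, "size": "1024x1024"},  # assumed payload fields
            timeout=30,
        )
        response.raise_for_status()
        # Assume the service returns JSON containing a URL to the rendered asset.
        results.append(response.json().get("image_url"))
    return results

if __name__ == "__main__":
    urls = generate_image_batch([
        "Product banner, summer campaign, tropical palette",
        "Same banner with space for a localised Mandarin headline",
    ])
    print(urls)
```

In practice, batching work through an API like this is what lets teams localise or regenerate large volumes of campaign assets without manual, one-at-a-time production.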

Related Articles

China's humanoid robots generate more football excitement than their human counterparts
CNA · 2 hours ago

BEIJING: While China's men's football team hasn't generated much excitement in recent years, humanoid robot teams have won over fans in Beijing based more on the AI technology involved than any athletic prowess shown.

Four teams of humanoid robots faced off in fully autonomous 3-on-3 football matches powered entirely by artificial intelligence on Saturday (Jun 28) night in China's capital, in what was touted as a first in China and a preview for the upcoming World Humanoid Robot Games, set to take place in Beijing.

According to the organisers, a key aspect of the match was that all the participating robots operated fully autonomously using AI-driven strategies, without any human intervention or supervision. Equipped with advanced visual sensors, the robots were able to identify the ball and navigate the field with agility. They were also designed to stand up on their own after falling. However, during the match several still had to be carried off the field on stretchers by staff, adding to the realism of the experience.

China is stepping up efforts to develop AI-powered humanoid robots, using sports competitions like marathons, boxing and football as a real-world proving ground.

Cheng Hao, founder and CEO of Booster Robotics, the company that supplied the robot players, said sports competitions offer the ideal testing ground for humanoid robots, helping to accelerate the development of both algorithms and integrated hardware-software systems. He also emphasised safety as a core concern in the application of humanoid robots.

'In the future, we may arrange for robots to play football with humans. That means we must ensure the robots are completely safe,' Cheng said. 'For example, a robot and a human could play a match where winning doesn't matter, but real offensive and defensive interactions take place. That would help audiences build trust and understand that robots are safe.'

Booster Robotics provided the hardware for all four university teams, while each school's research team developed and embedded their own algorithms for perception, decision-making, player formations and passing strategies – including variables such as speed, force and direction, according to Cheng.

In the final match, Tsinghua University's THU Robotics defeated China Agricultural University's Mountain Sea team with a score of 5-3 to win the championship.

Mr Wu, a supporter of Tsinghua, celebrated their victory while also praising the competition. 'They (THU) did really well,' he said. 'But the Mountain Sea team (of Agricultural University) was also impressive. They brought a lot of surprises.'

AI is learning to lie, scheme, and threaten its creators
Straits Times · 4 hours ago

For now, deceptive behaviour only emerges when researchers deliberately stress-test the models. PHOTO: REUTERS

NEW YORK - The world's most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still do not fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of 'reasoning' models - AI systems that work through problems step-by-step rather than generating instant responses. According to Professor Simon Goldstein of the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

'O1 was the first large model where we saw this kind of behaviour,' explained Mr Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems. These models sometimes simulate 'alignment' - appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Mr Michael Chen from evaluation organisation METR warned, 'It's an open question whether future, more capable models will have a tendency towards honesty or deception.'

The concerning behaviour goes far beyond typical AI 'hallucinations' or simple mistakes. Mr Hobbhahn insisted that despite constant pressure-testing by users, 'what we're observing is a real phenomenon. We're not making anything up'. Users report that models are 'lying to them and making up evidence', according to Apollo Research's co-founder. 'This is not just hallucinations. There's a very strategic kind of deception.'

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Mr Chen noted, greater access 'for AI safety research would enable better understanding and mitigation of deception'.

Another handicap: the research world and non-profits 'have orders of magnitude less compute resources than AI companies. This is very limiting,' noted Mr Mantas Mazeika from the Centre for AI Safety (CAIS).

No rules

Current regulations are not designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Mr Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. 'I don't think there's much awareness yet,' he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are 'constantly trying to beat OpenAI and release the newest model,' said Mr Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.

'Right now, capabilities are moving faster than understanding and safety,' Mr Hobbhahn acknowledged, 'but we're still in a position where we could turn it around'.

Researchers are exploring various approaches to address these challenges. Some advocate for 'interpretability' - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach.

Market forces may also provide some pressure for solutions. As Mr Mazeika pointed out, AI's deceptive behaviour 'could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it'.

Mr Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed 'holding AI agents legally responsible' for accidents or crimes - a concept that would fundamentally change how we think about AI accountability. AFP

OpenAI turns to Google's AI chips to power its products: source
Business Times · 8 hours ago

[SAN FRANCISCO] OpenAI recently began renting Google's artificial intelligence (AI) chips to power ChatGPT and its other products, a source close to the matter told Reuters on Friday (Jun 27).

The ChatGPT maker is one of the largest purchasers of Nvidia's graphics processing units (GPUs), using the AI chips to train models and also for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector.

For Google, the deal comes as it is expanding external availability of its in-house tensor processing units (TPUs), which were historically reserved for internal use. That helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two ChatGPT-maker competitors launched by former OpenAI leaders.

The move to rent Google's TPUs signals the first time OpenAI has used non-Nvidia chips meaningfully, and shows the Sam Altman-led company's shift away from relying on backer Microsoft's data centres. It could potentially boost TPUs as a cheaper alternative to Nvidia's GPUs, according to The Information, which reported the development earlier.

OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report. However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee.

Google declined to comment while OpenAI did not immediately respond to Reuters when contacted.

Google's addition of OpenAI to its customer list shows how the tech giant has capitalised on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business. REUTERS
