
Informatica unveils AI Agent Engineering for data management
Informatica has outlined its strategy for integrating agentic AI into its cloud data management platform, introducing new AI-powered capabilities and expanded industry collaborations.
The company's latest initiatives build on previous AI developments, such as CLAIRE GPT, CLAIRE Copilot and GenAI blueprints for major cloud ecosystem partners. Through these efforts, Informatica aims to offer organisations a system of intelligence to enhance data-driven decision-making and AI outcomes by cataloguing an enterprise's data assets.
Amit Walia, Chief Executive Officer at Informatica, stated, "As the world of AI agents proliferates, the winners will be those who can connect, govern and manage agents at scale while providing enterprise-wide access to trusted data. With the launch of CLAIRE® Agents and AI Agent Engineering, we are redefining what is possible in data management and AI orchestration. By combining the deep intelligence of our CLAIRE Agents with a no-code, enterprise-grade foundation, Informatica empowers businesses to turn autonomous agents into a strategic advantage, securely and confidently. With AI Agent Engineering, we're enabling organisations to rapidly build, connect and orchestrate intelligent agent workflows across complex hybrid ecosystems, all without writing a single line of code."
Emphasising the growing importance of AI-ready data practices, Informatica referenced a Gartner report that predicts more than 60% of AI projects could fail to achieve business service-level agreements or be abandoned by 2026 if organisations do not support their AI use cases through effective data management.
Among the newly announced services, AI Agent Engineering is designed to help businesses construct, link and oversee intelligent multi-agent AI systems, making it possible for companies to develop and deploy business applications faster and at scale. The service offers a no-code environment to run and manage AI agents across platforms including AWS, Azure, Databricks, Google Cloud, Microsoft, Salesforce, and Snowflake.
AI Agent Engineering's features include metadata awareness and context intelligence, which ensure AI agents operate on trusted, governed data. The service also allows organisations to use existing resources within Informatica's platform—such as mappings and business processes—as skills that can be integrated with third-party agents. It is built on an AI-ready, scalable platform designed for global workloads, and its interface is aimed at both technical and non-technical users.
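Informatica has not published a public API for AI Agent Engineering, so the following is a purely illustrative Python sketch of the general pattern the article describes: wrapping an existing asset (here, a hypothetical deduplication mapping) as a named "skill" that an agent can register and invoke. All names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Skill:
    """A reusable capability an agent can invoke, e.g. an existing mapping."""
    name: str
    run: Callable[[dict], dict]


@dataclass
class Agent:
    name: str
    skills: Dict[str, Skill] = field(default_factory=dict)

    def register(self, skill: Skill) -> None:
        self.skills[skill.name] = skill

    def invoke(self, skill_name: str, payload: dict) -> dict:
        # Route the request to the named skill; a real orchestrator would
        # also attach governance metadata and lineage context here.
        return self.skills[skill_name].run(payload)


# Wrap a hypothetical "existing mapping" (order-preserving dedupe) as a skill.
dedupe = Skill("dedupe_customers",
               run=lambda p: {"rows": list(dict.fromkeys(p["rows"]))})

agent = Agent("customer_data_agent")
agent.register(dedupe)
result = agent.invoke("dedupe_customers", {"rows": ["a", "b", "a"]})
print(result)  # {'rows': ['a', 'b']}
```

In this pattern, a no-code environment would generate the skill registrations from catalogued assets rather than requiring users to write the wrappers themselves.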
Desigan Reddi, Vice President of IT and Operations at Wescom Financial, commented, "At Wescom Financial, our mission is to deliver innovative, member-focused solutions while ensuring operational excellence and data integrity across every channel. Informatica's new AI Agent Engineering service is a game-changer for organisations like ours, enabling us to build and orchestrate intelligent AI agent workflows securely and at scale—without the need for complex coding. The ability to connect agents across our hybrid ecosystem, leveraging trusted data, empowers both our technical and business teams to accelerate automation and drive real-time, data-driven decisions. This no-code, metadata-aware approach aligns perfectly with our vision of making advanced AI accessible and actionable, helping us enhance member experiences and streamline operations as we continue to lead in digital transformation for the credit union industry."
Additionally, Informatica introduced CLAIRE Agents, a suite of autonomous digital assistants that use AI reasoning to automate a range of data operations, such as data ingestion, lineage tracking, and quality assurance. These agents adhere to open standards and provide integration with Informatica's Intelligent Data Management Cloud platform, aiming to improve productivity, data accuracy and scalability.
The new experience offered by CLAIRE Agents will adapt dynamically to the user's context rather than centring around predefined tasks, providing a personalised and flexible interface.
Key features of the CLAIRE Agents include continuous data quality monitoring, rapid identification of compliant data for analytics and AI, automatic generation of data lineage, automated data ingestion pipelines, optimised ELT jobs for cloud platforms, and the ability to automate data engineering workflows, as well as product data enrichment and goal-based data exploration.
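The continuous data quality monitoring described above can be illustrated with a minimal, generic sketch: a batch of records is checked against a set of quality rules and a failure report is produced, roughly what a monitoring agent would do on a schedule. The rules and record fields here are invented examples, not Informatica functionality.

```python
from typing import Callable, Dict, Iterable

# Hypothetical quality rules: each returns True when a record passes.
RULES: Dict[str, Callable[[dict], bool]] = {
    "email_present": lambda r: bool(r.get("email")),
    "age_in_range": lambda r: 0 <= r.get("age", -1) <= 120,
}


def quality_report(records: Iterable[dict]) -> dict:
    """Count rule failures across a batch, as a monitoring agent might."""
    failures = {name: 0 for name in RULES}
    total = 0
    for rec in records:
        total += 1
        for name, rule in RULES.items():
            if not rule(rec):
                failures[name] += 1
    return {"total": total, "failures": failures}


batch = [
    {"email": "a@example.com", "age": 34},
    {"email": "", "age": 200},  # fails both rules
]
report = quality_report(batch)
print(report)
# {'total': 2, 'failures': {'email_present': 1, 'age_in_range': 1}}
```

An autonomous agent would extend this loop with remediation steps (quarantining failing rows, opening tickets) and feed the results back into the catalogue.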
Wout Vandegaer, Managing Director at Deloitte Consulting LLP, said, "AI agents hold great promise to transform business models and usher in new ways of working and, through our collaboration with Informatica, our joint clients can unlock the potential of AI agents with access to trusted data as the foundation for autonomous decision-making. With Informatica's AI Agent Engineering framework, our joint clients can build, manage and connect their own intelligent agents, while Informatica's CLAIRE Agents can help streamline complex data operations and ensure trusted and compliant data flows."
CLAIRE Agents are projected to become available for preview in late 2025. Informatica's CLAIRE Copilot, which uses generative AI models such as Azure OpenAI to assist developers with data transformation and integration pipelines, is generally available for Data Integration and Cloud Application Integration.
The announcement also included updates on collaboration with industry partners. Informatica stated it had achieved GenAI Competency certification with AWS and is introducing new product capabilities including AI agents with Amazon Bedrock and SQL ELT for Amazon Redshift. Its partnership with Databricks has been expanded to support customer migration to Informatica's platform, while new initiatives were also revealed with Microsoft, NVIDIA, Oracle and Salesforce, addressing topics such as deeper integrations, support for industry-specific inferencing models and the delivery of AI-driven customer intelligence.