
Tray.ai unveils Merlin Agent Builder 2.0 for enterprise AI scale
Tray.ai has released Merlin Agent Builder 2.0, offering enterprise teams a platform for building AI agents that execute tasks rather than merely answer questions.
The new version of Merlin Agent Builder introduces features designed to address persistent challenges in AI agent deployment within enterprises. Industry data indicates that while a majority of enterprises are investing over $500,000 per year in AI agents, many face difficulties in scaling these solutions and deriving value from them. Key obstacles highlighted include incomplete data, session memory limitations, challenges with large language model (LLM) configuration, and rigid deployment options.
Addressing deployment and adoption challenges
Rich Waldron, Co-Founder and Chief Executive Officer at Tray.ai, said: "Enterprise teams aren't short on ambition when it comes to AI agents - but they are short on results. This release clears the path from prototype to production by removing the blockers that stall adoption. We've built the only platform where enterprises can go from idea to working agent - fast - without compromising trust, flexibility or scale. That's how agent-led transformation actually happens."
According to Tray.ai, a significant gap exists between building and actual usage of AI agents in workplace settings. Agents that are not integrated with comprehensive and up-to-date knowledge often lose context, make unreliable decisions, and force users to repeat information, which undermines user trust and can lead to underutilisation. Meanwhile, IT and AI teams find it difficult to align LLMs with appropriate use cases, particularly where multiple agents operate in parallel, and encounter added complexity when deploying agents across various platforms.
To address these issues, Tray.ai's upgraded solution includes advancements in four key areas: integration of smart data sources for rapid knowledge preparation, built-in memory for maintaining context across sessions, multi-LLM support, and streamlined omnichannel deployment.
Smart data sources and session memory
Merlin Agent Builder 2.0 offers a new smart data sources feature aimed at simplifying the connection and synchronisation of both structured and unstructured enterprise knowledge. Through a single interface, users can link data from sources such as file uploads or Google Drive. This data is then automatically prepared and vectorised so that agents draw on relevant, reliable information.
Alistair Russell, Co-Founder and Chief Technology Officer of Tray.ai, commented: "Merlin Agent Builder isn't a services wrapper. It's a fundamental part of our product and built for ease of use and scale. It handles chunking and embedding at the source, ensuring each data source is optimally segmented and vectorized so agents are grounded in high-signal, relevant context. That means fewer retrieval failures, more reliable decisions, and agents that reason and take action. It's how teams move fast - without trade-offs."
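Tray.ai has not published the internals of this pipeline, but the "chunk and embed at the source" approach Russell describes follows a common retrieval pattern. The sketch below is purely illustrative: all names are invented, and the `embed` function is a stand-in for a call to a real embedding model.

```python
# Hypothetical sketch of "segment each source, then vectorise the segments".
# Names are invented; Tray.ai's actual pipeline is proprietary.
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str        # e.g. "google_drive" or "file_upload"
    text: str
    vector: list       # embedding used for retrieval

def chunk_document(text: str, max_chars: int = 200) -> list:
    """Split a document into overlapping segments sized for retrieval."""
    step = max_chars // 2  # 50% overlap so context isn't cut mid-thought
    return [text[i:i + max_chars]
            for i in range(0, len(text), step) if text[i:i + max_chars]]

def embed(text: str, dims: int = 8) -> list:
    """Stand-in embedding: a production system calls an embedding model here."""
    vec = [0.0] * dims
    for i, ch in enumerate(text):
        vec[i % dims] += ord(ch)
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]      # unit-normalised for cosine similarity

def ingest(source: str, text: str) -> list:
    """Prepare one data source: segment it, then vectorise each segment."""
    return [Chunk(source, c, embed(c)) for c in chunk_document(text)]

chunks = ingest("file_upload", "Quarterly revenue grew 12% year over year. " * 10)
```

The design point the quote makes is that segmentation happens per source, so each document type can be split on boundaries that preserve meaning before embedding.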
Addressing another common shortcoming of AI agents - context loss between interactions - Merlin Agent Builder 2.0 incorporates built-in memory capabilities. The platform enables agents to recall previous sessions, track conversation history, and manage both short-term and long-term memory requirements automatically. This aims to reduce the need for custom solutions and to enhance continuity in user exchanges, improving adoption rates.
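The short-term versus long-term split described above can be illustrated with a minimal sketch. This is not Tray.ai's API - the class and method names are invented - but it shows the usual shape: a capped buffer of recent turns for the current session, plus a durable store of facts that persists across sessions.

```python
# Hypothetical illustration of agent session memory. All names are invented;
# Tray.ai has not published its memory interface.
from collections import deque

class AgentMemory:
    def __init__(self, short_term_turns: int = 5):
        # Short-term: recent conversation turns, capped so prompts stay small.
        self.short_term = deque(maxlen=short_term_turns)
        # Long-term: durable facts the agent can recall in later sessions.
        self.long_term = {}

    def record_turn(self, user: str, agent: str) -> None:
        self.short_term.append((user, agent))

    def remember(self, key: str, value: str) -> None:
        self.long_term[key] = value

    def context(self) -> dict:
        """What the agent sees when replying: recent turns plus stored facts."""
        return {"recent": list(self.short_term), "facts": dict(self.long_term)}

memory = AgentMemory(short_term_turns=2)
memory.remember("preferred_region", "EU")
memory.record_turn("Open a ticket", "Ticket #1 created")
memory.record_turn("What's its status?", "Ticket #1 is open")
memory.record_turn("Close it", "Ticket #1 closed")  # oldest turn drops off
```

Because the long-term store survives the session, a user does not have to repeat information such as their region preference - the failure mode the article says erodes trust.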
Flexible large language model support
As organisations deploy multiple agents to handle diverse business processes, the ability to configure each agent with the most suitable LLM becomes increasingly important. Merlin Agent Builder 2.0 supports multiple LLM providers, including OpenAI, Gemini, Bedrock, and Azure. Teams can assign specific models to individual agents with tailored configurations, avoiding lock-in to a single provider and supporting privacy-driven workflows where necessary.
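Per-agent model assignment can be pictured as a mapping from agent to provider, model, and settings. The sketch below is illustrative only: Tray.ai's configuration surface is not public, and the agent names, model identifiers, and defaults here are assumptions.

```python
# Hypothetical per-agent LLM assignment. Agent names, model IDs, and the
# fallback choice are invented for illustration.
AGENT_MODELS = {
    "support_triage":  {"provider": "openai",  "model": "gpt-4o",
                        "temperature": 0.2},
    "doc_summariser":  {"provider": "gemini",  "model": "gemini-1.5-pro",
                        "temperature": 0.3},
    "private_finance": {"provider": "bedrock", "model": "claude-3-sonnet",
                        "temperature": 0.0},  # kept inside a private cloud boundary
}

def model_for(agent_name: str) -> dict:
    """Resolve which LLM an agent should call, with a safe default."""
    return AGENT_MODELS.get(
        agent_name,
        {"provider": "azure", "model": "gpt-4o-mini", "temperature": 0.2},
    )
```

Keeping the mapping per agent rather than platform-wide is what lets a privacy-sensitive agent route to a private deployment while others use hosted models.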
Unified deployment across channels
The updated release allows teams to build an agent once and deploy it across communication and application environments such as Slack, web applications, and APIs, or run it autonomously. Delivery configuration is incorporated directly into the agent setup process, eliminating repeated setup and technical adjustments for each channel.
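The "build once, deploy anywhere" idea usually means one channel-independent agent definition wrapped by thin per-channel adapters. The sketch below assumes this pattern; the function names and payload shapes are invented and simplified, not Tray.ai's or Slack's actual interfaces.

```python
# Hypothetical "build once, deploy across channels" sketch. One shared agent
# definition; each channel only adapts its own message format.
def agent_reply(message: str) -> str:
    """The single agent definition - channel-independent."""
    return f"Agent response to: {message}"

def handle_slack(event: dict) -> dict:
    # A chat channel delivers events as JSON; wrap the shared agent.
    return {"channel": event["channel"], "text": agent_reply(event["text"])}

def handle_api(payload: dict) -> dict:
    # The same agent behind a REST-style endpoint.
    return {"status": 200, "body": agent_reply(payload["message"])}
```

Because both handlers delegate to the same `agent_reply`, a change to the agent's behaviour propagates to every channel without per-channel rework - the maintenance saving the article describes.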
With these updates, Tray.ai targets what it identifies as critical needs for enterprises: simplified data onboarding, session-aware agents, flexible model selection, and consistent deployment experiences. The company states that by providing these features in a unified platform, both IT and business teams are better positioned to transition from pilot projects to production-ready AI agents that are actively used by employees and customers alike.