
Open ecosystems are powering the future of intelligent observability
Observability has come a long way. What began as a smart way to keep systems and software performing well has evolved into AI-strengthened platforms that accelerate innovation, improve productivity, and streamline operations. Open source has been a driving force behind this shift for over two decades, and as organisations transition to emerging software-led technologies, it will remain fundamental to what comes next for observability, and to how observability helps modern organisations achieve greater flexibility, scalability, and competitiveness.
Growing complexity of modern applications
Cloud-native technologies and DevOps have transformed software delivery, enabling faster innovation and better user experiences. But this shift has also increased complexity, with teams managing more tools, systems, and manual processes. The result? Wasted time, a higher risk of errors, and slower decision-making.
According to New Relic's 2024 Observability Forecast report, 56% of respondents in Australia and New Zealand (ANZ) reported using more than five observability tools, well ahead of their peers in Europe (43%) and the Americas (35%).
Rather than accelerating innovation or improving metrics like mean time to detect (MTTD) and mean time to resolution (MTTR), this fragmented, piecemeal approach often introduces new challenges: data silos, blind spots, poor data correlation, and added friction from licensing and costs, among other issues.
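To make the MTTD and MTTR metrics mentioned above concrete, both can be computed directly from incident timestamps. The sketch below uses invented incident records and illustrative field names, not the schema of any particular observability tool:

```python
from datetime import datetime

# Illustrative incident records: when each incident occurred, was detected,
# and was resolved. Real platforms derive these from alert and ticket data.
incidents = [
    {"occurred": datetime(2024, 5, 1, 9, 0),
     "detected": datetime(2024, 5, 1, 9, 12),
     "resolved": datetime(2024, 5, 1, 10, 0)},
    {"occurred": datetime(2024, 5, 3, 14, 0),
     "detected": datetime(2024, 5, 3, 14, 4),
     "resolved": datetime(2024, 5, 3, 14, 34)},
]

def mean_minutes(pairs):
    """Average gap in minutes across (start, end) timestamp pairs."""
    gaps = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(gaps) / len(gaps)

# MTTD: occurrence -> detection; MTTR: occurrence -> resolution.
mttd = mean_minutes((i["occurred"], i["detected"]) for i in incidents)
mttr = mean_minutes((i["occurred"], i["resolved"]) for i in incidents)
```

Shrinking the gap between these two averages is exactly what consolidated tooling is meant to achieve.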
In fact, 39% of organisations in ANZ identified the volume of monitoring tools and siloed data as the key barrier to achieving full stack observability. The stakes are higher than ever, too, with a US$2.2 million median hourly cost for high-business-impact outages.
A unified ecosystem is the way forward
Organisations are also relying on observability to achieve greater operational efficiency. Nearly a third (31%) of ANZ respondents indicated that integrating business apps such as enterprise resource planning (ERP) and customer relationship management (CRM) into workflows was a key driver for observability in their organisations. The fragmented view of systems that isolated monitoring tools provide clearly leads to significant effort and cost in troubleshooting and preventing poor performance. By consolidating various data sources into a single platform, IT teams gain critical, contextual visibility into system performance, allowing them to understand what's really happening and address problems before they escalate.
An application-agnostic approach to observability enables all software engineers to instrument, create dashboards, and set alerts across the entire technology stack.
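As a minimal illustration of what "instrument and set alerts" means in practice, the sketch below wraps any function to record its latency and flag slow calls. The in-memory store and the `instrument` decorator are hypothetical stand-ins for a real SDK such as OpenTelemetry, which would export measurements to a backend instead:

```python
import time
from collections import defaultdict

# Hypothetical in-process telemetry store; a real platform would export
# these measurements rather than keep them in memory.
latencies = defaultdict(list)

def instrument(name, alert_threshold_s=1.0):
    """Record each call's duration under `name`; warn when it exceeds the threshold."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                latencies[name].append(elapsed)
                if elapsed > alert_threshold_s:
                    print(f"ALERT: {name} took {elapsed:.2f}s")
        return inner
    return wrap

@instrument("checkout")
def checkout(order):
    # Placeholder business logic for the example.
    return {"order": order, "status": "ok"}
```

Because the decorator is agnostic about what it wraps, the same pattern covers any service in the stack, which is the point of an application-agnostic approach.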
Unlocking true intelligence for AI
With AI adoption in full swing, IT teams need to address additional complexity as AI tools bring with them intricate data pipelines, model training and inference processes, and dynamic scaling based on real-time data.
While observability practices of the past focused on gathering and analysing telemetry data to understand and resolve performance issues, the integration of AI technologies will require observability to evolve, expanding its capacity to track the specific behaviours and performance of AI components at high volume.
To fully capitalise on AI, the future of observability will revolve around an open ecosystem of interconnected agents that communicate through natural language APIs. These agents will empower users to automate research and complete complex tasks, driving higher productivity. The system will also provide intelligence within the appropriate context, offering relevant, accurate insights, and recommendations to support better business decision-making.
Predictive analytics fuelled by machine learning can analyse trends in telemetry data to foresee potential system failures or performance bottlenecks before they occur. By identifying these issues in advance, teams can take proactive steps to ensure continuous system performance and reliability, such as scaling resources or adjusting configuration.
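As a simplified sketch of that idea, a least-squares trend fitted to recent telemetry samples can project when a metric will cross a capacity limit. The disk-usage figures and the 90% threshold below are invented for the example; production systems use far richer models:

```python
import math

def linear_trend(samples):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def steps_until(samples, threshold):
    """Project how many future steps until the trend crosses `threshold`.

    Returns None when the metric is flat or declining (no breach predicted).
    """
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None
    crossing = (threshold - intercept) / slope   # x where trend hits threshold
    remaining = crossing - (len(samples) - 1)    # steps beyond the last sample
    return max(0, math.ceil(remaining))

# Disk usage (%) sampled hourly; the trend predicts when 90% will be breached.
usage = [70, 72, 74, 76, 78]
hours_left = steps_until(usage, 90)
```

An alert raised when `hours_left` drops below an operational runway is what turns the forecast into the proactive scaling or configuration change the text describes.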
The next generation of open, intelligent observability will empower organisations to unlock deeper insights and greater value. An observability platform that integrates with best-in-class technologies will enable organisations to drive growth and accelerate developer productivity by seamlessly connecting workflows and delivering insights.

Related Articles




Techday NZ, 11-06-2025
Datadog launches domain-specific AI agents & LLM tools
Datadog has announced the addition of three domain-specific AI agents to its generative AI assistant, Bits AI, together with new tools for monitoring and managing large language model (LLM) and agentic AI deployments.

New AI agents
The company has introduced Bits AI SRE, Bits AI Dev Agent, and Bits AI Security Analyst, each configured to serve specific engineering, operations, and security functions. These agents are designed to support real-time incident response, DevOps tasks, and security workflows for development, security, and operations teams. The AI agents operate on a shared system of core tasks, including data querying, anomaly analysis, and infrastructure scaling. This architecture allows Datadog to roll out new agents efficiently while maintaining consistency in the user experience. The system integrates a broad set of observability data, enabling precise insights and actions for managing risks within cloud-based applications.

Yanbing Li, Chief Product Officer at Datadog, commented on the company's approach: "Datadog is uniquely positioned to deliver value with AI as a platform that has a wealth of clean, rich data—we process trillions of data points and are embedded in our customers' critical engineering, developer and security workflows. With these advancements in AI reasoning and multi-modality, we've gone beyond helping organizations understand their availability, security, performance and reliability. We now enable human-in-the-middle workflows by guiding customers on what to look for and where to start looking, and augment their ability to take action."

Bits AI SRE, now in limited availability, acts as an on-call responder for incidents by performing early triage and providing investigation findings before human responders intervene. It allocates incidents, produces real-time summaries, and generates initial post-mortem drafts to save teams time. Bits AI Dev Agent, currently in preview, identifies code issues, suggests fixes, and can open pull requests directly within the source control management systems organisations use. Bits AI Security Analyst, also in preview, automatically investigates cloud security signals, conducts in-depth threat investigations, and produces actionable resolution recommendations, aiming to reduce response times for security incidents.

Darren Trzynka, Senior Cloud Architect at Thomson Reuters, commented on Bits AI's impact: "At Thomson Reuters, we're focused on maximizing operational efficiency and accelerating innovation at scale through generative AI solutions. Bits AI allows operations and downstream platform teams to receive the full context of the investigation—from the initial monitor trigger to conclusion—driving down resolution time significantly, freeing them up to do more."

Additional Applied AI features
The updates include two new features in preview. Proactive App Recommendations analyses telemetry collected by Datadog to suggest performance improvements or actions, such as optimising slow queries and addressing code issues, before users are impacted. The APM Investigator helps engineers troubleshoot latency spikes by automating bottleneck identification and recommending fixes.

LLM Observability suite announced
Datadog has also released a suite of tools designed to provide observability for agentic AI (software agents built with LLMs and similar technologies) in production environments. The new products include AI Agent Monitoring, LLM Experiments, and AI Agents Console.

Yrieix Garnier, Vice President of Product at Datadog, addressed the motivations behind these offerings: "A recent study found only 25 percent of AI initiatives are currently delivering on their promised ROI—a troubling stat given the sheer volume of AI projects companies are pursuing globally. Today's launches aim to help improve that number by providing accountability for companies pushing huge budgets toward AI projects. The addition of AI Agent Monitoring, LLM Experiments and AI Agents Console to our LLM Observability suite gives our customers the tools to understand, optimize and scale their AI investments."

AI Agent Monitoring, now generally available, provides a mapped overview of each agent's decision-making route, including inputs, tool calls, and outputs, displayed in an interactive graph. This enables engineers to diagnose latency spikes or unexpected behaviours and connect them to quality, security, and cost measures across distributed systems.

Mistral AI's Co-founder and CTO, Timothée Lacroix, provided further industry perspective: "Agents represent the evolution beyond chat assistants, unlocking the potential of generative AI. As we equip these agents with more tools, comprehensive observability is essential to confidently transition use cases into production. Our partnership with Datadog ensures teams have the visibility and insights needed to deploy agentic solutions at scale."

LLM Experiments, in preview, enables users to compare the effects of changes to prompts or models using datasets from live or uploaded sources. This aims to support quantifiable improvements in cost, response accuracy, and throughput, and to prevent unintended regressions in AI application performance.

Michael Gerstenhaber, Vice President of Product at Anthropic, commented: "AI agents are quickly graduating from concept to production. Applications powered by Claude 4 are already helping teams handle real-world tasks in many domains, from customer support to software development and R&D. As these agents take on more responsibility, observability becomes key to ensuring they behave safely, deliver value, and stay aligned with user and business goals. We're very excited about Datadog's new LLM Observability capabilities that provide the visibility needed to scale these systems with confidence."

Datadog has also introduced AI Agents Console, currently in preview, to allow organisations to centrally oversee both in-house and third-party AI agents, track their usage and impact, and monitor for potential security or compliance issues as external agents are embedded into critical business workflows.

Armita Peymandoust, Senior Vice President, Software Engineering at Salesforce, said: "As enterprises scale digital labour, having clear visibility into how AI agents drive business impact has become mission critical. Customers are already seeing strong success with their AI deployments using Salesforce's Agentforce, which is built on a foundation of openness and trust. That foundation is further strengthened by our partner ecosystem that provides our customers even greater availability to tailored solutions that help them manage their AI agents confidently. Datadog's latest advances in deep observability will further support our vision and unlock another level of AI agent transparency and scale for organizations."


Techday NZ, 10-06-2025
ChatGPT leads enterprise AI, but model diversity is surging
New Relic has published its first AI Unwrapped: 2025 AI Impact Report, presenting data from 85,000 businesses on enterprise-level adoption and usage trends in artificial intelligence models.

ChatGPT's leading role
The report reveals that developers are overwhelmingly favouring OpenAI's ChatGPT for general-purpose AI tasks. According to the findings, more than 86% of all large language model (LLM) tokens processed by New Relic customers involved ChatGPT models.

Nic Benders, Chief Technical Strategist at New Relic, stated: "AI is rapidly moving from innovation labs and pilot programmes into the core of business operations. The data from our 2025 AI Impact Report shows that while ChatGPT is the undisputed dominant model, developers are also moving at the 'speed of AI,' and rapidly testing the waters with the latest models as soon as they come out. In tandem, we're seeing robust growth of our AI monitoring solution. This underscores that as AI is ingrained in their businesses, our customers are realising they need to ensure model reliability, accuracy, compliance, and cost efficiency."

The report highlights that enterprises have been quick to adopt OpenAI's latest releases. ChatGPT-4o and ChatGPT-4o mini emerged as the primary models in use, with developers making near-immediate transitions between versions as new capabilities and improvements are launched. Notably, there has been a pattern of rapid migration from ChatGPT-3.5 Turbo to ChatGPT-4.1 mini since April, indicating a strong developer focus on performance improvements and features, often taking precedence over operational cost savings.

Broadening model experimentation
The findings also suggest a trend toward greater experimentation, with developers trying a wider array of AI models across applications. While OpenAI remains dominant, Meta's Llama ranked second in terms of LLM tokens processed among New Relic customers. There was a 92% increase in the number of unique models used within AI applications in the first quarter of 2025, underlining growing interest in open-source, specialised, and task-specific solutions. This diversification, although occurring at a smaller scale compared with OpenAI models, points to an evolving AI ecosystem.

Growth in AI monitoring
As the diversity of model adoption increases, the need for robust AI monitoring solutions has also grown. Enterprises continue to implement unified platforms to monitor and manage AI systems, with New Relic reporting sustained 30% quarter-over-quarter growth in the use of its AI Monitoring solution since its introduction last year. This growth reflects a drive among businesses to address concerns such as reliability, accuracy, compliance, and cost as AI systems become more embedded in day-to-day operations.

Programming language trends
The report notes that Python has solidified its status as the preferred programming language for AI applications, recording nearly 45% growth in adoption since the previous quarter. A second language follows closely behind Python in terms of both volume of requests and adoption rates. Java, meanwhile, has experienced a significant 34% increase in use for AI applications, suggesting a rise in production-grade, Java-based LLM solutions within large enterprises.

Research methodology details
The AI Unwrapped: 2025 AI Impact Report's conclusions are drawn from aggregated and de-identified usage statistics from active New Relic customers. The data covers activity from April 2024 to April 2025, offering a representative view of current AI deployment and experimentation trends across a substantial commercial user base.