Latest news with #DanTwing


Business Wire
24-06-2025
- Business
- Business Wire
Komodor Redefines Kubernetes Cost Optimization with Holistic Automation Based on Performance, Risk and Right-Sizing
TEL AVIV, Israel & SAN FRANCISCO--(BUSINESS WIRE)-- Komodor, the platform for automating Kubernetes operations, health, performance, and cost management, today announced it has added advanced new cost optimization capabilities to its Kubernetes management platform. These new features enable organizations to intelligently reduce cloud spend while maintaining performance and reliability across their entire Kubernetes estate.

Most teams are overspending on compute, yet manually right-sizing workloads is nearly impossible due to a lack of expertise, the large number of factors to consider, and the potential risks of making changes. Meanwhile, traditional cost optimization tools overlook business impacts on application performance, developer velocity, and platform reliability. Komodor takes a platform-centric approach, enabling engineering teams to analyze and visualize Kubernetes resources, application runtime data, logs, and changes, along with third-party integrations, to automate smarter, risk-aware cost optimization decisions.

'Cloud didn't make hardware free—it made it a metered cost. Komodor understands that Kubernetes management isn't just about scale anymore, it's about cost-aware control,' said Dan Twing, President & COO, Enterprise Management Associates. 'Their intelligent automation helps teams optimize spend without compromising performance, which is exactly what's needed in today's complex, cloud-native environments.'

Containing Cost, not Performance

As Kubernetes workloads grow in size and complexity, so does cloud spend. Engineering teams often over-provision infrastructure 'just in case,' resulting in idle resource waste. Meanwhile, mission-critical workloads can't be evicted, limiting autoscaler efficiency. But optimizing for cost alone often leads to misconfigured workloads, scaling failures, or reduced application reliability. Without a unified view of cost across clusters, namespaces, and environments, teams struggle to understand where savings can be safely achieved. Open source autoscalers like Karpenter and Cluster Autoscaler are helpful but limited, since they don't account for workload diversity, service criticality, or real-time performance metrics.

The Komodor platform's latest enhancements extend and augment native autoscaling with intelligent pod placement for bin-packing optimization, as well as real-time workload right-sizing, delivering up to 40-60% in additional savings, all without compromising stability or speed.

'In large-scale Kubernetes environments, cutting costs without visibility into application behavior is a recipe for downtime,' said Itiel Shwartz, Co-Founder & CTO of Komodor. 'What organizations need is a way to optimize cost and performance—across the full scope of infrastructure and application operations. That's what we've built.'
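To make the over-provisioning problem concrete, here is a minimal sketch (illustrative only, not Komodor's implementation) that uses the official Kubernetes Python client and the metrics.k8s.io API (which assumes metrics-server is installed and a recent client version that ships parse_quantity) to flag containers whose CPU requests dwarf their live usage; the 20% utilization threshold is an arbitrary example:

```python
# Flag containers using less than 20% of their requested CPU.
# Illustrative sketch only; requires metrics-server in the cluster.
from kubernetes import client, config
from kubernetes.utils import parse_quantity  # converts "100m"/"256Mi"/"12n" to Decimal

config.load_kube_config()
core = client.CoreV1Api()
custom = client.CustomObjectsApi()

# Live per-container usage from the metrics.k8s.io API.
usage = {}
for pod in custom.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")["items"]:
    for c in pod["containers"]:
        key = (pod["metadata"]["namespace"], pod["metadata"]["name"], c["name"])
        usage[key] = parse_quantity(c["usage"]["cpu"])

# Compare each container's declared CPU request against what it actually uses.
for pod in core.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        requests = (c.resources.requests or {}) if c.resources else {}
        if "cpu" not in requests:
            continue
        requested = parse_quantity(requests["cpu"])
        used = usage.get((pod.metadata.namespace, pod.metadata.name, c.name))
        if used is not None and requested > 0 and used / requested < 0.2:
            print(f"{pod.metadata.namespace}/{pod.metadata.name}/{c.name}: "
                  f"using {used:.3f} of {requested} CPU cores (<20%)")
```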
New Cost Optimization Capabilities

The new capabilities available in the Komodor platform help teams transition from static resource planning to dynamic, real-time cost optimization—empowering them to eliminate resource inefficiencies without increasing risk. These include:

Real-Time Spend & Allocation Visibility
Unified cost views across cloud, hybrid, and on-prem environments with drill-down filters for clusters, services, and namespaces—for improved team accountability and smarter decision-making.

Intelligent Workload Right-Sizing
AI-driven resource recommendations based on real-world usage across CPU, memory, throttling, and scheduling signals help prevent both overprovisioning and underperformance.

Advanced Bin-Packing & Pod Placement
Komodor actively resolves placement blockers (e.g., Pod Disruption Budgets and affinity rules) and extends autoscaler functionality to improve node utilization, reduce fragmentation, and accelerate scaling.

Autopilot Mode with Guardrails
Continuous, unattended optimization with customizable profiles (Conservative, Moderate, Aggressive) and safety thresholds ensures changes are always safe, traceable, and aligned with business priorities.

Smart Headroom Management
Intelligently reserves and manages extra compute resources (CPU and memory) across nodes to reduce provisioning delays and improve responsiveness during spikes, deployments, or rollouts—without overprovisioning.

Availability

The Komodor platform with advanced cost optimization capabilities is available immediately from Komodor and its global partner network. To schedule a demo or learn more about how Komodor can help your organization achieve performance-aligned cost savings, visit the Komodor website.

About Komodor

Komodor reduces the cost and complexity of managing large-scale Kubernetes environments by automating day-to-day operations, as well as health and cost optimization. The Komodor Platform proactively identifies risks that can impact application availability, reliability, and performance, while providing AI-assisted root-cause analysis, troubleshooting, and automated remediation playbooks. Fortune 500 companies in a wide range of industries, including financial services and retail, rely on Komodor to empower developers, reduce TicketOps, and harness the full power of Kubernetes to accelerate their business. The company has received $67M in funding from Accel, Felicis, NFX Capital, OldSlip Group, Pitango First, Tiger Global, and Vine Ventures. For more information, visit the Komodor website, join the Komodor Kommunity, and follow us on LinkedIn and X.
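As a rough illustration of the usage-based right-sizing idea described above (a hypothetical sketch, not Komodor's algorithm, which per the announcement also weighs throttling and scheduling signals), a recommender can pin a container's CPU request to a high percentile of observed usage plus a safety margin:

```python
import math

def recommend_cpu_request(usage_cores, percentile=0.95, headroom=1.15):
    """Pick a CPU request (in cores) covering `percentile` of observed usage
    samples, padded by a `headroom` multiplier to absorb short bursts.
    The 0.95/1.15 defaults are arbitrary illustrative choices."""
    if not usage_cores:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_cores)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return ordered[idx] * headroom

# Example: a service that mostly idles near 0.1 cores with an occasional burst.
samples = [0.08, 0.10, 0.12, 0.09, 0.11, 0.40, 0.10, 0.13, 0.09, 0.10]
print(f"recommended CPU request: {recommend_cpu_request(samples):.2f} cores")
```

Setting the request near a high usage percentile rather than the observed peak is what reclaims idle capacity; the headroom multiplier is the guardrail that keeps the workload from being throttled during bursts.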


Techday NZ
18-06-2025
- Business
- Techday NZ
Latent AI unveils platform to speed & secure edge AI rollout
Latent AI has announced the launch of Latent Agent, an edge AI platform designed to simplify the management and security of deploying artificial intelligence models at the edge. Built upon the Latent AI Efficient Inference Platform (LEIP), Latent Agent is designed to automate optimisation and deployment tasks, enabling developers to iterate, deploy, monitor, and secure edge AI models at scale. The company states that the new platform addresses the complexity issues that have made enterprise adoption of edge AI challenging.

Complexity of traditional MLOps

Traditional machine learning operations (MLOps) force developers to manually optimise models for specific hardware, often without a comprehensive understanding of device constraints. This can create pressure on teams, as optimisation workflows typically demand multiple specialists per hardware pipeline, and the complexity multiplies with each additional hardware target. According to Latent AI, this challenge has extended go-to-market timelines to as much as twelve weeks and led to substantial resource overhead for many organisations, particularly those looking to scale across diverse edge devices such as drones and sensors.

"The rapid shift to edge AI has exposed gaps in traditional MLOps, slowing innovation and scalability," said Sek Chai, CTO and Co-founder of Latent AI. "Latent Agent eliminates the model-to-hardware guessing game, replacing weeks-long deployment cycles and scarce expertise with intelligent automation. This is a game-changer for enterprises racing to stay competitive."

Platform features

Latent Agent aims to streamline the lifecycle of edge AI, spanning exploration, training, development, and deployment across a range of hardware platforms. A key feature is its natural language interface, which lets developers set their AI requirements while receiving model-to-hardware recommendations from Latent AI Recipes, a knowledge base that draws on 12TB of telemetry data compiled from over 200,000 device hours. Within the platform, a Visual Studio Code (VS Code) extension has been introduced to incorporate these agentic capabilities into developer workflows, providing an interface for requirement gathering and deployment.

Other capabilities highlighted include an adaptive model architecture that can autonomously detect performance drift in deployed models and take remedial actions, such as retraining or over-the-air updates, without human intervention. Latent Agent's Recipes feature leverages automatically benchmarked model-to-hardware configurations, aiming to enable faster iteration and model deployment. The company states this accelerated approach will remove bottlenecks caused by manual processes and facilitate secure management of AI infrastructure at scale.

"The biggest barrier to edge AI at scale has always been the complexity of optimising models for constrained hardware environments," said Dan Twing, President and COO of Enterprise Management Associates, and Principal Analyst for Intelligent Automation. "Latent Agent addresses that challenge head-on. It streamlines the hardest part of edge AI—getting high-performance models running on diverse devices—so teams can move faster and scale confidently."
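The drift-handling loop described above can be pictured with a minimal sketch (hypothetical logic; Latent Agent's actual detection and over-the-air update machinery are not public): compare a recent window of model accuracy against a baseline and fire a remediation callback when it degrades past a tolerance.

```python
from collections import deque

class DriftMonitor:
    """Toy drift detector: tracks rolling accuracy over the last `window`
    predictions and triggers `on_drift` when it falls below baseline - tolerance."""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05, on_drift=None):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window)
        self.tolerance = tolerance
        self.on_drift = on_drift or (lambda acc: print(f"drift detected: {acc:.3f}"))

    def record(self, prediction_correct: bool):
        self.window.append(1.0 if prediction_correct else 0.0)
        if len(self.window) == self.window.maxlen:
            accuracy = sum(self.window) / len(self.window)
            if accuracy < self.baseline - self.tolerance:
                # A real deployment would kick off retraining or an
                # over-the-air model update here, not just a callback.
                self.on_drift(accuracy)

# Each inference outcome observed in the field feeds the monitor:
monitor = DriftMonitor(baseline_accuracy=0.92, window=200,
                       on_drift=lambda acc: print(f"accuracy fell to {acc:.3f}; requesting retrain"))
monitor.record(prediction_correct=True)
```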
Business focus

Latent Agent is being presented as a tool to accelerate development timelines, allow autonomous operations, and support scaling. By reducing the need for deep machine learning or hardware expertise, the company claims deployment times can be shortened from twelve weeks to a matter of hours.

The agentic platform's compile-once, deploy-anywhere capability is said to support any chip, operating system, or form factor, thereby assisting in the management of thousands of edge devices simultaneously. Furthermore, Latent Agent incorporates security measures such as model encryption, watermarking, and compliance with Department of Defense (DoD) security standards, designed to protect sensitive deployments.

"At Latent AI, we've always believed that edge AI should be as simple to deploy as it is powerful to use," said Jags Kandasamy, CEO and Co-founder of Latent AI. "Latent Agent represents the natural evolution of our mission—transforming edge AI from a specialised engineering challenge into an accessible conversation. By combining our proven optimisation expertise with agentic intelligence, we're not just making edge AI faster; we're making it possible for any developer to achieve what previously required a team of ML experts."

The new platform is now available to organisations seeking to improve deployment speed, operational autonomy, scalability, and security for edge AI models.