
Latest news with #LightbitsLabs

Lightbits Labs Closes Q1-2025 with Record-Breaking Growth

Yahoo

15-04-2025

  • Business
  • Yahoo


Accelerated demand for infrastructure modernization modeled on hyperscale architecture, with AI-ready storage playing a pivotal role

SAN JOSE, Calif., April 15, 2025--(BUSINESS WIRE)--Lightbits Labs (Lightbits®), the inventor of the NVMe® over TCP protocol, offering next-gen software-defined storage for modern workloads, today announced record-breaking growth for Q1'25. This milestone reflects strong global demand for infrastructure modernization that models cloud operations, enabled by high-performance block storage offering flexibility, reliability, and cost-efficiency at scale.

The breakout quarter for Lightbits, marked by a 4.8X increase in software sales and a 2.9X increase in average deal size, was fueled by a surge in new customers and strong customer loyalty, as evidenced by an impressive 2X year-over-year license increase. New account growth was particularly strong among financial services, service provider, and e-commerce organizations that require high performance and low latency at massive scale.

A standout use case for Lightbits was among AI cloud service providers. Crusoe expanded its use of Lightbits to power high-performance, multi-petabyte-scale AI cloud services. Elastx implemented Lightbits to support secure, scalable, and sustainable OpenStack, Kubernetes, and AI cloud services. And cloud company Nebul uses Lightbits to underpin its high-performance, cost-efficient data platform for enterprise AI deployments.

"The quarter close marked significant progress financially and strategically," said Eran Kirzner, CEO and co-founder of Lightbits Labs. "Our growth is a direct result of the trust and value customers place in Lightbits' solutions, delivering unmatched performance and efficiency for modern applications in containerized and virtualized environments at scale. We now service Fortune 500 financial institutions, as well as some of the world's largest e-Commerce platforms and AI cloud companies."

Lightbits offers best-in-class software-defined storage that redefines performance and efficiency for open source environments like Kubernetes, KVM, OpenShift, and OpenStack, delivering the industry's best price/performance for AI/ML, analytics, and transactional workloads at scale. The storage software scales to hundreds of petabytes and delivers up to 75 million IOPS with consistent sub-millisecond tail latency under heavy load. The NVMe over TCP architecture uses resources more efficiently with less proprietary hardware, simplifying storage management and requiring 5X less hardware than Ceph Storage, which reduces energy consumption and supports sustainability strategies. To support mixed-workload environments, a single Lightbits cluster provides multi-tenancy with Quality of Service capabilities to prevent resource hogging.

"Legacy storage infrastructure can and will impact application performance of data-driven environments. Thus, storage is fundamental and must be the first consideration of any modernization effort," said Matt Kimball, Vice President & Principal Analyst at Moor Insights & Strategy. "As more organizations shift to AI and real-time data workloads, the importance of flexible, disaggregated storage solutions becomes critical. Scale matters, and performant scale is even more important. Companies like Lightbits Labs deliver performance, scale, and cost savings realized by some of the largest organizations."
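For context on how the NVMe over TCP data path described above is consumed on the host side, the sketch below attaches a remote NVMe/TCP target with the standard Linux nvme-cli initiator, driven from Python. This is a hedged, generic illustration only: the target address, port, and NQN are hypothetical placeholders, nvme-cli and root privileges are assumed, and it is not the Lightbits-specific provisioning or management workflow.

```python
# Generic NVMe/TCP attach sketch using the standard Linux nvme-cli initiator.
# Address, port, and NQN are placeholders; nvme-cli and root privileges assumed.
import subprocess

TARGET_ADDR = "192.0.2.10"                       # placeholder storage server IP
TARGET_PORT = "4420"                             # standard NVMe over Fabrics port
TARGET_NQN = "nqn.2016-01.com.example:subsys1"   # placeholder subsystem NQN


def connect_nvme_tcp() -> None:
    """Connect this host to the NVMe/TCP subsystem, then list visible devices."""
    subprocess.run(
        ["nvme", "connect", "-t", "tcp",
         "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN],
        check=True,
    )
    subprocess.run(["nvme", "list"], check=True)  # attached namespaces appear as block devices


if __name__ == "__main__":
    connect_nvme_tcp()
```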
Lightbits Labs' success is further amplified by its growing global partner network, highlighted by several announcements last year:

  • Lightbits Labs joined the Mirantis Partner Program, providing scalable and resilient high-performance storage for Kubernetes
  • Lightbits Certified on Oracle Cloud Infrastructure

"We're seeing a consistent pattern of engagement with customers finding that other software-defined storage can only accommodate low and middle-tier workloads. They adopt Lightbits for tier 1 workloads, and then we move downstream to their utility tier, as well. And customers seeking VMware alternatives like Lightbits for its seamless integrations with OpenShift and Kubernetes to enable their infrastructure modernization," added Rex Manseau, Chief Revenue Officer of Lightbits Labs. "The unmatched capabilities of Lightbits, combined with the compatibility with common orchestration environments, make it an ideal choice for organizations and service providers who are supporting diverse performance-sensitive workloads at scale to capitalize on rapidly expanding business opportunities."

Recently, the company's solutions and commitment to excellence received industry recognition through many prestigious awards, including:

  • CRN's Storage 100
  • CRN's 50 Coolest Software-Defined Storage Vendors
  • Data Breakthrough Award
  • SDC and Cloud Award

Lightbits also received analyst recognition from GigaOm, positioned as a Fast-Moving Leader for Primary Storage in its 2024 Primary Storage Report, further cementing its reputation in the market.

Looking ahead, Lightbits Labs is poised to expand its global install base, prioritizing key markets across the Americas, Europe, and other high-growth regions. With more new product and partnership announcements in the pipeline for later this year, the company continues to innovate, expand its footprint in the storage market, and intensify its focus on delivering cutting-edge solutions for broader workload coverage. In 2025, you can connect with Lightbits software-defined storage experts at these industry events: Red Hat Summit, StackConf, and KubeCon. Visit our website to learn more about why leading organizations choose Lightbits software-defined storage or book a product demonstration today.

About Lightbits Labs

Lightbits Labs® (Lightbits) invented the NVMe over TCP protocol and offers best-in-class software-defined block storage that enables data center infrastructure modernization for organizations building a private or public cloud. Built from the ground up for consistent low latency, scalability, resiliency, and cost-efficiency, Lightbits software delivers the best price/performance for real-time analytics, transactional, and AI/ML workloads. Lightbits Labs is backed by enterprise technology leaders [Cisco Investments, Dell Technologies Capital, Intel Capital, Lenovo, and Micron] and is on a mission to deliver the fastest and most cost-efficient data storage for performance-sensitive workloads at scale. To learn more about Lightbits Labs, visit and follow Lightbits Labs on LinkedIn, X, Facebook, Instagram, and YouTube.

Lightbits and Lightbits Labs are registered trademarks of Lightbits Labs, Ltd. All trademarks and copyrights are the property of their respective owners.

View source version on

Contacts
Lightbits PR Contact: Carol Platz, pr@

20 Ways Engineering Teams Can Optimize Workloads For Energy Efficiency

Forbes

09-04-2025

  • Business
  • Forbes


Energy efficiency is a growing priority in the engineering space, and teams must find creative ways to optimize workloads without compromising performance. Small changes, like redistributing work in real time or leveraging cloud-based solutions, can lead to significant reductions in energy consumption.

To help you make your team more energy efficient, Forbes Technology Council members weigh in with effective approaches to reduce energy consumption and maximize performance. These expert insights can help improve team sustainability without sacrificing speed or functionality.

1. Dynamic workload orchestration improves energy efficiency by distributing workloads based on real-time demand, hardware efficiency and power availability. Using AI-driven scaling, resource pooling and load balancing, teams can minimize idle compute power, reduce energy waste and maintain peak performance without compromising system reliability or responsiveness. - Nicola Sfondrini, PWC

2. I'm seeing more organizations investigating the NVMe over TCP storage protocol, which is an enabler of cloud operations models. It provides high performance and consistently low latency at scale for performance-intensive workloads while reducing hardware and heat transfer in data centers, which ultimately improves energy efficiency. - Abel Gordon, Lightbits Labs

3. One practical way is to employ intelligent workload scheduling and auto-scaling. Cloud-native solutions allow teams to scale compute resources up or down based on demand, ensuring optimal utilization and energy efficiency. Implementing efficient coding practices and serverless architectures can also lead to significant energy savings. - Preetpal Singh, Xebia (see the auto-scaling sketch after this list)

4. Develop An Energy Management System
The goal is to develop a dynamic Energy Management System (EMS). In industry, EMS or BEMS collect energy consumption data, which, when analyzed with machine learning algorithms, optimizes process performance and efficiency. Additionally, this data supports predictive maintenance, preventing inefficiencies and ensuring seamless operations. - Ilker Kalali, Pirelli Tire North America

5. Break Down Projects Into Small Components
Focus on simplicity and optimization. Engineering teams can leverage modular design, breaking down projects into smaller, more efficient components. This minimizes wasted resources and streamlines workflows, allowing for high performance without excess. Less is more—it's about doing more with less, which leads to both energy and resource efficiency. - Oleg Sadikov, DeviQA

6. Use Containerization And Automation
Engineering teams can optimize workloads by using containerization, like Docker, to package applications for consistent deployment, and orchestration, such as Kubernetes, to automate scaling and management. This approach enhances resource utilization, speeds up deployments and increases reliability through automated workload distribution. - Ambika Saklani Bhardwaj, Walmart Inc.

7. Implement A Resource Monitoring And Optimization Framework
To maximize energy efficiency, companies must strategize the implementation of a resource monitoring and optimization framework, which their engineering teams can then adopt. This framework can provide AI-driven automated recommendations, such as right-sizing instances, standardized configurations and continuous feedback to ensure consistent and effective energy savings. - Sibin Thomas, Google
8. Optimize Storage And Scaling With AWS
Our services run on AWS, optimizing workloads through dynamic resource allocation. AWS Auto Scaling ensures EC2 instances and RDS databases are rightsized, preventing over-provisioning and improving efficiency. For storage, we use S3 Intelligent-Tiering to automatically shift infrequently accessed data to lower-energy storage classes, reducing energy consumption without impacting performance. - Jason Penkethman, Simpro Group (see the lifecycle-rule sketch after this list)

9. Utilize GenAI For Quality Checks And Support
Engineering teams can reduce workloads by using GenAI to automate time-consuming tasks like quality checks and development support. By automating these processes, you can supercharge your employees so they save time, use resources more efficiently and keep performance high without sacrificing the quality of their work or burning out. - Adam Lieberman, Finastra

10. Understand Your Technology's Purpose To Identify Key Metrics
Engineers leverage observability but often struggle with efficiency. A "collect everything" approach creates a data deluge that slows tools, fuels false alarms and extends mean time to repair. Instead, start with your technology's purpose—why it exists—and drill down logically to identify the right metrics. A targeted approach delivers better insights, reduces noise and improves performance. - Bill Hineline, Chronosphere

11. Automate Metadata-Driven Data Tiering
Metadata analysis can identify and automate data placement across storage tiers, reducing redundant storage and aligning resources with data value. This approach minimizes energy-intensive over-provisioning by archiving cold or obsolete data to cost-efficient, low-power tiers while ensuring access to critical data without sacrificing performance. - Carl D'Halluin, Datadobi

12. Optimize Network Traffic
Start by optimizing traffic to your network, security and application tools in your hybrid multi-cloud infrastructure. By doing so, and increasing visibility into all network traffic, you can significantly reduce traffic volumes. This then reduces power consumption massively, improves efficiency, supports your bottom line and improves your carbon footprint. - Shane Buckley, Gigamon

13. Eliminate Redundant Computations
Optimizing software to eliminate redundant computations and using energy-efficient hardware, like GPUs for parallel tasks, significantly enhances performance per watt. This reduces power consumption without compromising output. By aligning software and hardware optimization, systems can achieve high performance while minimizing energy use, ensuring both efficiency and sustainability. - Dhivya Nagasubramanian, U.S. Bank

14. Switch Long-Running Containers To Serverless Functions
Switching from long-running containers to event-based serverless functions can not only improve energy efficiency but also give a speed boost while cutting costs. Today's WebAssembly-based serverless functions are much faster than AWS Lambda and the first-gen tooling. - Matt Butcher, Fermyon Technologies

15. Consolidate Data Into Fewer Streamlined Workflows
Consolidate fragmented data operations into fewer, more streamlined workflows, reducing context switching, improving cache efficiency and lowering compute overhead with more orchestration. Not only does this reduce redundant processing, but it also gives you more visibility into performance bottlenecks so you can optimize them in detail. - Sandro Shubladze, Datamam
16. Implement Auto-Scaling With Dynamic Thresholds
Implement machine-learning-driven auto-scaling with dynamic thresholds instead of static rules. By predicting workload patterns, teams can proactively adjust resources before they're needed while maintaining performance. This precise resource allocation reduces compute usage, cooling requirements and costs—all without sacrificing business outcomes. - Kim Bozzella, Protiviti

17. Forecast Resource Needs And Allocate Tasks
Implementing proactive workload scheduling with AI-driven predictive analytics is key. By forecasting resource needs and dynamically allocating tasks to optimal time slots and hardware, engineering teams can minimize energy consumption during low-demand periods, maintaining top performance while significantly improving energy efficiency and reducing costs. - Aravind Nuthalapati, Microsoft

18. Find Low Carbon Footprint Regions On Google Cloud
On Google Cloud, the carbon footprint of individual regions (each of which has unique power utilization efficiency and upstream power sources) is made easy to find with a little green leaf and a 'Low CO2' highlight. Pick those regions, and do everything the same, and you're making a positive difference. - Miles Ward, SADA, An Insight company

19. Use Cloud-Native Tools To Analyze Configurations And Metrics
Engineering teams can use cloud-native tools like AWS Compute Optimizer to analyze resource configurations and utilization metrics. It provides rightsizing recommendations, balancing cost and performance while ensuring capacity needs. By leveraging insights from historical and projected usage data, teams can strategically adjust workloads to enhance efficiency without sacrificing performance. - John Anand Lourdusamy, Capital One

20. Find A Carbon-Aware Workload Scheduler
One overlooked way to optimize workloads for energy efficiency is to adopt GreenOps, a cloud sustainability practice that integrates real-time carbon-aware workload scheduling. By dynamically shifting non-urgent tasks to off-peak hours or renewable-powered regions, engineering teams can cut emissions, lower energy costs and improve sustainability without sacrificing performance. - Jabin Geevarghese George, Tata Consultancy Services (see the carbon-aware sketch after this list)
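The three tips flagged above lend themselves to small illustrations. The sketches below are hedged, minimal Python examples under stated assumptions, not production implementations from the contributors quoted.

First, the demand-based auto-scaling idea from items 3 and 16: size a worker pool from an observed queue depth rather than a fixed schedule. The metric source and the scaling call are hypothetical placeholders for whatever queue and compute API a team actually uses.

```python
# Minimal auto-scaling sketch: scale a worker pool from observed queue depth.
# read_queue_depth() and scale_to() are placeholders for a real metrics source
# and a real scaling API (cloud autoscaler, Kubernetes, etc.).
import math
import random
import time


def read_queue_depth() -> int:
    """Placeholder metric source; returns a simulated backlog size."""
    return random.randint(0, 500)


def desired_workers(queue_depth: int, per_worker_throughput: int = 50,
                    min_workers: int = 1, max_workers: int = 20) -> int:
    """Match capacity to demand so idle workers are not left burning energy."""
    needed = math.ceil(queue_depth / per_worker_throughput)
    return max(min_workers, min(max_workers, needed))


def scale_to(count: int) -> None:
    """Placeholder scaling call; a real system would adjust the pool here."""
    print(f"scaling worker pool to {count}")


if __name__ == "__main__":
    for _ in range(5):          # one short reconciliation loop for demonstration
        scale_to(desired_workers(read_queue_depth()))
        time.sleep(1)
```

Second, the storage tiering mentioned in item 8. The snippet uses boto3 to attach a lifecycle rule that transitions objects to S3 Intelligent-Tiering after 30 days; the bucket name and rule ID are hypothetical, and valid AWS credentials are assumed. It is a generic illustration of the technique, not Simpro Group's configuration.

```python
# Sketch: lifecycle rule that moves objects to S3 Intelligent-Tiering after 30 days.
# Bucket name and rule ID are placeholders; requires boto3 and AWS credentials.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-data",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-infrequently-accessed",
                "Filter": {"Prefix": ""},        # apply to all objects
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```

Third, the carbon-aware scheduling described in item 20: defer a non-urgent job until grid carbon intensity falls below a threshold. The intensity feed is a stub; a real deployment would query a grid-carbon API or use the scheduler built into a GreenOps platform.

```python
# Sketch: run a non-urgent job only when grid carbon intensity is low enough.
# current_carbon_intensity() is a stub standing in for a real carbon-intensity feed.
import time


def current_carbon_intensity() -> float:
    """Placeholder: grams of CO2 per kWh; a real feed would be queried here."""
    return 180.0          # stubbed low value so the demo runs immediately


def run_when_green(job, threshold_g_per_kwh: float = 200.0,
                   poll_seconds: int = 900, max_waits: int = 8) -> None:
    """Defer the job while the grid is dirty, but never drop it entirely."""
    for _ in range(max_waits):
        if current_carbon_intensity() <= threshold_g_per_kwh:
            break
        time.sleep(poll_seconds)
    job()


if __name__ == "__main__":
    run_when_green(lambda: print("running deferred batch job"))
```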
