
Latest news with #inference

Axelera AI Accelerators Smoke Competitors In Machine Vision Research Study

Forbes

5 days ago

  • Business
  • Forbes


Image: Axelera CEO Fabrizio Del Maffeo holds the company's PCIe AI accelerator.

As AI-accelerated workloads proliferate across edge environments, from smart cities to retail and industrial surveillance, choosing the right inference accelerator has become a mission-critical decision for many businesses. In a new competitive benchmark study conducted by our analysts at HotTech Vision and Analysis, we put several of today's leading edge AI acceleration platforms to the test in a demanding, real-world scenario: multi-stream computer vision inference processing of high-definition video feeds. The study evaluated AI accelerators from Nvidia, Hailo, and Axelera AI across seven object detection models, including SSD MobileNet and multiple versions of YOLO, to simulate a surveillance system with 14 concurrent 1080p video streams. The goal was to assess the real-time throughput, energy efficiency, deployment complexity, and detection accuracy of these top accelerators, all of which speak to a product's overall TCO value proposition.

Measuring AI Accelerator Performance In Machine Vision Applications

All of the accelerators tested provided significant gains over CPU-only inference, some up to 30x faster, underscoring how vital dedicated hardware accelerators have become for AI inference. Among the tested devices, PCIe and M.2 accelerators from Axelera showed consistently stronger throughput across every model, especially with heavier YOLOv5m and YOLOv8l workloads. Notably, the Axelera PCIe card maintained performance levels where several other accelerators tapered off, and it consistently smoked the competition across all model implementations tested.

Chart: SSD MobileNet v2 machine vision AI model inferencing test results show Axelera in the lead.

Chart: YOLOv5s machine vision AI model results show the Axelera PCIe card winning hands-down, though Nvidia is more competitive.

That said, Nvidia's higher-end RTX A4000 GPU maintained competitive performance in certain tests, particularly with smaller models like YOLOv5s. Hailo's M.2 module offered a compact, low-power alternative, though it trailed in raw throughput. Overall, the report illustrates that inference performance can vary significantly depending on the AI model and hardware pairing, an important takeaway for integrators and developers designing systems for specific image detection workloads. It also shows how dominant Axelera's Metis accelerators are in this very common AI inference use case versus major incumbent competitors like NVIDIA.

Power consumption is an equally important factor, especially in edge AI deployments, where thermal and mechanical constraints and operational costs can limit design flexibility. Using per-frame energy metrics, our research found that all accelerators delivered improved efficiency over CPUs, with several using under one joule per frame of inferencing.

Chart: SSD MobileNet v2 power efficiency results show Axelera's solutions winning in a big way.

Chart: YOLOv5s power efficiency results show Axelera's solutions ahead, but Nvidia and Hailo close the gap.

Here, Axelera's solutions outperformed competitors in all tests, offering the lowest energy use per frame across all AI models tested. NVIDIA's GPUs closed the gap somewhat in YOLO inferencing models, while Hailo maintained respectable efficiency, particularly for its compact form factor. The report highlights that AI performance gains do not always have to come at the cost of power efficiency, depending on the architecture, models, and workload optimizations employed.
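For readers who want to translate these efficiency results into their own capacity planning, the per-frame energy metric is simple to reproduce: average board power divided by aggregate frame throughput. Below is a minimal sketch; the wattage and frame-rate values are illustrative placeholders, not measurements from the study.

```python
# Minimal sketch of the per-frame energy metric (joules per frame).
# All numbers here are made-up placeholders, not figures from the
# HotTech Vision and Analysis study.

def joules_per_frame(avg_power_watts: float, throughput_fps: float) -> float:
    """Energy per inferred frame: average board power / aggregate throughput."""
    return avg_power_watts / throughput_fps

# Hypothetical accelerator: 15 W sustained while serving 14 concurrent
# 1080p streams at 30 FPS each, i.e. 420 frames inferred per second.
streams, fps_per_stream = 14, 30.0
total_fps = streams * fps_per_stream
print(f"{joules_per_frame(15.0, total_fps):.3f} J/frame")  # ~0.036, well under 1 J
```

A device that sustains that throughput at higher power, say 60 W, lands near 0.14 J/frame, which is why per-frame energy separates accelerators more usefully than raw wattage alone.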
Beyond performance and efficiency, our report also looked at the developer setup process, an often under-appreciated element of total deployment cost. Here, platform complexity diverged more sharply. Axelera's SDK provided a relatively seamless experience with out-of-the-box support for multi-stream inference and minimal manual setup. Nvidia's solution required more hands-on configuration due to model compatibility limitations with DeepStream, while Hailo's SDK was Docker-based but required model-specific pre-processing and compilation. The takeaway: development friction can vary widely between platforms and should factor into deployment timelines, especially for teams with limited AI or embedded systems expertise. Here again, Axelera demonstrated an out-of-box experience and setup simplicity that the other solutions we tested could not match.

Our study also analyzed object detection accuracy using real-world video footage. While all platforms produced usable results, differences in detection confidence and object recognition emerged. Axelera's accelerators showed a tendency to detect more objects and draw more bounding boxes across test scenes, likely a result of model tuning and post-processing defaults that seemed more refined. Still, our report notes that all tested platforms could be further optimized with custom-trained models and threshold adjustments; a simple sketch of such threshold tuning follows at the end of this article. As such, out-of-the-box accuracy may matter most for proof-of-concept development, whereas more complex deployments might rely on domain-specific model refinement and tuning.

Image: Axelera AI's Metis PCI Express card and M.2 module AI inference accelerators.

Our AI research and performance validation report underscores the growing segmentation in AI inference hardware. On one end, general-purpose GPUs like those from NVIDIA offer high flexibility and deep software ecosystem support, which is valuable in heterogeneous environments. On the other, dedicated inference engines like those from Axelera provide compelling efficiency and performance advantages for more focused use cases. As edge AI adoption grows, particularly in vision-centric applications, demand for energy-efficient, real-time inference is accelerating. Markets such as logistics, retail analytics, transportation, robotics, and security are driving that need, with form factor, power efficiency, and ease of integration playing a greater role than raw compute throughput alone.

While this round of testing (you can find our full research paper here) favored Axelera on several fronts, including performance, efficiency, and setup simplicity, this is not a one-size-fits-all outcome. Platform selection will depend heavily on use case, model requirements, deployment constraints, and available developer resources. What the data does make clear is that edge AI inference is no longer a market exclusive to GPU acceleration. Domain-specific accelerators are proving they can compete, and in some cases lead, in the metrics that matter most for real-world deployments.
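As promised above, here is a minimal, self-contained sketch of the kind of post-processing confidence filter every platform applies after inference, which explains why bounding-box counts differ across SDK defaults. The detections below are invented for illustration; real SDKs expose this threshold through their own configuration.

```python
# Illustrative sketch: a post-processing confidence threshold changes how
# many bounding boxes a detector reports. Detections here are fabricated
# examples, not output from any of the tested platforms.

detections = [
    {"label": "person", "score": 0.91, "box": (120, 40, 260, 300)},
    {"label": "car",    "score": 0.55, "box": (400, 180, 640, 330)},
    {"label": "person", "score": 0.32, "box": (700, 60, 790, 220)},
]

def filter_by_confidence(dets, threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in dets if d["score"] >= threshold]

# A lower default threshold yields more boxes on the same frame.
print(len(filter_by_confidence(detections, 0.30)))  # 3 boxes
print(len(filter_by_confidence(detections, 0.50)))  # 2 boxes
```

This is one reason out-of-the-box accuracy comparisons should be read carefully: a platform that draws more boxes may simply ship a lower default threshold rather than a better model.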

SambaNova launches first Turnkey AI inference solution for data centers, deployable in 90 days

Zawya

5 days ago

  • Business
  • Zawya

PARIS --(BUSINESS WIRE/AETOSWire)-- SambaNova, a leader in next-generation AI infrastructure, today announced SambaManaged, the industry's first inference-optimized data center product offering, deployable in just 90 days, dramatically faster than the typical 18 to 24 months. Designed for rapid deployment, this modular product enables existing data centers to immediately stand up AI inference services with minimal infrastructure modification.

As global AI inference demands soar, traditional data centers grapple with lengthy deployment timelines of 18 to 24 months, extensive power requirements, and costly facility upgrades. SambaManaged addresses these critical barriers, enabling organizations to quickly launch profitable AI inference services leveraging existing power and network infrastructure.

'Data centers are struggling with power, cooling, and expertise challenges as AI demand grows,' said Abhi Ingle, Chief Product and Strategy Officer at SambaNova. 'SambaManaged delivers high-performance AI with just 10kW of air-cooled power and minimal infrastructure changes, making rapid deployment simple for any data center.'

Key Advantages for Data Centers and Cloud Providers:

  • Unmatched Efficiency: Sets a new industry benchmark for performance per watt, maximizing return on investment and reducing total cost of ownership.
  • Rapid Deployment: Launch a fully managed AI inference service in as little as 90 days, minimizing integration challenges and accelerating time to value.
  • Open Model Flexibility: Achieve lightning-fast inference with leading open-source models, ensuring no vendor lock-in and future-proof operations.
  • Modular, Scalable Design: Scale from small to large deployments with ease, including the capability to build a 1 MW 'Token Factory' (100 racks or 1,600 chips) or larger that scales with evolving business needs; a quick sanity check of those figures follows after this release.
  • Managed or Self-Service Options: Choose a fully managed service or take over operations as internal expertise grows, supported by a customizable developer/enterprise UI and flexible pricing models.

SambaManaged is already being adopted by a major US public company with a large power footprint. The platform will deliver the highest throughput on DeepSeek and similar models, empowering them to maximize inference revenue while optimizing Power Usage Effectiveness (PUE).

'While others talk about the future of AI, we're delivering it today,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'SambaManaged is a game-changer for organizations that want to accelerate their AI initiatives without compromising on speed, scale, or efficiency. Anywhere you have power and networking, we can bring your AI infrastructure online in record time.'

About SambaNova

SambaNova enables enterprises to rapidly deploy state-of-the-art generative AI capabilities. Headquartered in Palo Alto, California, SambaNova was founded in 2017 by industry veterans from Sun/Oracle and Stanford University. The company is backed by top-tier investors including SoftBank Vision Fund 2, BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, and Celesta.
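As the sanity check promised in the 'Token Factory' bullet: the quoted figures are internally consistent with the 10kW air-cooled rack mentioned in Ingle's quote. A sketch of the arithmetic, where the per-rack chip count is our inference rather than a stated spec:

```python
# Sanity-checking the 'Token Factory' figures quoted in the release:
# 1 MW across 100 racks with 1,600 chips implies the per-rack numbers below.

total_power_kw = 1000   # 1 MW total facility budget
racks = 100
chips = 1600

print(total_power_kw / racks)  # 10.0 kW per rack, matching the quoted 10kW figure
print(chips / racks)           # 16 chips per rack (our inference, not stated directly)
```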

Groq launches its first European data centre in Helsinki

Yahoo

6 days ago

  • Business
  • Yahoo

US-based artificial intelligence (AI) chipmaker Groq has unveiled a new data centre in Helsinki, Finland. The new facility will cater to the growing demand for AI inference services in the European region, offering reduced latency, quicker response times, and enhanced data governance. The data centre, Groq's first in Europe, has been established in partnership with US-based Equinix, a global digital infrastructure company, and further solidifies the association between the two companies, building on their previous site in Dallas, Texas. Jonathan Ross, CEO and founder of Groq, remarked that the new European data centre offers the lowest possible latency and ready infrastructure. "As demand for AI inference continues at an ever-increasing pace, we know that those building fast need more: more capacity, more efficiency, and with a cost that scales," he added. Regina Donato Dahlstrom, managing director for the Nordics at Equinix, described Finland as a standout choice for hosting the new data centre, attributing this to the country's sustainable energy policies, free cooling, and reliable power grid. "Combining Groq's cutting-edge technology with Equinix's global infrastructure and vendor-neutral connectivity solutions enables efficient AI inference at scale," she added. Headquartered in the US state of California, Groq builds AI accelerator application-specific integrated circuits (ASICs) and has around 500 employees globally. Last year, it announced an expansion in Saudi Arabia. "Groq launches its first European data centre in Helsinki" was originally created and published by Investment Monitor, a GlobalData owned brand.

SambaNova launches its AI Platform in AWS Marketplace

Zawya

30-05-2025

  • Business
  • Zawya

Dubai, United Arab Emirates -- SambaNova, the AI inference company delivering fast, efficient AI chips and high-performance models, today announced that its AI platform is now available in AWS Marketplace, a digital catalog that helps customers find, buy, deploy, and manage software, data products, and professional services from thousands of vendors. This availability allows organizations to seamlessly purchase and deploy SambaNova's fast inference services alongside their existing infrastructure in AWS.

This new availability marks a significant milestone in SambaNova's mission to make private, production-grade AI more accessible to enterprises, removing traditional barriers like vendor onboarding and procurement delays. By leveraging existing AWS relationships, organizations can now begin using SambaNova's advanced inference solutions with a few simple clicks, accelerating time to value while maintaining trusted billing and infrastructure practices.

'Enterprises face significant pressure to move rapidly from AI experimentation to full-scale production, yet procurement and integration challenges often stand in the way,' said Rodrigo Liang, CEO and co-founder of SambaNova. 'By offering SambaNova's platform in AWS Marketplace, we remove those obstacles, enabling organizations to access our industry-leading inference solutions instantly, using the procurement processes and cloud environment they already trust.'

Accelerating Access to High-Performance Inference

SambaNova's listing in AWS Marketplace gives customers the ability to:

  • Procure through existing AWS billing arrangements: no new vendor setup required.
  • Leverage SambaNova's fast, efficient inference performance, running open-source models like Llama 4 Maverick and DeepSeek R1 671B.
  • Engage securely via private connectivity, available through AWS PrivateLink for low-latency, secure integration between AWS workloads and SambaNova Cloud; a request sketch follows after this article.

'With the SambaNova platform running in AWS Marketplace, organizations gain access to secure, high-speed inference from the largest open-source models. Solutions like this will help businesses move from experimentation to full production with AI,' said Michele Rosen, Research Manager, Open GenAI, LLMs, and the Evolving Open Source, IDC.

This tight integration enables customers to deploy high-performance, multi-tenant inference solutions without the need to purchase or manage custom hardware, expanding SambaNova's reach into enterprise environments where time-to-value and IT friction have historically limited adoption.

Making High-Performance Inference More Accessible

With this listing in AWS Marketplace, SambaNova is meeting enterprise customers where they already are: within their trusted cloud environments and procurement frameworks. By removing onboarding friction and offering seamless integration, SambaNova makes it easier than ever for organizations to evaluate, deploy, and scale high-performance inference solutions. 'This makes it dramatically easier for customers to start using SambaNova: no new contracts, no long onboarding, just click and go,' said Liang.

Availability

SambaNova's inference platform is available immediately in AWS Marketplace. Enterprise customers can visit the SambaNova listing in AWS Marketplace to get started.

About SambaNova

Customers turn to SambaNova to quickly deploy state-of-the-art generative AI capabilities within the enterprise. Our purpose-built enterprise-scale AI platform is the technology backbone for the next generation of AI computing. Headquartered in Palo Alto, California, SambaNova Systems was founded in 2017 by industry luminaries, and hardware and software design experts from Sun/Oracle and Stanford University. Investors include SoftBank Vision Fund 2, funds and accounts managed by BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, Celesta, and several others.
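As the request sketch referenced above: for developers evaluating the listing, this is roughly what consuming the service could look like once procurement is complete. This is a hedged sketch only; the endpoint path, model name, and environment variable are assumptions patterned on OpenAI-compatible APIs, not confirmed details from the announcement.

```python
# Hedged sketch of calling a SambaNova Cloud chat endpoint after procuring
# access via AWS Marketplace. The URL path, model id, and env var name are
# illustrative assumptions; consult SambaNova's documentation for real values.
import json
import os
import urllib.request

url = "https://api.sambanova.ai/v1/chat/completions"  # assumed endpoint
payload = {
    "model": "DeepSeek-R1",  # illustrative model id, not a confirmed listing
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ['SAMBANOVA_API_KEY']}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

In a PrivateLink deployment, the same request would target a VPC endpoint hostname instead of the public URL, keeping inference traffic off the public internet.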
