
Shipping Slow? Here's How European Entrepreneurs Can Pick Up the Pace in an AI-First World.
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur Europe, an international franchise of Entrepreneur Media.
In 2025, even the largest software companies are recognizing that speed is the name of the game.
With innovation in the U.S. and China leaps and bounds ahead of Europe, speed is also being looked at across the continent as a lever to accelerate the pace of AI adoption. And in 2025, European policymakers are grappling with how to uphold the "EU's values-based regulatory model while catalyzing a homegrown AI industry."
This is particularly true in the AI-first world we live in, with consumers and business customers drawn to the enterprises that are quick to offer the latest innovations and bring new products to market ahead of the competition.
Companies also need to master the art of fast software deployment to maintain and improve live software products as quickly as possible once a performance issue or bug has been identified.
For software engineers, DORA metrics are widely viewed as the gold standard. The performance framework provides a baseline for software delivery productivity, but these metrics aren't measured in a silo: the framework also considers how improvements are likely to drive business outcomes and improve ROI.
It recognizes that elite, fast-moving engineering teams aim to balance speed with resilience, in environments where deploying 20 times a day is no longer rare. Until recently, deployment frequency was a footnote in board decks. Today, it sits next to ARR on the dashboard. When pipelines sputter, the cost isn't just velocity; it's revenue protection.
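To make these metrics concrete, here is a minimal Python sketch of how a team might compute the four DORA metrics, deployment frequency, lead time for changes, change failure rate and mean time to restore, from its own deployment and incident logs. The record format and field names are illustrative assumptions, not any standard schema.

```python
from datetime import datetime, timedelta

# Illustrative deployment records; field names are assumptions for this sketch.
deployments = [
    {"committed": datetime(2025, 6, 2, 9, 0), "deployed": datetime(2025, 6, 2, 14, 0), "caused_failure": False},
    {"committed": datetime(2025, 6, 3, 10, 0), "deployed": datetime(2025, 6, 4, 11, 0), "caused_failure": True},
    {"committed": datetime(2025, 6, 5, 8, 30), "deployed": datetime(2025, 6, 5, 9, 15), "caused_failure": False},
]

# Incident records: when a failure began and when service was restored.
incidents = [
    {"started": datetime(2025, 6, 4, 12, 0), "restored": datetime(2025, 6, 4, 13, 30)},
]

window_days = 7  # reporting window for deployment frequency

# 1. Deployment frequency: deployments per day over the window.
frequency = len(deployments) / window_days

# 2. Lead time for changes: commit-to-deploy time, averaged.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# 3. Change failure rate: share of deployments that caused a failure.
failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

# 4. Mean time to restore (MTTR): average incident duration.
restore_times = [i["restored"] - i["started"] for i in incidents]
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deployment frequency: {frequency:.2f}/day")
print(f"Avg lead time: {avg_lead_time}")
print(f"Change failure rate: {failure_rate:.0%}")
print(f"MTTR: {mttr}")
```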
If software development workflows aren't operating smoothly and efficiently, the drag has a direct impact on the bottom line and stands to undermine the company's competitive edge.
For the region, faster software release times also promise to unlock broader economic gains. For instance, AI companies on the continent have received over €11 billion in funding, helping to "drive automation and cost savings" across the region with their technological developments.
Yet releasing high-quality, bug-free software products in shorter time frames is much more challenging than one might imagine. It's important to get to grips with why this is the case.
Here's how to address some of the most common causes of slow software release cycles to recoup time and gain a first-mover advantage over others in the market.
The problems caused by environmental sprawl
First, we need to recognize the challenges associated with something we call environmental sprawl.
Put simply, it refers to the significant increase in the number of development environments involved in building a software application today.
In our experience, we frequently see engineering teams tasked with the maintenance of more than five disconnected pre-production environments, including staging and quality assurance (QA), to mention just a couple of examples.
As a result, code updates have to hop back and forth between these environments, with distinct tests taking place at each stage before teams can even think about deploying to production.
Keeping pace with the sheer volume of tests these environments demand is almost impossible with manual methods alone, and the effort diverts time and resources away from activities that can drive value for the organization. Instead, AI-powered screening and automation tools can be leveraged to reduce the burden of testing across environments and keep the cost of environmental sprawl in check.
In summary, environmental sprawl is a direct contributor to slow release cycles. In addition, maintaining this messy and complex system is a drain on resources that only multiplies with app growth.
Why code tests are draining bandwidth
Since the early 2020s, when cloud computing adoption and SaaS offerings exploded, computing infrastructures have had to adjust. This marked the moment we largely moved from monolithic architectures to microservices, in which each application is built from independent, lightweight components.
This allowed organizations to adopt new services as needed and gave software engineers an infrastructure that supports flexibility, scalability and ease of maintenance.
However, testing across these microservices can now be a mammoth task for teams, with a hidden productivity tax.
This is a particular burden when we remember that the region has an ongoing shortage of AI software engineers and tech talent.
The key point is that a "shift left" testing approach, moving tests earlier in the development cycle, is essential when it comes to interconnected microservices. Without it, testing happens late, leading to long feedback cycles as code gets reworked after being tested across various environments.
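To illustrate what shifting left can look like in practice, here is a hedged sketch of a contract test that runs with pytest on every commit, catching an API mismatch before the change ever reaches a shared staging environment. The OrderService and its response schema are hypothetical stand-ins, not a real service.

```python
# A minimal "shift left" contract test that runs with pytest on every
# commit. OrderService and its schema are hypothetical examples.

EXPECTED_ORDER_SCHEMA = {"id": int, "status": str, "total_cents": int}

class OrderService:
    """Stand-in for the real service. In CI it can be started in-process,
    so no shared staging environment is needed to run this check."""

    def get_order(self, order_id: int) -> dict:
        return {"id": order_id, "status": "shipped", "total_cents": 1999}

def test_order_response_matches_contract():
    response = OrderService().get_order(42)
    # Verify the response honors the agreed API contract before merge,
    # instead of discovering a mismatch days later in staging.
    assert set(response) == set(EXPECTED_ORDER_SCHEMA)
    for field, expected_type in EXPECTED_ORDER_SCHEMA.items():
        assert isinstance(response[field], expected_type)
```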
Current release cycles also demand that each developer wait for a deployment window, fight for staging access or work out which of the many changes broke the build.
Further, microservices communicate with one another through well-defined, lightweight APIs. API behavior is inherently complex, however, and it's unrealistic to expect software teams to validate it with manual methods alone.
In turn, this has created a complex and fragmented pipeline involving multiple teams, from cloud engineers and software developers to DevSecOps specialists.
Not only does this make the process harder to manage, as there is no single team responsible, but every hand-off increases the likelihood that delays slip in, adding days or even weeks to the overall timeline.
Reducing complexity to boost delivery speeds
The good news is that there are proven ways to improve the speed of software release cycles and tackle the problems outlined above.
When it comes to environmental sprawl, modern stacks built on Kubernetes and service meshes make it easier to implement multi-tenant environments.
Multi-tenancy addresses the compartmentalized nature of current setups: developers can validate each change in isolation before merging it into the main build, while maintaining far fewer environments overall.
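As a rough sketch of how that isolation can work, assuming the official kubernetes Python client and a reachable cluster, a pipeline might carve out a short-lived namespace per change instead of maintaining another full pre-production environment. The naming scheme and labels here are illustrative.

```python
# Minimal sketch: create an isolated, ephemeral namespace per change so it
# can be validated without a dedicated pre-production environment.
# Assumes the official `kubernetes` Python client and cluster access;
# the naming scheme and labels are illustrative, not a convention.
from kubernetes import client, config

def create_preview_namespace(change_id: str) -> str:
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    name = f"preview-{change_id}"
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=name,
            # Labels let a cleanup job find and delete stale previews.
            labels={"purpose": "preview", "change": change_id},
        )
    )
    client.CoreV1Api().create_namespace(body=namespace)
    return name

if __name__ == "__main__":
    ns = create_preview_namespace("pr-1234")
    print(f"Deploy the changed service into {ns}, run its tests, then delete it.")
```

A service mesh can then route requests carrying a test header to the changed service inside that namespace, while all other traffic resolves to the shared baseline, which is what makes this model far cheaper than duplicating whole environments.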
AI should also be employed here to reduce the burden of test maintenance and debugging test failures, and to help decide which tests to run in the first place.
At Meta, predictive testing is a key component of continuous integration. When deployed in production, the strategy was found to guarantee that "over 95% of individual test failures and over 99.9% of faulty changes are still reported back to developers."
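Meta's system is far more sophisticated, but the core idea can be sketched in a few lines: score each test by how often it has historically failed when the files in the current change were touched, and run only the tests above a threshold. The history, scoring rule and threshold below are invented for illustration.

```python
# Toy sketch of predictive test selection: score each test by how often it
# historically failed when the changed files were touched, then run only
# tests above a threshold. Data, scoring rule and threshold are invented.
from collections import defaultdict

# Historical CI records: (changed_files, tests_that_failed)
history = [
    ({"billing/api.py"}, {"test_invoices"}),
    ({"billing/api.py", "auth/session.py"}, {"test_invoices", "test_login"}),
    ({"search/index.py"}, set()),
]

def failure_rates(history):
    touched = defaultdict(int)  # times each file was changed
    failed = defaultdict(int)   # times (file, test) co-occurred with a failure
    for files, failures in history:
        for f in files:
            touched[f] += 1
            for t in failures:
                failed[(f, t)] += 1
    return touched, failed

def select_tests(changed_files, all_tests, history, threshold=0.3):
    touched, failed = failure_rates(history)
    selected = set()
    for t in all_tests:
        # Max over changed files of P(test failed | file changed).
        score = max(
            (failed[(f, t)] / touched[f] for f in changed_files if touched[f]),
            default=0.0,
        )
        if score >= threshold:
            selected.add(t)
    return selected

all_tests = {"test_invoices", "test_login", "test_search"}
print(select_tests({"billing/api.py"}, all_tests, history))
# With this toy history: {'test_invoices', 'test_login'}; test_search is skipped.
```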
Developers can also use AI to rapidly test smaller changes without waiting for other teams or disrupting the overall product. AI-Powered Smart Testing, a recent product from Signadot, the company I founded, is one such solution.
Modern solutions for software deployment
Although the industry has been grappling with a disjointed process, AI is here to help developers automate testing, handle validation, and move deployment cycles from weeks to days.
The approaches outlined here can help to save costs, increase velocity and improve software quality.
By employing these new strategies and techniques, companies can sharpen their competitive edge by getting quality software updates into the hands of users in record time.