
The Hidden Cost Of Bad Software Practices
Software isn't just a tool; it's the backbone of modern business. Yet, poor software practices silently drain billions of dollars from organizations every year, crippling innovation, inflating budgets and derailing projects.
The numbers speak for themselves:
• $2.41 trillion was the estimated annual cost of poor software quality in the U.S. alone in 2022, according to the Consortium for Information & Software Quality (CISQ).
• In poorly executed projects, 50% of software development budgets are wasted on bug fixes instead of delivering business value.
• Late-stage defect detection can be 100 times more expensive than catching bugs early in development.
• 70% of digital transformation initiatives fail, often due to mismanaged software execution and quality issues.
When software fails, the consequences extend far beyond lost revenue. A single undetected error can trigger product recalls, security breaches and irreversible reputational damage.
The Samsung Galaxy Note 7 recall is a prime example: late-stage defects in its battery management system caused devices to overheat and catch fire, forcing Samsung to recall millions of units. The recall and production halts resulted in a $17 billion loss.
The lesson is clear: Bad software is expensive, and the later you catch defects, the higher the cost. But what's the solution?
Fixing software problems starts before a single line of code is written—it starts with hiring the right people. Poor hiring decisions don't just impact payroll; they derail projects, slow innovation and inflate long-term costs.
• A bad hire can cost as much as 30% of that employee's first-year salary.
• An employee who underperforms takes 70% more time to manage than a high-performing one.
• 60% of bad hires will negatively affect the performance of other team members.
This is why top organizations care so much about hiring A-players—the top 10% of engineers—who don't just write code but solve problems before they escalate. They proactively identify risks, build scalable solutions and ensure that software is reliable from the start.
To consistently hire A-players, companies should adopt a structured, repeatable approach to identifying top talent. A key element of this process is creating clear scorecards that go beyond standard job descriptions, defining the role's mission, key outcomes and required competencies. This ensures alignment with business objectives and team dynamics. It also enables consistent evaluation across candidates, which is especially useful when multiple interviewers are involved and need to compare apples to apples.
Structured interviews can then be used to assess candidates based on real career experiences rather than hypothetical scenarios. By exploring past roles, achievements and challenges, companies can uncover patterns of success, adaptability and problem-solving skills.
Finally, rigorous reference checks provide an additional layer of validation. Instead of generic inquiries, they should focus on performance patterns and insights from former managers and colleagues. Cross-referencing a candidate's statements with past supervisors' perspectives can highlight consistency and credibility. Beyond the standard questions, it's essential to dig deeper into how the candidate responded to feedback, influenced team dynamics and handled setbacks. Asking for specific examples of their problem-solving approach, how they navigated conflicts and what their manager might have changed about their performance can reveal crucial insights.
This structured hiring approach, inspired by principles from Who: The A Method for Hiring and insights from The Manager's Handbook, enhances consistency and increases the likelihood of securing high-performing talent. However, hiring great people isn't enough. Without strong engineering standards, even the best engineers can't deliver consistent, high-quality results.
Many companies mistakenly believe that hiring top-tier engineers automatically leads to high-quality software, but even the best engineers can't thrive in a chaotic environment. Talent without the right guardrails leads to inconsistency, while guardrails without talent lead to stagnation.
To create scalable, high-quality software, organizations must establish clear engineering standards that ensure everyone follows a cohesive approach. While the specific methodologies will vary between teams, companies should aim to implement industry-proven best practices that help drive reliability and efficiency. Some examples include:
• Shift-Left Testing: Catch defects early by prioritizing testing in the design and development phases, reducing late-stage rework.
• Continuous Integration/Continuous Deployment (CI/CD): Automate testing and deployments to improve release velocity while maintaining stability.
• Automated Testing: Use tools like Selenium, Jest or Cypress to detect issues before they reach production.
• Static Code Analysis: Tools like SonarQube help spot vulnerabilities and anti-patterns before they become production problems.
• Security Best Practices (OWASP): Enforce secure coding standards to prevent costly security breaches.
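The shift-left and automated-testing practices above can be sketched in a few lines. The discount function below is a hypothetical example, not code from the article; the point is that a test written alongside the code catches the boundary defect during development, when it is cheapest to fix, rather than in production.

```python
# Minimal shift-left sketch: a unit test lives next to the code it covers,
# so a defect surfaces in development rather than after release.
# (apply_discount is an illustrative example, not a real library function.)

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`, validated to the 0-100 range."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
    # Boundary case that late-stage testing often misses:
    assert apply_discount(50.0, 100) == 0.0

if __name__ == "__main__":
    test_apply_discount()
    print("all checks passed")
```

In a CI/CD pipeline, a test runner such as pytest would execute checks like these on every commit, so a regression blocks the merge instead of reaching users.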
Organizations can also enhance their software development efficiency by adopting proven frameworks like DORA (DevOps Research and Assessment). DORA provides a data-driven approach to measuring and improving engineering performance by benchmarking teams against elite engineering organizations. It focuses on four key metrics that directly impact software delivery and operational efficiency:
• Deployment frequency
• Lead time for changes
• Change failure rate
• Mean time to recovery (MTTR)
By tracking these metrics, companies can identify bottlenecks, optimize workflows and measure progress against industry-leading teams. While DORA is a widely adopted framework, it is just one of many approaches that organizations can use to continuously refine their engineering processes and drive long-term results. Success comes from cultivating a measurable environment where talent and best practices align with business goals.
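To make the four DORA metrics concrete, here is a rough sketch of computing them from a team's deployment log. The record format and field ordering are illustrative assumptions, not a standard schema; real tooling would pull this data from a CI/CD system.

```python
# Illustrative computation of the four DORA metrics from a small,
# hypothetical deployment log. Each record is
# (deployed_at, commit_authored_at, failed, recovered_at).
from datetime import datetime, timedelta

deployments = [
    (datetime(2024, 1, 1, 10), datetime(2023, 12, 31, 9), False, None),
    (datetime(2024, 1, 2, 15), datetime(2024, 1, 1, 14), True,
     datetime(2024, 1, 2, 17)),
    (datetime(2024, 1, 4, 9), datetime(2024, 1, 3, 20), False, None),
]

# Deployment frequency: deploys per day over the observation window.
days_observed = 7
deployment_frequency = len(deployments) / days_observed

# Lead time for changes: commit authored -> running in production.
lead_times = [dep - commit for dep, commit, _, _ in deployments]
lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure.
failures = [(dep, rec) for dep, _, failed, rec in deployments if failed]
change_failure_rate = len(failures) / len(deployments)

# Mean time to recovery: failure deployed -> service restored.
recovery_times = [rec - dep for dep, rec in failures]
mttr = sum(recovery_times, timedelta()) / len(recovery_times)

print(f"Deploys/day:         {deployment_frequency:.2f}")
print(f"Lead time:           {lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR:                {mttr}")
```

Even a toy calculation like this makes bottlenecks visible: a long average lead time points at slow review or release processes, while a high change failure rate with a long MTTR signals weak testing and incident response.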
Building high-quality software takes both talent and strong standards. One without the other simply isn't enough. Organizations must invest in both:
• A-players with strong technical skills who understand the business impact of their work. They proactively drive quality, communicate clearly and consistently raise the bar by delivering meaningful outcomes—not just completing tasks.
• Engineering best practices designed to ensure consistent performance, eliminate inefficiencies and align with industry benchmarks.
This balance separates high-performing engineering teams from those stuck in a cycle of technical debt, rework and stagnation. Companies that get this right don't just build better software—they save time, reduce operating costs and improve productivity.
Ultimately, engineering excellence isn't just about writing code—it's about building a system where top talent and best-in-class processes consistently deliver exceptional software in a predictable, repeatable manner.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
