
Latest news with #StackOverflow

Developers are now relying on AI tools more than ever, but confidence in them is dropping: Study

Indian Express

12 hours ago

  • Indian Express

AI tools have become everyday companions for software developers. Whether it's writing code faster or learning a new language, tools like GitHub Copilot and Cursor are now part of the developer toolkit. But even as their usage grows, developers are trusting these tools less and less.

In Stack Overflow's latest annual survey of 49,000 developers, a whopping 80% said they now use AI tools as part of their workflow. But here's the surprising part: only 29% trust the accuracy of AI-generated code, down from 40% a year earlier. This mismatch between high usage and low trust shows the complex relationship developers have with AI.

'AI-generated solutions that seem mostly correct but contain subtle flaws' are now the biggest frustration for developers, with 45% highlighting this issue. These near-correct answers can be much worse than clearly wrong ones, as they introduce bugs that are hard to spot, especially for younger or less experienced coders. As a result, many developers find themselves spending more time debugging than they did before.

It's not unusual for developers to turn to Stack Overflow after AI code fails them. In fact, more than a third said they visit the site because of problems caused by AI-generated code. Some are even pushing back against casual AI coding: around 72% of developers reject the idea of 'vibe coding', or blindly pasting AI-suggested code into production.

'That "close-but-not-quite" problem is here to stay,' the report notes. It is tied to how predictive text generation works: the tools guess what code should come next but don't always understand the logic behind it.

But even with all these flaws, most developers aren't giving up on AI tools. Many say they see clear benefits when the tools are used mindfully, and some companies even encourage their teams to use AI assistants. Experts now suggest that developers change how they think about these tools: treat AI like a 'sparring partner', not a silent copilot. It's there to challenge your thinking, not to replace it entirely. Used this way, AI can even help developers learn faster, giving targeted help, explaining unfamiliar concepts and complementing traditional resources like documentation or Stack Overflow itself.

As AI changes how developers seek support, even platforms like Stack Overflow are adjusting. 'Although we have seen a decline in traffic, in no way is it as dramatic as some would indicate,' said Jody Bailey, Stack Overflow's Chief Product and Technology Officer, adding that the shift is causing Stack Overflow to critically reassess how it gauges success in the modern digital age.

AI isn't going anywhere in software development, but blind trust isn't the answer either. The smartest developers aren't those who skip the hard work but those who know when to ask AI for help and when to double-check everything it says.

(This article has been curated by Kaashvi Khubyani, who is an intern with The Indian Express.)
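To make the 'close-but-not-quite' failure mode concrete, here is a small hypothetical sketch of the kind of near-correct suggestion the survey describes. The function and the bug are invented for illustration and do not come from the report:

```python
# Hypothetical AI-style suggestion: split a list into fixed-size batches.
# It looks right at a glance, but silently drops the final partial batch.
def batched(items, size):
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

print(batched([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]] -- the trailing [5] is lost

# The fix is a small change that is easy to miss in review:
def batched_fixed(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

print(batched_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

The flawed version passes a casual read, and it even works for evenly divisible inputs, which is exactly why this class of bug is hard to spot.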

The best-compensated tech roles in Germany

Local Germany

4 days ago

  • Business
  • Local Germany

The highest-paid tech roles in Germany are engineering and project managers, according to a new survey of software developers published by Stack Overflow.

Stack Overflow, a question-and-answer platform for programmers, collected survey responses from its users around the world. Users in the United States, Germany and India responded at the highest rates. The survey also asked many other questions about working in software development, including developers' attitudes toward AI tools.

The average salary for an engineering manager in Germany is $118,335, or €103,453, according to the survey, and the average salary for a project manager is $110,214, or €96,353. (The Stack Overflow survey listed all of its salary figures in US dollars; The Local converted them to euros at the time of writing.)

The global average for engineering managers was higher, at $130,000, or €113,651, per year. But worldwide, project managers earned significantly less: an average of $68,924, or €60,256.

Wages for tech jobs in Germany trend lower than in the United States or the United Kingdom, but they are significantly higher than the average salary in Germany, which is around €50,000 per year.

READ ALSO: What's considered a good salary for foreigners in Cologne and Düsseldorf?

Tech workers in the UK also tend to make more than their counterparts in Germany. The average salary for an engineering manager in the UK is $136,141, or €119,019.

However, German salaries are higher than those in France and India, according to this survey. The highest-paid tech workers in France were senior executives, who earned an average of $104,413, or €91,282. Engineering managers in France earn an average of $92,812 yearly, or €81,139. Salaries are much lower in India, where senior executives earn an average of $55,795 per year, or €48,778. The second-highest-paid tech workers in India are engineering managers, who earn an average of $52,308, or €45,729.

The worst-paid roles in tech development in Germany, according to this survey, were students, system administrators and academic researchers. Despite the prestige and high level of education required to work in an academic research setting, researchers made an average of only $71,349, or €62,376, per year. Across all five countries highlighted in this section of the report, students – unsurprisingly – made the least money on average.

EXPLAINED: What to study in Germany to land a high-paying job
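As a quick sanity check on the figures above, the euro amounts are consistent with a single conversion rate inferred from the article's own paired numbers (roughly €0.874 per US dollar); the rate itself is not quoted in the survey:

```python
# Implied USD -> EUR rate, inferred from one quoted pair in the article
# ($118,335 -> EUR 103,453 for a German engineering manager).
RATE = 103_453 / 118_335  # ~0.8742 EUR per USD at time of writing

def usd_to_eur(usd):
    return round(usd * RATE)

# Cross-check against another pair from the article:
print(usd_to_eur(110_214))  # 96353, matching the quoted EUR 96,353
```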

Developers adopt AI tools but trust issues persist, survey finds

Techday NZ

6 days ago

  • Business
  • Techday NZ

Stack Overflow has released the results of its 2025 Developer Survey, detailing the perceptions and habits of more than 49,000 technologists across 177 countries.

The AI trust gap

The survey indicates a significant disparity between AI adoption and trust among developers. While 84% of respondents use or plan to use artificial intelligence tools in their workflow, nearly half (46%) report that they do not trust the accuracy of AI-generated output. This marks a substantial rise from 31% indicating a lack of trust in the previous year.

This year's expanded artificial intelligence section included 15 new questions, addressing topics such as the utility of AI agent tools, the impact of AI on developers' jobs, and the phenomenon of "vibe coding".

"The growing lack of trust in AI tools stood out to us as the key data point in this year's survey, especially given the increased pace of growth and adoption of these AI tools. AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance. With the use of AI now ubiquitous and 'AI slop' rapidly replacing the content we see online, an approach that leans heavily on trustworthy, responsible use of data from curated knowledge bases is critical. By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value to build the AI technologies and products of tomorrow," said Prashanth Chandrasekar, CEO of Stack Overflow.

The survey found that 75% of users do not trust AI-generated answers, and 45% find debugging AI-generated code time-consuming. Ethical and security concerns are prevalent, with 61.7% citing these as reasons for hesitancy, while 61.3% wish to maintain full understanding of their code.

AI use and productivity

Despite low overall adoption, AI agents are associated with productivity improvements. Only 31% of developers currently use AI agents, but among those, 69% report increased workplace productivity. Meanwhile, 17% are planning to adopt such tools, while 38% are not planning to use them at all. A majority (64%) of developers do not see AI as a threat to their employment, though this figure has declined slightly from the previous year's 68%.

Platforms and tools

Visual Studio Code and Visual Studio remain the most used integrated development environments (IDEs). New AI-enabled IDEs have entered the market, with Cursor at an 18% usage rate, Claude Code at 10%, and Windsurf at 5% among respondents. Among large language models (LLMs), OpenAI's GPT series is the most popular, used by 81% of developers surveyed. Claude Sonnet received 43% usage, and Gemini Flash 35%.

Vibe coding and new ways of learning

'Vibe coding', defined as generating software from LLM prompts, was explored for the first time. While AI tools are being adopted for learning and development, nearly 77% of developers indicated that vibe coding is not part of their professional workflow. The trend is more relevant for less experienced developers seeking a rapid start, but it comes with a trade-off in the level of trust and confidence in the output.

Community platforms continue to play an important role. Stack Overflow is the most common platform, used or planned to be used by 84% of respondents, followed by GitHub at 67% and YouTube at 61%. Notably, 35% of respondents reported consulting Stack Overflow when confronted with AI-related issues.
The survey shows that 69% of developers have learned a new technology or programming language in the past year, with 36% focusing specifically on AI-enabled tools. Usage of AI tools for learning to code has risen to 44%, up from 37% last year. Top resources remain technical documentation (68%), online resources (59%), and Stack Overflow (51%). For those learning AI-specific skills, 53% used AI tools. Gen Z developers (aged 18-24) are more likely to engage with coding challenges, with 15% participating compared to an overall average of 12%. Additionally, a higher proportion of this age group prefers chat-based and challenge-based learning approaches than other cohorts.

International responses and technology adoption

The United States, Germany, India, United Kingdom, France, Canada, Ukraine, Poland, Netherlands, and Italy were the top ten countries by survey participation. Trust in AI tools differs by region: India saw the highest proportion of developers expressing some or significant trust in AI at 56%, followed by Ukraine at 41%. Other countries showed lower levels of trust, including Italy (31%), the Netherlands and United States (28%), Poland (26%), Canada and France (25%), the United Kingdom (23%), and Germany (22%).

Python continues to gain in popularity, with a seven percentage point increase since 2024. JavaScript (66%), HTML/CSS (62%), and SQL (59%) remain popular programming languages. Docker usage grew by 17 percentage points to 71%, marking it as a widely adopted tool in cloud and infrastructure development. PostgreSQL holds its position as the most sought-after database technology, with 47% planning to use it in the next year or continuing usage, marking its third year at the top of this category. For documentation and collaboration, GitHub leads at 81%, followed by Jira (46%) and GitLab (36%).

The risks of using AI in the software development pipeline

Techday NZ

16-07-2025

  • Techday NZ

The unveiling of a new technology is often accompanied by much fanfare about the significant positive impact it will have on society. Think back to events such as the creation of the internet, the mobile phone, cloud computing, and now artificial intelligence. Each was lauded as a big step forward for daily life. However, the disruption caused by such advances doesn't always come down to the technology itself but rather how it is utilised by the end user. Unfortunately, a positive outcome isn't always guaranteed.

A recent Stack Overflow survey[1] revealed that approximately 76% of developers are using (or are planning to use) AI tooling in the software development process. This represents a rapid, seismic shift in how software is created, especially at the enterprise level. In just three years, many development teams have moved away from gradual changes in the software development life cycle (SDLC), opting instead for enormous productivity gains and instant output.

However, these gains come at a price that business leaders should not be willing to pay. The rampant, plentiful security bugs plaguing every major artificial intelligence and large language model (AI/LLM) coding assistant represent a code-level security risk for an organisation. Indeed, the best-performing tools are still only accurate around half the time. In the hands of a developer with low security awareness, these tools simply expedite a volume of vulnerabilities entering the codebase, adding to the ever-growing mountain of code under which security professionals are buried. AI coding assistants are not going away, and the upgrade in code velocity cannot be ignored. However, security leaders must act now to manage their use safely.

The growing appeal of AI-assisted coding

Today, software developers are expected to perform a wide range of tasks, and that list is growing in scope and complexity. It stands to reason that, when an opportunity for assistance presents itself, your average overworked developer will welcome it with open arms. The issue, however, is that developers will choose whatever AI model does the job fastest and cheapest, and that may not be in the best interests of their organisation. Take DeepSeek as an example. By all accounts it's an easy, highly functional tool that is, above all, free to use. However, despite the initial hype, it appears the tool has significant security issues[2], including insecure code output, backdoors that leak sensitive data, and guardrails around creating malware that are far too easy to clear.

The challenge of insecure code development

Attention has recently been focused on so-called 'vibe coding'. The term refers to coding undertaken exclusively with agentic AI programming tools like Cursor AI. Developers use prompt engineering rather than writing code themselves, and continue to prompt an LLM until the desired result is achieved. Naturally, this process places complete trust in the LLM to deliver functioning code, and many of these tools are programmed to present their answers with unwavering confidence in their accuracy.

Independent benchmarking from BaxBench[3] reveals that many popular AI/LLM tools capable of acting as coding assistants produce insecure code. This has led BaxBench to the conclusion that none of the current flagship LLMs are ready for code automation from a security perspective. With 86% of developers indicating they struggle to practice secure coding[4], this should be a deep concern to enterprise security leaders.
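As a hypothetical illustration of the kind of code-level vulnerability at issue (this example is ours, not drawn from BaxBench), consider an assistant-style suggestion that interpolates user input straight into a SQL string. It works in a quick test, yet is trivially injectable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Insecure pattern often seen in quickly generated code:
# user input is interpolated directly into the SQL string.
def find_user_insecure(name):
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

# A crafted input turns the WHERE clause into a tautology and dumps every row.
print(find_user_insecure("x' OR '1'='1"))  # [('alice', 'admin')]

# Safer equivalent: a parameterized query keeps input as data, not SQL.
def find_user_safe(name):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_safe("x' OR '1'='1"))  # []
```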
While it is absolutely true that a security-skilled developer paired with a competent AI tool will see gains in productivity, this does not represent the skill state of the general developer population. Developers with low security awareness will simply supercharge the delivery of poor-quality, insecure code into enterprise code repositories, exacerbating the problems the AppSec team is already ill-equipped to address.

Skilling the next generation of software developers

Vibe coding, agentic AI coding, and whatever the next iteration of AI-powered software development turns out to be are not going away. Indeed, they have already changed the way developers approach their jobs. The solution is not to ban the tools outright and possibly create a monster in the form of unchecked 'shadow AI' within development teams. Rather, the next generation of developers must be shown how to leverage AI effectively and safely. It must be made clear why and how AI/LLM tools create acceptable risk, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate that risk as it presents itself. Organisations that don't follow this path risk opening themselves up to security holes that could cause widespread disruption and loss.

Sorry, DevOps: Garbage Data Can Only Generate Garbage AI Outcomes

Forbes

26-06-2025

  • Business
  • Forbes

Savinay Berry is the Executive Vice President and Chief Product Officer for OpenText.

As artificial intelligence (AI) continues to evolve, we've seen more and more of its capabilities and limitations. We've witnessed AI perform tasks once deemed futuristic, reminiscent of scenes from sci-fi pop culture, such as Hanna-Barbera's iconic 1960s cartoon, The Jetsons. In this animated series, nearly every aspect of life is automated, from housework and meal prep to fashion, depicting a world where AI plays a central role in daily activities. While exaggerated, the show prompted me to contemplate exactly what role AI can, and should, play in helping product leaders deliver the next generation of innovation. Specifically, I began to wonder: How can AI be integrated into DevOps strategies as more than just a tool, but as a strategic ally and an extension of software development teams?

The benefits of AI in DevOps, from automating testing and deployment to improving resource management and enhancing security, are driving increased investment. The "2024 Developer Survey" from Stack Overflow revealed that 76% of developers are using or planning to use AI tools, up from 70% the year before. Notably, 81% cited productivity gains as the biggest benefit, while 62% valued accelerated skill development.

As an innovator, I've already seen AI refine operations, simulating human intelligence across workflows. AI and automation offer unparalleled opportunities to reimagine software delivery. They streamline resource-intensive areas such as testing, code-writing and deployment, all while ensuring compliance and security. AI and automation can also autogenerate test scripts, adding ready-to-go test scenarios to plans in a single click, and create anticipatory project reports to pinpoint potential risks that could jeopardize software quality. And this is only scratching the surface of what is possible.

So, What's The Issue?

The risks of not adopting AI are significant and go beyond the DevOps community; it is a larger CIO issue. According to a recent survey from OpenText, 96% of respondents are using, testing, or planning to explore AI across their organization. Moreover, 78% believe failing to leverage internal data effectively will squander AI's potential. This seems promising, but the reality is more nuanced. In DevOps, the intersection of AI and operations goes beyond implementing advanced algorithms; it demands a robust foundation of organized, high-quality data. Without this foundation, achieving desired AI outcomes becomes a formidable challenge, and we risk stumbling at the first hurdle.

Cleaning Up Your Data Is More Than Fixing Spreadsheets

Understanding the role of information management means recognizing that AI thrives on high-quality data. The expression "garbage in, garbage out" applies here. If data isn't managed for accuracy and accessibility, desired AI results won't be achieved. High-quality data, on the other hand, sets the stage for success. Consider a car engine: removing deposits and sludge (inaccurate, outdated, irrelevant and incorrect information) reduces friction, while clean oil (large language models and AI) ensures smooth performance. Much like good car engine maintenance, effective information management ensures longevity and optimal performance, generating superior and enduring AI results. Remember, even the most skilled data scientists and developers can't achieve optimal results without reliable data.

And while many businesses understand AI's dependence on quality data, few feel ready to act on it. That's a shame, because only organizations that govern, unify and protect their information will unlock AI's full, transformative potential.

Optimize Your AI Initiatives With Quality Data

Is your AI engine running on premium data? For DevOps teams, maximizing value starts with a solid information management foundation. Begin with these five steps (a sketch of what the first step can look like in code follows the list):

1. Audit Your Data: Conduct a comprehensive review of all data assets (data logs, appdev dashboards, metadata, etc.) across cloud and on-premises storage. Identify data sources, formats and quality to build a knowledge base for AI workflows that can support faster application delivery, automated testing and intelligent code suggestions. Clean historical data also improves time-to-market predictions.

2. Set Data-Governance Standards: Establish clear data-governance protocols to safeguard privacy and ensure proper data management, so that the applications teams build comply with privacy regulations and industry standards. Consistent, high-quality data flows are essential for reliable AI results, making effective governance paramount.

3. Implement Continuous Data Integration: Embrace an ongoing process to integrate disparate datasets into a unified format suitable for AI analysis. This continuous aggregation ensures AI assistants have a relevant and useful foundation for effective functionality. It can reduce the burden not only on application developers, but also on quality assurance testers and managers, as accurate results can be assured every time.

4. Secure Data Flows: Prioritize data security and align your practices with regulatory requirements and industry standards. Implement proactive validation checks and AI-driven threat detection to maintain data integrity and security. Developers must incorporate security best practices, such as encryption and access controls, into their data-handling processes.

5. Enhance Data Accessibility: Once your information is clean, enable conversational search interfaces and AI assistants to access relevant datasets from multiple knowledge bases. These tools can empower developers to optimize the software delivery cycle and reduce delivery times. With tools like this, clients I've worked with were able to maximize test coverage in less time and on (or under) budget, while enjoying greater access to high-level insights.
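As a minimal sketch of what the first step (a data audit) can look like in practice; the record fields, freshness threshold and checks here are invented for illustration, not prescribed by any particular tool:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical audit of a small dataset: flag records that would feed
# "garbage" into an AI pipeline -- missing fields or stale timestamps.
STALE_AFTER = timedelta(days=365)  # illustrative freshness threshold
REQUIRED = ("id", "source", "updated_at")

def audit(records):
    now = datetime.now(timezone.utc)
    issues = []
    for rec in records:
        missing = [f for f in REQUIRED if not rec.get(f)]
        if missing:
            issues.append((rec.get("id"), f"missing fields: {missing}"))
            continue
        if now - rec["updated_at"] > STALE_AFTER:
            issues.append((rec["id"], "stale record"))
    return issues

records = [
    {"id": 1, "source": "jira", "updated_at": datetime.now(timezone.utc)},
    {"id": 2, "source": "", "updated_at": datetime.now(timezone.utc)},
    {"id": 3, "source": "logs", "updated_at": datetime(2020, 1, 1, tzinfo=timezone.utc)},
]
print(audit(records))  # [(2, "missing fields: ['source']"), (3, 'stale record')]
```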
Elevating DevOps With AI Through Information Management

AI and automation have already shown their potential to enhance operations and productivity. However, the next step is sustainable and scalable AI. This evolution is essential for reimagining business information ecosystems and elevating people to become innovative leaders rather than late adopters. As technology leaders, we stand at the forefront of the AI revolution, and we must recognize that information management is not merely a support function but the catalyst for AI excellence. Effective information management not only boosts AI capabilities, it also encourages innovative, targeted and successful application development, delivery, execution and measurement. Embracing information management as a cornerstone of AI-driven progress is the key to achieving the level of product excellence that once seemed like science fiction.
