
The risks of using AI in the software development pipeline
The unveiling of a new technology is often accompanied by much fanfare about the significant positive impact it will have on society. Think back to events such as the creation of the internet, the mobile phone, cloud computing, and now artificial intelligence. Each was lauded as a big step forward for daily life.
However, the disruption caused by such advances doesn't always come down to the technology itself, but rather to how it is utilised by the end user. Unfortunately, a positive outcome isn't always guaranteed.
A recent Stack Overflow survey[1] revealed that approximately 76% of developers are using (or are planning to use) AI tooling in the software development process. This represents a rapid, seismic shift in how software is created, especially at the enterprise level.
In just three years, many development teams appear to have abandoned gradual, incremental change to the software development life cycle (SDLC), opting instead for enormous productivity gains and instant output.
However, these gains come at a price that business leaders should not be willing to pay. The rampant security bugs plaguing every major artificial intelligence and large language model (AI/LLM) coding assistant represent a code-level security risk for any organisation. Indeed, even the best-performing tools are accurate only around half the time.
These tools - in the hands of a developer with low security awareness - simply accelerate the flow of vulnerabilities into the codebase, adding to the ever-growing mountain of code under which security professionals are buried.
AI coding assistants are not going away, and the upgrade in code velocity cannot be ignored. However, security leaders must act now to manage their use safely.
The growing appeal of AI-assisted coding
Today, software developers are expected to perform a wide range of tasks, and that list is growing in scope and complexity. It stands to reason that, when an opportunity for assistance presents itself, your average overworked developer will welcome it with open arms.
The issue, however, is that developers will choose whichever AI model does the job fastest and cheapest, and that may not be in the best interests of their organisation.
Take DeepSeek as an example. By all accounts it's an easy, highly functional tool that is, above all, free to use. However, despite the initial hype, the tool appears to have significant security issues[2], including insecure code output, backdoors that leak sensitive data, and guardrails around creating malware that are far too easy to bypass.
The challenge of insecure code development
Attention has recently been focused on so-called 'vibe coding'. The term refers to coding undertaken exclusively with agentic AI programming tools like Cursor AI. Developers rely on prompt engineering rather than writing code by hand, continuing to prompt an LLM until the desired result is achieved.
Naturally, this process places complete trust in the LLM to deliver functioning code, and many of these tools are built to present their answers with unwavering confidence in their own accuracy.
Independent benchmarking from BaxBench[3] reveals that many popular AI/LLM tools capable of acting as coding assistants produce insecure code, leading BaxBench to conclude that none of the current flagship LLMs are ready for code automation from a security perspective.
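To make that risk concrete, the sketch below shows the kind of flaw such benchmarks flag. It is a hypothetical, minimal Python example (not output from any particular assistant, nor a case taken from BaxBench itself): the first function builds an SQL query by string concatenation, a classic injection vulnerability that coding assistants are frequently observed to reproduce, while the second uses a parameterised query, the safe equivalent.

import sqlite3

def find_user_insecure(db: sqlite3.Connection, username: str):
    # Insecure pattern: the query is built by string concatenation, so an
    # input such as "' OR '1'='1" changes the meaning of the SQL statement.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return db.execute(query).fetchall()

def find_user_secure(db: sqlite3.Connection, username: str):
    # Secure pattern: a parameterised query; the driver treats the input
    # strictly as data, never as SQL syntax.
    return db.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

A developer who recognises the first pattern can reject or repair the suggestion in seconds; one who does not will merge it, and that gap is precisely what the statistics below describe.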
With 86% of developers indicating they struggle to practise secure coding[4], this should be a deep concern for enterprise security leaders. While it is absolutely true that a security-skilled developer paired with a competent AI tool will see gains in productivity, this does not represent the skill state of the general developer population.
Developers with low security awareness will simply supercharge the delivery of poor-quality, insecure code into enterprise code repositories, exacerbating the problems the AppSec team is already ill-equipped to address.
Skilling the next generation of software developers
Vibe coding, agentic AI coding, and whatever the next iteration of AI-powered software development turns out to be are not going away. Indeed, they have already changed the way developers approach their jobs.
The solution is not to ban the tools outright and possibly create a monster in the form of unchecked, 'shadow AI' within development teams. Rather, the next generation of developers must be shown how to leverage AI effectively and safely.
It must be made clear why and how AI/LLM tools create acceptable risk, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate that risk as it presents itself.
Organisations that don't follow this path risk opening themselves up to security holes that could cause widespread disruption and loss.