
Kaspersky warns AI-generated passwords expose users to attacks
The growing number of online accounts has led to a surge in password re-use and reliance on predictable combinations of names, dictionary words, and numbers. According to Kaspersky, many people are seeking shortcuts by asking AI tools such as large language models (LLMs) to create passwords for them, assuming that AI-generated strings offer superior security because of their apparent randomness.
However, concerns have been raised over the actual strength of these passwords. Alexey Antonov, Data Science Team Lead at Kaspersky, examined passwords produced by ChatGPT, Llama, and DeepSeek and discovered notable patterns that could compromise their integrity.
"All of the models are aware that a good password consists of at least 12 characters, including uppercase and lowercase letters, numbers and symbols. They report this when generating passwords," says Antonov.
Antonov observed that DeepSeek and Llama sometimes produced passwords built from dictionary words with letters swapped for similar-looking digits and symbols, such as S@d0w12, M@n@go3, and B@n@n@7 for DeepSeek, and K5yB0a8dS8 and S1mP1eL1on for Llama. He noted: "Both of these models like to generate the password 'password': P@ssw0rd, P@ssw0rd!23 (DeepSeek), P@ssw0rd1, P@ssw0rdV (Llama). Needless to say, such passwords are not safe."
He explained that substituting certain letters with numbers, while appearing to add complexity, is a technique well known to cybercriminals and easily defeated by brute-force methods. According to Antonov, ChatGPT produces passwords that initially appear random, such as qLUx@^9Wp#YZ, LU#@^9WpYqxZ and YLU@x#Wp9q^Z, yet closer analysis reveals telling consistencies.
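The substitution trick is cheap for attackers to automate. As a rough illustration only (not Kaspersky's tooling, and using an assumed word list and substitution map), a few lines of Python can expand ordinary dictionary words into their common "leet" variants, which is why a password like P@ssw0rd offers little more protection than "password" itself:

```python
from itertools import product

# Assumed, illustrative substitution map; real cracking rulesets are far larger.
LEET = {"a": ["a", "@", "4"], "e": ["e", "3"], "i": ["i", "1", "!"],
        "o": ["o", "0"], "s": ["s", "$", "5"]}

def leet_variants(word: str):
    """Yield every variant of a dictionary word under the substitutions above."""
    pools = [LEET.get(ch, [ch]) for ch in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

# A handful of base words already covers passwords like the ones quoted above.
candidates = {v for w in ("password", "shadow", "banana") for v in leet_variants(w)}
print("P@ssw0rd".lower() in candidates)  # True: the substitutions add little real strength
```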
"However, if you look closely, you can see patterns. For example, the number 9 is often encountered," Antonov said.
Examining 1,000 passwords generated by ChatGPT, he found that certain characters, such as x, p, l and L, appeared far more often than true randomness would produce. Similar patterns were observed for Llama, which favoured the # symbol and particular letters, and DeepSeek showed comparable tendencies.
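For readers who want to reproduce the idea rather than Antonov's exact analysis, the sketch below (illustrative only, using the three example strings quoted above in place of a real 1,000-password sample) counts how often each character appears in a batch of generated passwords and compares that against what a uniform distribution would predict:

```python
from collections import Counter

def frequency_report(passwords, top=5):
    """Count character occurrences across a batch of passwords and compare to uniform."""
    counts = Counter("".join(passwords))
    total = sum(counts.values())
    expected = total / len(counts)  # occurrences each character would get if uniform
    print(f"{len(counts)} distinct characters, ~{expected:.1f} occurrences each if uniform")
    for char, n in counts.most_common(top):
        print(f"{char!r}: {n} occurrences ({n / expected:.1f}x uniform)")

# Illustrative input: the ChatGPT-style strings quoted above; a meaningful test
# would use a sample of around 1,000 generated passwords.
frequency_report(["qLUx@^9Wp#YZ", "LU#@^9WpYqxZ", "YLU@x#Wp9q^Z"])
```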
"This doesn't look like random letters at all," Antonov commented when reviewing the symbol and character distributions.
Moreover, the LLMs often failed to include special characters or digits: 26% of ChatGPT passwords, 32% of Llama passwords and 29% of DeepSeek passwords lacked them. DeepSeek and Llama also occasionally generated passwords shorter than the 12-character minimum generally recommended for security.
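These composition failures are straightforward to check for. Below is a minimal sketch of such a check; the 12-character minimum and the character classes mirror the article, while the function itself is illustrative rather than Kaspersky's code:

```python
import string

def composition_issues(password: str, min_length: int = 12) -> list:
    """Flag the weaknesses described above: short length and missing character classes."""
    issues = []
    if len(password) < min_length:
        issues.append(f"shorter than {min_length} characters")
    if not any(c.isdigit() for c in password):
        issues.append("no digits")
    if not any(c in string.punctuation for c in password):
        issues.append("no special characters")
    if not (any(c.isupper() for c in password) and any(c.islower() for c in password)):
        issues.append("missing upper- or lowercase letters")
    return issues

print(composition_issues("S1mP1eL1on"))  # ['shorter than 12 characters', 'no special characters']
```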
These weaknesses, including pronounced character patterns and inconsistent composition, could allow cybercriminals to target common combinations more efficiently, increasing the likelihood of successful brute-force attacks.
Antonov referenced the findings of a machine learning algorithm he developed in 2024 to assess password strength, stating that almost 60% of all tested passwords could be deciphered in under an hour using contemporary GPUs or cloud-based cracking services. When applying similar tests to AI-generated passwords, the results were concerning: "88% of DeepSeek and 87% of Llama generated passwords were not strong enough to withstand attack from sophisticated cyber criminals. While ChatGPT did a little better with 33% of passwords not strong enough to pass the Kaspersky test."
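Kaspersky's classifier is a machine-learning model and is not reproduced here, but a back-of-the-envelope calculation shows why length and a full character set matter. Assuming an attacker who can test on the order of 10^11 guesses per second (a stand-in figure for a modern GPU rig or cloud cracking service, not a number from the article), the worst-case time to exhaust a password space is:

```python
def brute_force_hours(length: int, pool_size: int, guesses_per_second: float = 1e11) -> float:
    """Worst-case hours to exhaust a keyspace at an assumed guess rate.

    Real attacks use dictionaries and mangling rules, so predictable passwords
    fall far sooner than this upper bound suggests.
    """
    keyspace = pool_size ** length
    return keyspace / guesses_per_second / 3600

# 12 characters drawn from ~90 printable symbols vs. 8 characters of lowercase+digits.
print(f"{brute_force_hours(12, 90):.2e} hours")  # ~7.8e+08 hours: out of practical reach
print(f"{brute_force_hours(8, 36):.4f} hours")   # well under a minute at this rate
```

The exact figures depend entirely on the assumed guess rate; the point is the gap of many orders of magnitude between a long, fully mixed password and a short or predictable one.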
Addressing the core problem, Antonov remarked: "The problem is LLMs don't create true randomness. Instead, they mimic patterns from existing data, making their outputs predictable to attackers who understand how these models work."
In light of these findings, Kaspersky recommends individuals and organisations use dedicated password management software instead of relying on LLMs. According to Kaspersky, dedicated password managers employ cryptographically secure generators, providing randomness with no detectable patterns and storing credentials safely in encrypted vaults accessible via a single master password.
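As a sketch of the approach such generators take (Python's standard secrets module is used here as one example of a cryptographically secure source of randomness, not the implementation of any particular password manager):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw characters from a CSPRNG, retrying until every character class is present."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())
```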
Password management software, Kaspersky notes, often provides additional features such as auto-fill, device synchronisation, and breach monitoring to alert users should their credentials appear in data leaks. These measures aim to reduce the risk of credential theft and the impact of data breaches by encouraging strong, unique passwords for each service.
Kaspersky emphasised that while AI is useful for numerous applications, password creation is not among them due to its tendency to generate predictable, pattern-based outputs. The company underlines the need to use reputable password managers as a first line of defence in maintaining account security and privacy in the digital era.
