
Kaspersky warns AI-generated passwords expose users to attacks


Kaspersky has issued a warning regarding the use of large language models (LLMs) such as ChatGPT, Llama, and DeepSeek for password generation, citing unpredictable security weaknesses that could make users vulnerable to cyberattacks.

The increased prevalence of online accounts has led to a surge in password re-use and reliance on predictable combinations of names, dictionary words, and numbers. According to Kaspersky, many people are seeking shortcuts by using AI-based tools like LLMs to create passwords, assuming that AI-generated strings offer superior security due to their apparent randomness.

However, concerns have been raised over the actual strength of these passwords. Alexey Antonov, Data Science Team Lead at Kaspersky, examined passwords produced by ChatGPT, Llama, and DeepSeek and discovered notable patterns that could compromise their integrity.

"All of the models are aware that a good password consists of at least 12 characters, including uppercase and lowercase letters, numbers and symbols. They report this when generating passwords," says Antonov.

Antonov observed that DeepSeek and Llama sometimes produced passwords built from dictionary words with letters swapped for similar-looking numbers and symbols, such as S@d0w12, M@n@go3, and B@n@n@7 for DeepSeek, and K5yB0a8dS8 and S1mP1eL1on for Llama. He noted: "Both of these models like to generate the password 'password': P@ssw0rd, P@ssw0rd!23 (DeepSeek), P@ssw0rd1, P@ssw0rdV (Llama). Needless to say, such passwords are not safe."
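
To illustrate how little protection such substitutions add, the sketch below (illustrative only, not Kaspersky's tooling) enumerates the "leet" variants of a single dictionary word using a small substitution table; a cracking rule list of this kind covers P@ssw0rd-style passwords with only a handful of extra guesses per dictionary word.

```python
from itertools import product

# A minimal, illustrative table of common "leet" substitutions;
# real cracking rule lists are far larger and also toggle case.
SUBS = {"a": ["a", "@", "4"], "e": ["e", "3"], "i": ["i", "1", "!"],
        "o": ["o", "0"], "s": ["s", "$", "5"]}

def leet_variants(word):
    """Yield every variant of `word` under the substitution table."""
    choices = [SUBS.get(ch, [ch]) for ch in word.lower()]
    for combo in product(*choices):
        yield "".join(combo)

variants = set(leet_variants("password"))
print(len(variants))           # only a few dozen candidates for this word
print("p@ssw0rd" in variants)  # True: covered by a simple dictionary attack
```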

He explained that the technique of substituting certain letters with numbers, while appearing to increase complexity, is well known to cybercriminals, whose cracking tools apply the same substitutions automatically, so such passwords fall quickly to dictionary and brute-force attacks. According to Antonov, ChatGPT produces passwords that initially appear random, such as qLUx@^9Wp#YZ, LU#@^9WpYqxZ and YLU@x#Wp9q^Z, yet closer analysis reveals telling consistencies.

"However, if you look closely, you can see patterns. For example, the number 9 is often encountered," Antonov said.

Examining 1,000 passwords generated by ChatGPT, he found that certain characters, such as x, p, l and L, appeared with much higher frequency than true randomness would allow. Similar patterns were observed for Llama, which favoured the # symbol and particular letters, and DeepSeek showed comparable tendencies.

"This doesn't look like random letters at all," Antonov commented when reviewing the symbol and character distributions.

Moreover, the LLMs often failed to include special characters or digits: 26% of ChatGPT passwords, 32% of Llama passwords, and 29% of DeepSeek passwords lacked them. DeepSeek and Llama also occasionally generated passwords shorter than the 12-character minimum generally recommended for security.
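
A basic composition check against the criteria mentioned earlier (at least 12 characters, mixed case, digits and symbols) might look like the following sketch; it is a simple illustration of the checks described, not Kaspersky's test.

```python
import string

def composition_issues(password, min_length=12):
    """Return a list of composition problems for a candidate password."""
    issues = []
    if len(password) < min_length:
        issues.append(f"shorter than {min_length} characters")
    if not any(c.islower() for c in password):
        issues.append("no lowercase letter")
    if not any(c.isupper() for c in password):
        issues.append("no uppercase letter")
    if not any(c.isdigit() for c in password):
        issues.append("no digit")
    if not any(c in string.punctuation for c in password):
        issues.append("no special character")
    return issues

print(composition_issues("S1mP1eL1on"))
# ['shorter than 12 characters', 'no special character']
```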

These weaknesses, including pronounced character patterns and inconsistent composition, potentially enable cybercriminals to target common combinations more efficiently, increasing the likelihood of successful brute force attacks.

Antonov referenced the findings of a machine learning algorithm he developed in 2024 to assess password strength, stating that almost 60% of all tested passwords could be cracked in under an hour using modern GPUs or cloud-based cracking services. When he applied the same test to the AI-generated passwords, the results were concerning: "88% of DeepSeek and 87% of Llama generated passwords were not strong enough to withstand attack from sophisticated cyber criminals. While ChatGPT did a little better with 33% of passwords not strong enough to pass the Kaspersky test."
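
For context, the arithmetic behind naive crack-time estimates is simple: the keyspace is the character-set size raised to the password length, divided by the attacker's guess rate. The sketch below applies that formula with an assumed guess rate; it is a rough upper bound, not Kaspersky's machine-learning model, and it overstates the strength of passwords whose characters follow predictable patterns, which is precisely why pattern-aware assessment matters.

```python
def naive_crack_time_hours(password, guesses_per_second=1e11):
    """Rough upper bound on exhaustive-search time using the keyspace formula.
    The guess rate is an assumption (fast hash, GPU/cloud cracking); this is
    NOT Kaspersky's ML-based strength model."""
    charset = 0
    if any(c.islower() for c in password):
        charset += 26
    if any(c.isupper() for c in password):
        charset += 26
    if any(c.isdigit() for c in password):
        charset += 10
    if any(not c.isalnum() for c in password):
        charset += 32  # approximate count of printable symbols
    keyspace = charset ** len(password)
    return keyspace / guesses_per_second / 3600

print(f"{naive_crack_time_hours('B@n@n@7'):.2f} hours")      # ~0.2 hours even by blind search
print(f"{naive_crack_time_hours('qLUx@^9Wp#YZ'):.2e} hours")  # huge by this naive measure
```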

Addressing the core problem, Antonov remarked: "The problem is LLMs don't create true randomness. Instead, they mimic patterns from existing data, making their outputs predictable to attackers who understand how these models work."

In light of these findings, Kaspersky recommends individuals and organisations use dedicated password management software instead of relying on LLMs. According to Kaspersky, dedicated password managers employ cryptographically secure generators, providing randomness with no detectable patterns and storing credentials safely in encrypted vaults accessible via a single master password.
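
To illustrate the difference, a cryptographically secure generator of the kind password managers rely on draws from an operating-system entropy source. The sketch below uses Python's standard secrets module; it is a generic illustration, not the implementation of any particular product.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a password using the OS cryptographic RNG, retrying
    until all four character classes are present."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```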

Password management software, Kaspersky notes, often provides additional features such as auto-fill, device synchronisation, and breach monitoring to alert users should their credentials appear in data leaks. These measures aim to reduce the risk of credential theft and the impact of data breaches by encouraging strong, unique passwords for each service.

Kaspersky emphasised that while AI is useful for numerous applications, password creation is not among them due to its tendency to generate predictable, pattern-based outputs. The company underlines the need to use reputable password managers as a first line of defence in maintaining account security and privacy in the digital era.
