DeepSeek breach highlights centralised AI vulnerabilities


AI platform DeepSeek has suffered a significant data breach, raising alarm among security experts and analysts over the safety of centralised AI models. The incident illustrates the vulnerabilities inherent in centralised systems, which act as single points of failure and thereby expose sensitive user data to unauthorised access.

Dr Ben Goertzel, a noted computer scientist and AI expert, has weighed in on the matter, highlighting the benefits of decentralised AI systems. According to Dr Goertzel, while decentralised databases do not resolve all of AI's challenges, they can contribute to a more secure and accountable AI infrastructure. "Decentralized databases don't automatically solve all of AI's problems, but they do push us closer to an AI ecosystem that is more secure, user-controlled, and censorship-resistant," he stated. Such a shift could mitigate the risk of data breaches by reducing single points of failure and strengthening data ownership and privacy.

As DeepSeek has gained traction as a competitive alternative to OpenAI's ChatGPT, its popularity has attracted cybercriminals looking to exploit the platform's users. Olga Svistunova, Senior Web Content Analyst at cybersecurity firm Kaspersky, has reported several scam attempts, including fake DeepSeek web pages built to harvest users' credentials. According to Svistunova, many users have run into problems registering on DeepSeek's platforms, and malicious actors have seized on this, creating fraudulent pages that mimic the legitimate ones to trick users into revealing personal information.

Svistunova advises users to scrutinise any web address that collects account credentials to confirm it is genuine, to use strong, unique passwords with the aid of a password manager, and to enable two-factor authentication wherever feasible. She also strongly recommends using reliable cybersecurity solutions on all devices to prevent credential theft and mitigate malware threats.
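One of those checks, confirming the exact hostname of a login page before entering credentials, can be automated. The Python sketch below is a minimal, hypothetical illustration: the TRUSTED_DOMAINS allowlist is an assumption for demonstration rather than an authoritative list of DeepSeek properties, and the rule is simply that the full hostname must match an allowlisted domain over HTTPS, so lookalike hosts fail.

from urllib.parse import urlparse

# Hypothetical allowlist for illustration only; confirm DeepSeek's genuine
# domains independently before relying on a list like this.
TRUSTED_DOMAINS = {"deepseek.com", "chat.deepseek.com"}

def is_trusted_login_url(url: str) -> bool:
    """Accept a URL only if it uses HTTPS and its full hostname is on the
    allowlist, so lookalike hosts such as 'chat.deepseek.com.evil.example'
    are rejected."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_DOMAINS

print(is_trusted_login_url("https://chat.deepseek.com/sign_in"))        # True
print(is_trusted_login_url("https://chat.deepseek.com.evil.example/"))  # False: lookalike host
print(is_trusted_login_url("http://chat.deepseek.com/sign_in"))         # False: no TLS

The point of matching the whole hostname, rather than searching for "deepseek" anywhere in the address, is that phishing pages typically embed the brand name inside a domain the attacker controls.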

The DeepSeek situation has also prompted warnings from Positive Technologies about malicious packages impersonating official DeepSeek tools. Threat actors have been distributing infostealers masquerading as Python clients for the AI platform, designed to steal data from unsuspecting developers.
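For developers, one practical precaution is to review a package's published metadata before installing anything that claims to be a DeepSeek client. The minimal sketch below assumes the fake clients are distributed through a public index such as PyPI and uses PyPI's public JSON metadata endpoint; the package name queried is a placeholder, not a vetted or endorsed library.

import json
from urllib.error import HTTPError
from urllib.request import urlopen

def pypi_metadata(package: str) -> dict:
    # PyPI exposes package metadata at https://pypi.org/pypi/<name>/json;
    # reviewing it is a cheap sanity check before running `pip install`.
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)

# "deepseek-client" is a placeholder for whatever package you are about to
# install, not a statement that a package by that name is legitimate.
try:
    info = pypi_metadata("deepseek-client")["info"]
    print("Name:        ", info.get("name"))
    print("Author:      ", info.get("author"))
    print("Homepage:    ", info.get("home_page"))
    print("Project URLs:", info.get("project_urls"))
except HTTPError:
    print("Package not found on PyPI.")

A metadata check of this kind is only a first filter; maintainer history, download figures and the linked source repository still need to be reviewed before trusting a package.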

Gunter Ollmann, CTO at cybersecurity firm Cobalt, believes such attack vectors are proliferating because they work and because they are becoming easier to execute. "I think we'll see this trend continue - both because it's proven to be successful and because it's getting easier to do," Ollmann commented. The rise of AI-assisted malware development has enabled attackers to craft personalised, sophisticated attacks that evade conventional security measures.

This series of incidents surrounding DeepSeek underscores the immediate need for robust cybersecurity practices and renews the debate over centralised versus decentralised AI systems. As research and development continue, building secure, user-focused AI architectures remains paramount to protecting users and maintaining trust in AI technologies.
