SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers

Enterprises face rising risks from generative AI data leaks


Netskope Threat Labs has released a report detailing a significant increase in data interactions with generative AI (genAI) applications by enterprise users, illustrating increased risks of data breaches and insider threats.

The report highlights a substantial rise in the volume of data being sent to genAI applications, which has increased 30-fold over the past year.

Sensitive data such as source code, regulated data, passwords, keys, and intellectual property is among the data being transmitted, posing a heightened risk of security breaches, compliance violations, and intellectual property theft. Moreover, 'shadow AI' has emerged as a noteworthy challenge: 72% of enterprise users who engage with genAI apps do so through personal accounts for work purposes.

The analysis identifies 317 genAI applications in widespread use across enterprises, including ChatGPT, Google Gemini, and GitHub Copilot.

The report suggests that approximately 75% of enterprise users access apps with genAI features, creating potential, albeit unintentional, insider threats.

James Robinson, the Chief Information Security Officer of Netskope, commented on the findings, stating: "Despite earnest efforts by organisations to implement company-managed genAI tools, our research shows that shadow IT has turned into shadow AI, with nearly three-quarters of users still accessing genAI apps through personal accounts. This ongoing trend, when combined with the data in which it is being shared, underscores the need for advanced data security capabilities so that security and risk management teams can regain governance, visibility, and acceptable use over genAI usage within their organisations."

The report underscores organisations' lack of visibility into how data is processed and stored when genAI is used indirectly. The prevailing approach has been to restrict app usage until clear policies are formulated. However, Ray Canzanese, Director of Netskope Threat Labs, urged a more strategic approach: "Our latest data shows genAI is no longer a niche technology; it's everywhere."

"It is becoming increasingly integrated into everything from dedicated apps to backend integrations. This ubiquity presents a growing cybersecurity challenge, demanding organisations adopt a comprehensive approach to risk management or risk having their sensitive data exposed to third parties who may use it to train new AI models, creating opportunities for even more widespread data exposures."

The research also indicates a considerable shift towards local hosting of genAI infrastructure, growing from less than 1% to 54% within a year. While this shift aims to reduce exposure risks from external apps, it introduces new risks associated with data management and security protocols.

Ari Giguere, Vice President of Security and Intelligence Operations at Netskope, elaborated on the evolving security landscape: "AI isn't just reshaping perimeter and platform security—it's rewriting the rules. As attackers craft threats with generative precision, defenses must be equally generative, evolving in real-time to counter the resulting 'innovation inflation.' Effective combat of a creative human adversary will always require a creative human defender, but in an AI-driven battlefield, only AI-fueled security can keep pace."

To mitigate AI-related risks, nearly all organisations are deploying security measures by controlling access to AI tools and managing data sharing protocols. Netskope advises enterprises to frequently update risk frameworks specifically concerning AI or genAI implementations, ensuring adequate data protection. This includes assessing the genAI environment, strengthening app controls, and regularly benchmarking these measures against industry standards and best practices.
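One concrete form such data-sharing controls can take is a pre-send check that scans outbound prompts for obvious secrets before they reach a genAI app. The sketch below is a minimal, hypothetical illustration of that idea; the pattern names and regexes are assumptions for this example, not Netskope's actual controls, and a production system would rely on a full DLP engine rather than a handful of regexes.

```python
import re

# Hypothetical patterns for common secret formats. Real DLP tooling uses
# far broader detection (classifiers, fingerprinting, exact-data matching).
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_send(text: str) -> bool:
    """Permit the outbound prompt only if no sensitive pattern matches."""
    return not scan_prompt(text)
```

In practice a check like this would sit in a forward proxy or browser extension between the user and the genAI app, so that a blocked prompt can be logged for the security team as well as stopped, which is the visibility the report argues organisations currently lack.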

These findings stress the importance for organisations to remain vigilant and proactive in managing the risks associated with genAI, including adapting their existing security measures to accommodate the rapidly evolving nature of AI technologies.
