SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers

GenAI drives patient data policy breaches in healthcare

Wed, 11th Mar 2026

Regulated healthcare data such as patient records accounts for most data policy breaches linked to cloud and generative AI use in the sector, according to research from Netskope Threat Labs.

The group analysed activity across a subset of Netskope healthcare customers worldwide over 13 months. Regulated data made up 89% of data policy violations tied to generative AI use, compared with 31% across industries.

The findings come as healthcare organisations expand generative AI use in clinical and administrative work. Staff are also experimenting with consumer services that employers do not manage, increasing the risk that sensitive information is entered into prompts or uploaded as supporting documents.

GenAI account use

Personal generative AI accounts remain common in healthcare workplaces. Netskope Threat Labs found that 43% of healthcare workers still use personal generative AI accounts at work. While that share has dropped sharply over the past 13 months, it remains a concern because unmanaged accounts can fall outside normal monitoring and control.

At the same time, organisations have expanded the use of approved tools. The share of healthcare workers using organisation-managed generative AI applications rose from 18% to 67% over the same period. Across industries, the shift was from 26% to 62%.

The report links these patterns to efforts by healthcare security and IT teams to reduce data exposure while meeting employee demand for widely available AI services. Managed applications can sit within corporate identity, policy and logging frameworks, giving organisations more control over what data is shared and where it is stored.

Internal AI tools

Netskope Threat Labs also pointed to growing deployment of internal AI tools in healthcare. These systems often connect to cloud-hosted models through application programming interfaces, creating new traffic patterns for security teams to observe and govern.

API monitoring can show how widely AI services are used, even when an application is deployed internally. Nearly two in three healthcare organisations detected API traffic to OpenAI and AssemblyAI (63% and 62%, respectively). More than a third (36%) detected API traffic to Anthropic.

The figures suggest broad reliance on third-party AI services as organisations integrate AI features into existing workflows. They also underscore the need for access controls and data-handling rules when clinical or operational systems exchange information with external processing services.
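The report does not describe how this API traffic was identified. A minimal sketch of the general approach, matching outbound requests in an egress or proxy log against known AI provider endpoints, might look like the following. The hostnames are the providers' public API endpoints; the log format and function names are assumptions for illustration.

```python
# Sketch: flag outbound requests to known third-party AI API hosts.
# The endpoint hostnames are real provider API hosts; treating the log
# as a list of request URLs is an assumption for this example.
from urllib.parse import urlparse

AI_API_HOSTS = {
    "api.openai.com": "OpenAI",
    "api.assemblyai.com": "AssemblyAI",
    "api.anthropic.com": "Anthropic",
}

def detect_ai_api_traffic(request_urls):
    """Return the set of AI providers whose API hosts appear in the URLs."""
    seen = set()
    for url in request_urls:
        host = urlparse(url.strip()).hostname
        provider = AI_API_HOSTS.get(host or "")
        if provider:
            seen.add(provider)
    return seen

sample_log = [
    "https://api.openai.com/v1/chat/completions",
    "https://intranet.example.local/reports",
    "https://api.anthropic.com/v1/messages",
]
print(sorted(detect_ai_api_traffic(sample_log)))  # ['Anthropic', 'OpenAI']
```

In practice this kind of detection sits in a secure web gateway or firewall rather than a script, but the principle is the same: even an internally deployed AI tool reveals its third-party dependencies through the API hosts it contacts.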

Personal cloud risk

The report also examined the use of personal cloud applications at work, where staff may upload files to accounts outside corporate oversight. Regulated data again dominated policy violations, representing 82% of violations tied to personal cloud applications.

Organisations are responding with technical controls that discourage or block uploads to unmanaged services. Over the past year, 56% of healthcare organisations that deployed such policies blocked users from uploading files to personal Google Drive accounts.

Gmail was the next most commonly blocked service (39% of organisations), followed by OneDrive (30%). The report described these controls as an indicator of how often staff try to move data into common consumer services.
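The report does not publish the policy logic behind these controls. A minimal sketch of how such an upload rule might be expressed, distinguishing managed corporate instances from personal accounts of the same services, is shown below; the app names and policy structure are assumptions for illustration.

```python
# Sketch of an upload-control rule: block file uploads to unmanaged
# consumer instances while allowing the managed corporate tenant.
# App names and the managed/personal distinction are illustrative.

BLOCKED_PERSONAL_APPS = {
    "Google Drive (personal)",
    "Gmail (personal)",
    "OneDrive (personal)",
}

def evaluate_upload(app_name, is_managed_instance):
    """Return 'allow' or 'block' for an attempted file upload."""
    if is_managed_instance:
        # Corporate-managed instances stay inside identity and logging
        # frameworks, so uploads remain permitted.
        return "allow"
    if app_name in BLOCKED_PERSONAL_APPS:
        return "block"
    return "allow"

print(evaluate_upload("Google Drive (personal)", False))  # block
print(evaluate_upload("OneDrive (corporate)", True))      # allow
```

Real deployments typically distinguish instances by tenant identity or login domain rather than by name, but the effect is the one the report describes: data can still flow to sanctioned services while personal accounts are cut off.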

Malware delivery

Threat actors continue to use trusted cloud platforms to distribute malware. Employees may be more likely to click links or download files hosted on familiar services, especially where cloud collaboration tools are standard.

In healthcare, Azure Static Web Apps, GitHub and Microsoft OneDrive were most frequently associated with malware distribution attempts detected in the dataset. In total, 8.2% of organisations detected employees trying to download malware from Azure Static Web Apps. GitHub followed at 8%, and Microsoft OneDrive at 6.3%.

These platforms are widely used for software development, file hosting and content distribution, making them attractive channels for attackers seeking to blend malicious activity into routine web traffic.

Governance focus

Ray Canzanese, director of Netskope Threat Labs, said internal risks require as much attention as external threats in healthcare.

"While building defences against external threats is essential for healthcare organisations that have historically been prime targets for cybercriminals, addressing internal risk is equally important, especially in such a highly-regulated industry and a context of fast-paced cloud and AI adoption. Our report shows that those that operate without security guardrails governing cloud and AI usage are very likely to suffer regulated patient and clinical data leaks, and potentially high regulatory penalties. Deploying company-approved applications that meet employees' demands for convenience and productivity, along with relevant security tools that offer full visibility and control over usage and data movements, should be a high priority for healthcare organisations to strike a balance between modernisation and security," Canzanese said.

The research was based on anonymised usage data from a subset of Netskope customers in the global healthcare sector. The data was collected between December 1, 2024, and December 31, 2025, with prior authorisation.