SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers

Most organisations neglect key security controls in rapid AI adoption

Fri, 14th Nov 2025

Organisations are quickly adopting artificial intelligence (AI), but most are failing to address significant risks to their data and systems, according to new industry research. The study, published by Tenable in collaboration with the Cloud Security Alliance, finds that while 89% of firms are running or piloting AI workloads, the vast majority are leaving data unsecured and failing to implement critical security controls.

Rapid uptake

Firms across industries are embracing AI at scale. More than half (55%) are running active AI workloads, while another 34% are in the pilot or experimental phase. Despite this widespread uptake, 34% of these AI adopters have already suffered an AI-related security breach. These incidents typically stem from well-known security gaps, including software vulnerabilities, flaws in AI models, and insider threats.

Common weaknesses

The research suggests that most breaches are not the result of exotic or highly technical AI attacks, but of familiar weaknesses. Exploited software vulnerabilities accounted for 21% of reported incidents, followed by flaws in AI models at 19%, and insider threats at 18%. However, organisational focus appears to be on more speculative, advanced risks such as model manipulation or the use of unauthorised AI systems, highlighting a disconnect between real-world risks and perceived dangers.

Insufficient data protection

Security measures for AI data remain inconsistent. Only 22% of surveyed organisations both classify and encrypt their AI data, leaving 78% without at least one of these foundational protections. Furthermore, only 26% conduct AI-specific security tests, such as red-teaming. Most businesses rely heavily on regulatory compliance, with 51% basing their approach on frameworks like the NIST AI Risk Management Framework or the EU AI Act. However, compliance alone does not appear to equate to real risk reduction, given the frequency of breaches linked to basic security lapses.

Strategic recommendations

The findings indicate that organisations should move beyond a compliance-led mindset and prioritise well-established security controls such as identity governance, misconfiguration monitoring, workload hardening, and access management within AI environments. It is also recommended that companies embed AI-specific exposures into unified risk strategies across hybrid and multi-cloud infrastructures.

"The data shows us that AI breaches are already here and confirms what we've been warning about: most organisations are looking in the wrong direction," said Liat Hayun, VP of Product and Research, Tenable. "The real risks come from familiar exposures - identity, misconfigurations, vulnerabilities - not science-fiction scenarios. Without addressing these fundamentals, AI environments will remain exposed."

The research also highlights the need for unified security platforms capable of extending risk management to AI workloads. These platforms should enable organisations to move from reactive breach response to more proactive risk reduction by providing comprehensive visibility across IT, hybrid, and cloud environments.
