Security teams want AI tools under human oversight
Cyware's latest survey found that 77% of security professionals want AI tools used in security operations to remain under human oversight. It also found that 88% of organisations are implementing guardrails for AI security tools or already have them in place.
Conducted on site at RSA Conference 2026, the poll drew responses from more than 100 cybersecurity professionals across enterprises, government agencies and service providers. It examined how organisations are using AI and threat intelligence in security operations.
The findings suggest AI is becoming more embedded in day-to-day security work, even as teams remain focused on controls, governance and workflow design. Overall, 78% of respondents said AI has improved threat intelligence operations to some degree.
Threat intelligence sharing also featured strongly in the results, with 79% of respondents describing it as critical or very important, suggesting organisations increasingly want intelligence connected more closely with detection and response.
Automation Progress
The survey points to gains in several operational areas compared with Cyware's 2025 findings. The share of organisations reporting effective automation between cyber threat intelligence and security operations tools rose to 26% from 13%.
Real-time sharing of threat intelligence across security operations, incident response and vulnerability management teams increased to 32% from 17%. The results suggest more organisations are trying to reduce delays between identifying threats and acting on them.
Participation in formal threat-sharing networks also appears to be expanding. Among respondents, 35% are already part of such networks and another 21% are actively planning to join one, bringing the combined figure to 56%.
Governance Focus
One of the clearest themes in the data is a preference for supervised AI over fully autonomous systems. While Cyware described this as demand for controlled agentic AI, the figures show that security teams still want analysts closely involved in decisions and workflow execution.
That concern is reflected in the governance data. The release highlights that 32% of respondents have already established clearly defined governance or guardrails for AI security tools, while the broader survey finding shows that 88% are either actively implementing such measures or already have them in place.
The gap between those two figures suggests many organisations are still in transition: AI may be gaining acceptance in security operations, but formal policies and operational controls are still catching up.
Industry Shift
The survey comes amid a broader push across the cybersecurity sector to apply generative and agentic AI to tasks such as triage, investigation and response. Vendors increasingly argue that AI can help analysts manage the volume of alerts, indicators and intelligence feeds flowing into modern security teams.
Even so, the results suggest buyers are not looking for unchecked automation. Instead, they appear to favour systems that support threat intelligence workflows while preserving visibility into how actions are taken and who remains responsible for them.
Cyware linked the findings to its own product strategy, which centres on embedding AI into threat intelligence workflows. Sachin Jade, chief product officer at Cyware, said the results show why security teams are focusing on usage rules and control frameworks as AI becomes more common in operational settings.
"AI is solidifying its role as an essential part of everyday security operations, driving organizations to prioritize the definition of usage and control frameworks," said Jade.
He added that the company had introduced an approach designed to place AI directly into threat intelligence processes while maintaining analyst oversight.
"At RSAC, we introduced our Agentic Fabric approach to meet this exact need by embedding AI directly into threat intelligence workflows, ensuring both powerful automation and the critical need for visibility, control, and analyst oversight are fully maintained," he said.
For the wider market, the figures offer a snapshot of an industry moving beyond experimentation but not yet fully standardised in how AI should be governed within security operations. The rise in automation, collaboration and network participation points to growing operational maturity, while the demand for oversight suggests trust remains tied to human review.