AI-native attacks drive shift to continuous cyber tests
Security executives expect 2026 to mark a shift towards continuous validation of cyber controls, wider use of AI agents on both sides of the attack-defence divide, and a rise in synthetic identities that blur the boundary between humans and machines.
Experts from HackerOne and Delinea say organisations will need to pair human judgement with automated systems at greater scale, as attackers adopt AI-native techniques and enterprises deploy more AI in their own operations.
Several predict that frameworks such as Continuous Threat Exposure Management (CTEM) will move from pilot status into daily practice, as security budgets come under pressure and boards demand measurable risk reduction.
Risk economics
HackerOne chief executive Kara Sprague said security leaders are reassessing spending against demonstrable risk outcomes and will favour controls that show what can actually be exploited.
"As economic headwinds persist, security leaders are no longer asking what to cut; they're asking what delivers measurable risk reduction. In this environment, security can't afford to be static, theoretical, or siloed. It must be continuous, validated, and tied to business impact.
If your budget were halved, which controls would you keep? The answer increasingly points to what delivers real-time insight into what's exploitable, not just what's theoretically vulnerable.
In 2026, the shift toward operationalized exposure management will accelerate. Inspired by frameworks like Continuous Threat Exposure Management (CTEM), security leaders will prioritize ongoing visibility, adversarial validation, and faster remediation," said Kara Sprague, CEO, HackerOne.
Sprague expects this emphasis on validation to sit alongside a change in how organisations build resilience, with less focus on adding tools and more on verified findings and rapid remediation paths.
Agentic security
Sprague said security teams will lean more on agent-based techniques as attackers expand their own use of AI, and as enterprise AI deployments open new classes of vulnerabilities.
"In 2026, resilience won't come from adding more tools. It will come from having verified vulnerabilities, reproducible exploit paths, and clear severity insights, and acting on them quickly. Two forces are pushing this shift.
First, AI is reshaping the threat landscape. Attackers are using AI to accelerate their workflows, automating discovery, chaining exploits, and evading defenses faster than before. At the same time, enterprise adoption of AI systems is exploding, which dramatically expands the attack surface and exposes organizations to new classes of vulnerabilities such as prompt injection and model manipulation.
Second, agentic security is starting to change the game. Defenders now have AI agents that can automatically probe systems, reproduce exploit chains, score impact, and even trigger fixes. Combined with human creativity, this creates a feedback loop that adapts as fast as attackers do.
"And in that world, crowdsourced security becomes even more essential. When human ingenuity pairs with AI-validated findings, organizations get fewer false positives, clearer prioritization, and a faster path from 'something looks suspicious' to 'we know what's exploitable and how to fix it,'" said Sprague.
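The defender-side agents Sprague describes amount to a feedback loop: probe, reproduce, score, then escalate only what is validated. A minimal Python sketch of that loop, with all names and the severity threshold being illustrative assumptions rather than any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate vulnerability surfaced by an automated probe (hypothetical model)."""
    target: str
    description: str
    reproduced: bool = False
    severity: float = 0.0  # 0.0 (informational) .. 10.0 (critical)

def probe(target: str) -> list[Finding]:
    """Stand-in for an AI agent scanning a system; real probing is out of scope here."""
    return [Finding(target, "reflected XSS in search parameter")]

def reproduce(finding: Finding) -> Finding:
    """Replay the exploit path; only verified findings move forward."""
    finding.reproduced = True
    return finding

def score(finding: Finding) -> Finding:
    """Assign a severity so humans prioritise validated issues, not raw noise."""
    finding.severity = 6.1 if finding.reproduced else 0.0
    return finding

def feedback_loop(targets: list[str], threshold: float = 5.0) -> list[Finding]:
    """Probe -> reproduce -> score, surfacing only validated, severe findings."""
    actionable = []
    for target in targets:
        for f in probe(target):
            f = score(reproduce(f))
            if f.reproduced and f.severity >= threshold:
                actionable.append(f)  # would trigger a fix or human review
    return actionable

results = feedback_loop(["app.example.com"])
```

The point of the loop is the filter at the end: everything escalated has already been reproduced and scored, which is what cuts the false positives the quote refers to.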
Bionic hackers
Laurie Mercer, Senior Director of Solutions Engineering at HackerOne, said a blend of security researchers and automated agents is already changing how vulnerabilities are found and validated.
"By 2026, over 100 autonomous hackbots will surge across the digital frontier, discovering what once lay beyond human reach. They're already finding real issues, with our latest Hacker-Powered Security Report revealing that in the last year, over 560 reports submitted by hackbots have been valid. They're brilliant at spotting specific bugs like XSS, but they fail at the complex stuff, such as business logic flaws, privilege escalations and chained exploits. Hackbots now combine AI efficiency with human expertise, ensuring that legitimate vulnerabilities aren't lost in the noise while maintaining the nuanced judgment that security decisions require. Leaders in the space embrace AI for scale and speed, but always within a framework that values transparency, responsibility, and human expertise.
Sixty-six percent of researchers already see these machines as allies, amplifying creativity and productivity. It is becoming increasingly clear that the future isn't AI versus humans; it's AI plus humans, and we will see the rapid rise of bionic hackers across all organisations. That means automation for coverage, but people for creativity. In 2026, 4,000 security vulnerabilities will be discovered or validated using AI-assisted or autonomous tools, representing around 5% of total findings on major vulnerability platforms," said Laurie Mercer, Senior Director of Solutions Engineering, HackerOne.
Offensive security
HackerOne Chief Product Officer Nidhi Aggarwal said organisations adopting new technologies will lean more heavily on offensive security approaches supported by CTEM.
"In 2026, offensive security will be essential for enabling confident adoption of emerging technologies. Teams will move from reacting to alerts to transforming programs with continuous threat exposure management (CTEM). CTEM is a framework that shifts security from a point-in-time exercise to a dynamic process that adapts as new threats, technologies, and business priorities emerge. By integrating human oversight, it ensures that context, judgment, and accountability remain at the heart of every decision.
The shift is already visible. According to HackerOne's 2025 Hacker-Powered Security Report, program testing for AI grew 270% last year, valid AI vulnerabilities rose 210%, and prompt injection attacks jumped 540%. Yet 97% of AI-related incidents stemmed from basic access control failures. CTEM helps reverse that trend by reducing the reliance on reactive fixes and instead creating a continuous loop of discovery, prioritization, and remediation.
The next leap in innovation will come from deepening that collaboration between human skill and AI's scale. Nearly 70% of security researchers now use AI in their workflows, and more than half are expanding their expertise in AI- and machine-learning-based security. That mix of automation and judgment is how trustworthy, self-testing systems will take shape. 2026 will be the year we stop chasing every vulnerability and start continuously reducing real risk," said Nidhi Aggarwal, Chief Product Officer, HackerOne.
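The CTEM loop Aggarwal describes replaces point-in-time triage with continuous discovery, prioritization, and remediation, ranking exposures by what is actually exploitable rather than by theoretical severity alone. A toy Python sketch of that prioritization step; the weights and fields are illustrative assumptions, not a published CTEM formula:

```python
# Hypothetical exposure records: the CTEM idea is to rank by validated
# exploitability and business impact, with raw CVSS only breaking ties.
exposures = [
    {"id": "EXP-1", "cvss": 9.8, "exploit_validated": False, "asset_criticality": 0.3},
    {"id": "EXP-2", "cvss": 6.5, "exploit_validated": True,  "asset_criticality": 0.9},
    {"id": "EXP-3", "cvss": 7.2, "exploit_validated": True,  "asset_criticality": 0.4},
]

def ctem_priority(exposure: dict) -> float:
    """Score an exposure: validated exploitability dominates the ranking.

    The 0.6 / 0.3 / 0.1 weights are assumptions for illustration only.
    """
    validated = 1.0 if exposure["exploit_validated"] else 0.0
    return (0.6 * validated
            + 0.3 * exposure["asset_criticality"]
            + 0.1 * (exposure["cvss"] / 10.0))

# The remediation queue is re-sorted every time discovery or validation
# produces new evidence, which is what makes the loop continuous.
remediation_queue = sorted(exposures, key=ctem_priority, reverse=True)
```

Note how the CVSS 9.8 finding drops to the bottom of the queue: without a validated exploit path it scores below two lower-severity but proven, business-critical exposures, which mirrors the "stop chasing every vulnerability" point in the quote.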
AI-native attacks
Gal Diskin, Vice President of Identity Threat and Research at Delinea, said attackers are moving from AI-assisted to AI-native campaigns that alter the tempo and shape of incidents.
"AI-Native Attacks Outpace Human Detection
As artificial intelligence becomes fully embedded in offensive cyber operations, 2026 will mark the first year where AI-generated attacks consistently outpace human detection and response. Threat actors are no longer merely using AI to assist their campaigns - they are designing AI-native attacks: dynamic, self-learning exploit chains that evolve in real time, adjust to defensive behavior, and execute at machine speed.
The resulting "speed gap" between human defenders and AI-driven adversaries will fundamentally reshape cybersecurity operations. Organisations will increasingly rely on defensive AI not just for analytics, but for active countermeasure automation and continuous identity validation.
Key manifestations of AI-native attacks include:
Adaptive Exploit Chains: AI systems autonomously re-sequence payloads and modify indicators of compromise (IOCs) based on real-time feedback from defenses.
Generative Malware: Attack models create polymorphic code variants in seconds, defeating signature-based detection and delaying attribution.
Identity Evasion through AI Personas: Adversarial AI impersonates trusted users or internal service accounts, blending into legitimate behavior patterns.
Accelerated Kill Chain: Breach timelines that once took hours are compressed into minutes, reducing the window for containment.
By late 2026, security teams will measure success not in 'mean time to detect' but in 'mean time to algorithmic response.' Human oversight will remain critical, but human speed alone will no longer be sufficient.
Synthetic Identities Blur Human-Machine Boundaries
The rise of generative AI is eroding the once-clear distinction between real and fabricated identities. In 2026, synthetic identities - digital personas combining authentic personal data with AI-generated attributes - will emerge as a dominant form of identity abuse in both cybercrime and state-sponsored espionage.
What began as a financial fraud problem is now evolving into a multidomain threat. Attackers are creating AI-generated employees, suppliers, or even 'partners', complete with social media presence, HR documentation, and verifiable credentials, to infiltrate organizations and supply chains. These identities can pass many forms of traditional verification, allowing adversaries to gain access, establish trust, and move laterally before detection.
Key developments to watch include:
AI-Crafted Personas: Synthetic employees and contractors used to infiltrate enterprises and obtain legitimate access credentials.
Blended Identity Fraud: Merging stolen PII with AI-generated data to bypass KYC, AML, and background verification systems.
Espionage via Synthetic Influence: AI agents posing as journalists, recruiters, or researchers to harvest information from real targets.
Deepfake-Assisted Validation: Combining synthetic identities with realistic voice or video to defeat visual and biometric checks.
The implications extend beyond authentication - they challenge the very notion of digital trust. By 2026, verifying 'who' is on the other side of a transaction will require cryptographic assurance and continuous behavioral validation, not just credentials or tokens," said Gal Diskin, VP of Identity Threat and Research, Delinea.
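Diskin's closing point is that a credential check alone no longer proves who is on the other side; cryptographic assurance and behavioral validation have to pass together. A minimal Python sketch of that combined decision, using stdlib HMAC for the cryptographic leg and a deliberately toy attribute-match score for the behavioral leg (key handling, thresholds, and session attributes are all illustrative assumptions):

```python
import hashlib
import hmac

SECRET = b"shared-signing-key"  # in practice this would live in an HSM/KMS

def sign(claims: str) -> str:
    """Cryptographic assurance: the identity assertion is signed, not just stated."""
    return hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()

def signature_valid(claims: str, signature: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(claims), signature)

def behaviour_score(observed: dict, baseline: dict) -> float:
    """Toy behavioral check: fraction of session attributes matching the baseline."""
    return sum(observed.get(k) == v for k, v in baseline.items()) / len(baseline)

def trust_decision(claims: str, signature: str, observed: dict,
                   baseline: dict, min_behaviour: float = 0.7) -> bool:
    """Both legs must pass: a valid credential AND plausible behaviour."""
    return (signature_valid(claims, signature)
            and behaviour_score(observed, baseline) >= min_behaviour)

baseline = {"geo": "UK", "device": "laptop-42", "hours": "office"}
claims = "user=alice;role=engineer"
token = sign(claims)

# A synthetic identity replaying a stolen, validly signed token still fails
# the behavioural leg of the check.
ok = trust_decision(claims, token,
                    {"geo": "UK", "device": "laptop-42", "hours": "office"}, baseline)
suspicious = trust_decision(claims, token,
                            {"geo": "??", "device": "vm-headless", "hours": "03:00"}, baseline)
```

The second call illustrates the article's synthetic-identity scenario: the token is cryptographically valid, so a credential-only check would admit the session, but continuous behavioral validation rejects it.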