SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers
Cisco open-sources Foundry Security Spec for AI testing

Thu, 14th May 2026
Mark Tarre, News Chief

Cisco has open-sourced Foundry Security Spec, a specification for building agentic security evaluation systems.

The release outlines a framework for security teams using large language models to test software for vulnerabilities, with an emphasis on verifying findings rather than relying on raw model output.

The specification is model-agnostic and infrastructure-neutral, allowing teams to adapt it to different AI models and technical environments. Drawn from Cisco's internal security engineering work, it is intended for use with GitHub's spec-kit workflows.

Cisco is publishing two main elements: the specification itself, which covers eight core agent roles, five extension roles, a finding lifecycle, a coordination layer and about 130 functional requirements; and a constitution containing 11 principles shaped by production failures the company encountered and fixed.

The problem

The move addresses a problem that has emerged as security teams experiment with frontier AI models. Simply pointing a model at a code repository and asking it to find flaws can produce a flood of unverified results, mixing useful leads with false positives and invented issues.

Cisco's argument is that the value lies less in the model itself than in the system around it. Foundry Security Spec defines roles, controls and a process for detection, triage, validation and reporting, so teams can measure coverage and decide when an evaluation is complete.

The framework defines roles including Orchestrator, Indexer, Cartographer, Detector, Triager, Validator, Coverage-Guide and Reporter. Each has a specific purpose, defined inputs and outputs, and a set of functional requirements with supporting rationale.
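Foundry Security Spec is a design document rather than code, but the shape of such a pipeline can be sketched in a few lines. The following Python is an illustrative sketch only, not Cisco's implementation: the class names mirror the roles named above, while the lifecycle states, function signatures, and the string-matching "detector" are invented stand-ins for LLM-backed components.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative lifecycle states; the actual spec defines its own finding lifecycle.
class State(Enum):
    DETECTED = "detected"
    TRIAGED = "triaged"
    VALIDATED = "validated"
    REJECTED = "rejected"
    REPORTED = "reported"

@dataclass
class Finding:
    title: str
    evidence: str            # snippet the detector claims to have observed
    state: State = State.DETECTED

class Detector:
    """Emits candidate findings; stands in for an LLM-backed scanning pass."""
    def run(self, code: str) -> list[Finding]:
        out = []
        if "eval(" in code:
            out.append(Finding("dynamic-eval", "eval("))
        return out

class Triager:
    """Drops candidates with no supporting evidence before costly validation."""
    def run(self, f: Finding) -> Finding:
        f.state = State.TRIAGED if f.evidence else State.REJECTED
        return f

class Validator:
    """Re-checks the evidence against the code: the gate against invented issues."""
    def run(self, f: Finding, code: str) -> Finding:
        if f.state is State.TRIAGED and f.evidence in code:
            f.state = State.VALIDATED
        else:
            f.state = State.REJECTED
        return f

class Reporter:
    """Publishes only validated findings."""
    def run(self, findings: list[Finding]) -> list[Finding]:
        published = [f for f in findings if f.state is State.VALIDATED]
        for f in published:
            f.state = State.REPORTED
        return published

class Orchestrator:
    """Coordinates the roles: detect -> triage -> validate -> report."""
    def evaluate(self, code: str) -> list[Finding]:
        findings = Detector().run(code)
        findings = [Triager().run(f) for f in findings]
        findings = [Validator().run(f, code) for f in findings]
        return Reporter().run(findings)

report = Orchestrator().evaluate("result = eval(user_input)")
print([f.title for f in report])  # ['dynamic-eval']
```

The point of the structure, as the spec argues, is that only findings that survive the validation gate ever reach a report, so raw model output never flows straight to users.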

The specification is not a software product or managed service. Users are expected to implement the design in ways that fit their own environment, threat model and governance requirements, while keeping human oversight central to security decisions.

Open design

Cisco chose to publish the design rather than release its internal code. Its in-house implementations are closely tied to company systems, including its LLM gateway, issue-tracking tools and private cloud, which would limit their usefulness outside Cisco.

What the company believes can transfer more broadly is the architecture. That includes the roles required in an agentic evaluation system, how findings move from detection to publication, what counts as completion, and where controls should sit to reduce the risk of poor decisions by AI models.

The framework is also intended to work alongside Project CodeGuard, an earlier Cisco initiative later donated to the Coalition for Secure AI. In this setup, CodeGuard provides rules for the Detector role, while Foundry adds the surrounding process for broader, more structured security evaluation.

Cisco described the relationship between the two as a loop between detection and prevention. In that model, established rules scan a codebase for known classes of issues, while exploratory agents look for new or target-specific weaknesses. When a new issue is confirmed, it can be turned into an updated rule for future scans and, in principle, into secure coding guidance used earlier in the software development process.
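That loop — exploratory findings hardening into reusable rules — can be sketched as follows. This is a hedged illustration of the idea, not code from either project: the rule table, patterns, and function names are all invented for the example.

```python
# Illustrative detection -> prevention loop: a deterministic rule scan runs
# first, an exploratory pass proposes new issues, and confirmed new issues
# are promoted into the rule set so future scans catch them cheaply.

rules = {"dynamic-eval": "eval("}   # established, CodeGuard-style patterns

def rule_scan(code: str) -> set[str]:
    """Deterministic pass: flag known classes of issues."""
    return {name for name, pattern in rules.items() if pattern in code}

def exploratory_scan(code: str) -> dict[str, str]:
    """Stand-in for an agentic pass hunting new, target-specific weaknesses."""
    proposals = {}
    if "pickle.loads(" in code:
        proposals["unsafe-deserialization"] = "pickle.loads("
    return proposals

def evaluate(code: str) -> set[str]:
    confirmed = rule_scan(code)
    for name, pattern in exploratory_scan(code).items():
        if pattern in code:          # validation step: confirm before trusting
            confirmed.add(name)
            rules[name] = pattern    # promote the confirmed issue into a rule
    return confirmed

first = evaluate("data = pickle.loads(blob)")
# After the first run the new issue is a rule, so a plain rule scan now
# catches it in other code without the exploratory pass.
second = rule_scan("obj = pickle.loads(other)")
```

The same promotion step is where, in principle, a confirmed issue could also feed secure coding guidance used earlier in development, closing the loop the article describes.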

That approach could help address one of the main weaknesses in AI-assisted security work: the lack of a reliable signal on what has been covered, what has been verified and what remains unresolved.

It also reflects a wider shift in cybersecurity operations. As AI tools become more capable of identifying vulnerabilities quickly, defenders face pressure to move beyond manual review cycles and ad hoc testing methods. The challenge is not only speed, but confidence in the output and the ability to audit decisions.

Foundry is built around functional requirements and roles rather than a specific model, so it should remain usable even as the underlying AI systems change. Cisco's position is that core functions such as orchestration, detection and validation will still be needed whether teams are using current large language models or later reasoning agents.

Omar Santos, Distinguished Engineer, AI Security Engineering, Cisco, wrote: "In the age of AI, the real game changer is more than the latest LLM, it's how you put it to work. That's why we're open-sourcing the Foundry Security Spec, a battle-tested blueprint for building an agentic security evaluation system. Because the framework is model-agnostic and stack-agnostic, organizations can build a harness that fits their unique environment. In sharing what we've learned, our goal is to help the community of defenders move faster and smarter. It enables organizations to shift from noisy alerts to verifiable security findings that drive impact."

He added: "Our internal implementations are tightly bound to Cisco infrastructure: our LLM gateway, our issue tracker, our private cloud, etc. Open sourcing that code would give defenders something that runs in exactly one environment. It would not transfer. What transfers is the design: which roles you need and why, what each must guarantee, how findings flow from detection to publication, what 'done' means for an evaluation, where the quality gates go, and which shortcuts will hurt you six months in. That design is model agnostic and infrastructure-neutral."