Proofpoint unveils AI intent security for enterprises
Proofpoint has launched Proofpoint AI Security, which it positions as an intent-based security approach for organisations using autonomous AI agents alongside employees.
Proofpoint said the offering is designed to address a growing gap in enterprise controls as AI agents take actions across email, browsers, endpoints and internal systems. It framed the problem as one of intent: existing tools can track identities, traffic and permissions, but do not assess whether an agent's behaviour matches what a user asked it to do.
The product is built on what Proofpoint calls an Agent Integrity Framework. It also introduced a five-phase maturity model, described as a roadmap for governance from early discovery through to runtime enforcement.
The announcement follows Proofpoint's acquisition of Acuvity. Proofpoint cited Acuvity research showing that 70% of organisations lack optimised AI governance, and that 50% expect AI-related data loss within 12 months.
Autonomous agents have moved quickly from experiments to operational use in areas such as web browsing, system access, email and workflow orchestration. In developer settings, organisations are also connecting coding assistants, plugins and tools through emerging integration patterns. Proofpoint highlighted risks including prompt injection and agentic privilege escalation, in which an agent takes actions outside its approved scope.
Another concern is the speed and chaining of actions: a single request could trigger dozens of automated steps across systems, often without human oversight.
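The chaining risk described above can be made concrete with a toy example. The sketch below is illustrative only and does not reflect Proofpoint's implementation: it records every automated step an agent takes under a single request ID, so a request that fans out into many actions still leaves a reviewable trail. All names (`record_step`, `audit_log`) are hypothetical.

```python
# Illustrative sketch only (not Proofpoint's product): keeping a per-request
# audit trail so chained agent actions remain reviewable after the fact.
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_step(request_id: str, tool: str, detail: str) -> None:
    """Append one agent action to the audit trail for a given request."""
    audit_log.append({
        "request_id": request_id,
        "tool": tool,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    })

# A single user request ("r1") can trigger several chained steps.
for tool in ("email.read", "crm.lookup", "email.draft"):
    record_step("r1", tool, "automated follow-up workflow")
```

Even this minimal trail lets a reviewer ask the key oversight question: did one request really need all of these steps?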
Intent checks
Proofpoint said its approach centres on intent-based detection models across AI interactions. Traditional security products, it argued, lack visibility into the semantic content of AI prompts and responses, where policy breaches and risky actions may arise.
Proofpoint said the product continuously evaluates whether behaviour initiated by a user or an agent aligns with the original request and defined policies. It analyses semantic context and flags misaligned or high-risk actions in real time, including activity that could lead to non-compliant communications or data loss.
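To illustrate the general idea of intent-based checking (this is a simplified sketch under assumed names, not Proofpoint's detection model, which the company says works on semantic context rather than static allow-lists), an agent action can be compared against the tool scope implied by the user's original request:

```python
# Illustrative sketch only: a toy intent-alignment check. The intents and
# tool names below are hypothetical; a real system would analyse semantic
# context rather than rely on a static mapping.
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str    # e.g. "email.read", "email.send"
    target: str  # resource the action touches

# Hypothetical policy: which tools each stated intent authorises.
INTENT_SCOPE = {
    "summarise_inbox": {"email.read"},
    "draft_reply": {"email.read", "email.draft"},
}

def is_aligned(intent: str, action: AgentAction) -> bool:
    """Flag actions that fall outside the scope implied by the intent."""
    return action.tool in INTENT_SCOPE.get(intent, set())
```

The point of the example: a "summarise my inbox" request should never result in an outbound send, and an intent check catches that mismatch even when the agent's identity and permissions are otherwise valid.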
"AI is now embedded in how work gets done, and security must evolve with it," said Sumit Dhawan, CEO of Proofpoint.
"Humans and AI agents share similar risks: both can be manipulated and both can take actions that diverge from their intended purpose, yet traditional security was never designed to validate intent. Proofpoint is uniquely positioned as a unified cybersecurity platform built to protect people, defend data, and govern AI agents together, providing continuous, intent-based verification that behaviour aligns with policy and intent in the agentic workspace," said Dhawan.
Multiple surfaces
Proofpoint said the product works across endpoints, browser extensions and MCP (Model Context Protocol) connections, areas where people and agents use AI tools and where security teams need visibility and control.
Proofpoint said organisations can discover AI tools and services used by staff and agents, including OpenClaw, Ollama, ChatGPT and MCP servers. It can also observe prompts, responses and data flows during AI tool usage.
Proofpoint also outlined policy controls, including access controls and "guardrails", and said the system can inspect and enforce policies during live AI interactions.
The MCP reference points to a growing set of connections between AI assistants and external tools and services. Security teams have raised concerns that these connections may expand the range of actions an agent can take, particularly when it has access to credentials, internal data sources or operational systems.
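One common mitigation for the scope concern above is a gate in front of tool calls, so sensitive connectors are denied unless a session was explicitly granted that scope. The following is a minimal sketch under assumed names (`SENSITIVE_TOOLS`, `gate_tool_call` are hypothetical), not a description of how Proofpoint or any MCP server enforces policy:

```python
# Illustrative sketch only: a hypothetical allow-list gate in front of
# MCP-style tool calls. Sensitive tools are blocked unless the session
# was explicitly granted that scope.
SENSITIVE_TOOLS = {"secrets.read", "db.write", "payments.execute"}

def gate_tool_call(tool: str, granted_scopes: set[str]) -> bool:
    """Permit a tool call only if it is non-sensitive or explicitly granted."""
    if tool in SENSITIVE_TOOLS and tool not in granted_scopes:
        return False  # block: the agent would exceed its approved scope
    return True
```

The design choice here is deny-by-default for the sensitive set: an agent holding broad credentials still cannot reach, say, a secrets store unless that scope was deliberately granted for the session.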
Framework rollout
Alongside the product, Proofpoint introduced the Agent Integrity Framework, which it described as a guide for defining integrity for AI agents operating inside enterprise environments.
Proofpoint defined agent integrity as assurance that an AI agent operates within its intended purpose, authorised permissions and expected behaviour across interactions, tool calls and data access. It listed five pillars: Intent Alignment, Identity and Attribution, Behavioural Consistency, Auditability, and Operational Transparency.
Proofpoint said the maturity model gives Chief Information Security Officers a phased route for operationalising governance, from discovery to runtime enforcement, without requiring an overhaul of existing security architecture.
"Humans are expected to operate with integrity when using business systems, and AI agents must be held to the same standard," said Ryan Kalember, Executive Vice President of Cybersecurity Strategy at Proofpoint.
"Agent Integrity means ensuring that AI agents act within the boundaries of their intended purpose, authorised permissions, and expected behaviour across every interaction, tool call, and data access," Kalember said.
"With Proofpoint AI Security and the Agent Integrity Framework, we can provide a clear blueprint to help enterprises comprehensively address the full spectrum of risks that emerge when AI agents operate autonomously across enterprise systems," he said.