Veza’s new AI Agent Security product codifies a practical — and urgently needed — approach to securing agentic AI by treating AI agents as first-class identities, offering unified discovery, access governance, and least-privilege controls across major cloud and model platforms.
Background
Agentic AI — autonomous software that can act, reason, and take multi-step actions on behalf of people or systems — moved from research curiosity to enterprise traction over the past two years. As organizations deploy agents to automate workflows, fetch and act on data, and orchestrate services, security teams have been left scrambling to manage a rapidly multiplying population of non-human identities and the complex access relationships those identities create. Industry research and analyst commentary have warned that a substantial share of early agentic projects will fail unless identity and governance problems are solved. Veza, the identity-security vendor built around a relationship-centric Access Graph, announced AI Agent Security as a purpose-built product to address those gaps. The company frames the solution as the first offering aimed at enterprise-scale AI Security Posture Management (AI SPM) that unifies discovery, governance, and enforcement for human, machine, and AI agent identities. The product was unveiled at the Gartner Identity & Access Management Summit and is presented as available to customers now.
What Veza is selling: product overview
Veza’s AI Agent Security combines several distinct capabilities into a single control plane built on the company’s Access Graph. The core capabilities the vendor highlights are:
- Agent discovery and classification — find AI agents and MCP (Model Context Protocol) servers across Copilot Studio, Salesforce Agentforce, AWS Bedrock, Google Vertex AI, and public endpoints such as GitHub.
- Agent-to-human mapping — assign ownership and map which humans can deploy, manage, or act through a given agent, enabling accountability.
- Blast-radius and access visualization — show what sensitive data, services, and systems an agent can access, and the paths that connect agents, humans, and resources via the Access Graph.
- Continuous posture assessment — track agent configuration details (models in use, expired secrets, elevated privileges) and surface risky agents or misconfigurations for remediation.
- Governance enforcement — run least-privilege policies, access reviews, and integration with identity governance workflows (IGA) to automate remediation and approvals.
Why the market needs AI Agent Security now
The new threat surface: agents + data + actions
AI agents increase risk in three structural ways:
- Agents read and act on data at machine speed, creating more opportunities for data leakage or misuse. Prompt injection and other manipulations can convert seemingly benign inputs (emails, form fields, calendar invites) into exfiltration vectors.
- Agents are frequently created and run by application teams, not by centralized identity teams; this causes agent sprawl and shadow AI where owners, permissions, and lifecycles are unknown.
- Agents often use service credentials, API keys, or model endpoints that, if misconfigured or overprivileged, amplify the blast radius of a single compromise into broad data/system access.
Standards and community guidance are converging
Open, community-driven guidance for GenAI and agentic security has matured rapidly. The OWASP GenAI Security Project has published LLMSecOps frameworks, threat taxonomies, red‑teaming guidance, and operational checklists that explicitly call out the need to treat agents as managed identities and to instrument monitoring and access audits. Enterprises now have a reference baseline for what secure agent operations should look like. Veza’s messaging aligns directly with these community recommendations by focusing on discovery, monitoring, and governance.
Technical analysis: what Veza brings to the table
Access Graph as a relationship-first control plane
Veza’s distinguishing technical claim is its Access Graph — a graph data model that represents identities, resources, entitlements, and the edges between them. Graph models excel at mapping transitive access (e.g., user → agent → model → data store) and calculating blast radius, which is essential for agentic systems where indirect access paths quickly become the primary attack surface.
- Strength: Graph queries can answer complex “who-can-reach-what” questions quickly, enabling auditors and security teams to see end-to-end exposure.
- Caveat: The effectiveness depends on depth and freshness of integrations — if the graph is missing connectors to specific MCPs, data systems, or identity providers, visibility gaps persist. Integration completeness therefore becomes a critical operational metric.
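The transitive-access idea behind a “who-can-reach-what” query is straightforward to sketch: blast radius is just reachability over access edges. The toy graph below is an assumption for illustration; a real Access Graph would be populated from platform connectors:

```python
from collections import deque

# A toy access graph: an edge means "can reach / can act on".
# Node names are illustrative, not data from any real environment.
edges = {
    "alice":         ["support-agent"],             # user -> agent
    "support-agent": ["gpt-endpoint", "crm-db"],    # agent -> model, data store
    "gpt-endpoint":  [],
    "crm-db":        ["pii-records"],               # data store -> sensitive data
    "pii-records":   [],
}

def blast_radius(start: str) -> set[str]:
    """Everything transitively reachable from `start` via access edges (BFS)."""
    seen: set[str] = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Alice's effective reach includes the PII store, even though no direct
# edge connects her to it -- exactly the indirect path auditors care about.
print(blast_radius("alice"))
```

This is why graph models beat tabular entitlement lists here: the risky relationship (alice → pii-records) exists only as a path, never as a single row.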
Integrations with major MCPs and identity systems
Veza advertises integrations with Microsoft Copilot Studio, Salesforce Agentforce, AWS Bedrock, Google Vertex AI, OpenAI, Azure AI, and major identity and cloud platforms. These integrations are necessary to ingest agent metadata, credentials, model usage, and registered endpoints so Veza’s Access Graph can compute meaningful relationships.
- Strength: Pre-built connectors accelerate deployment and increase the likelihood of accurate discovery.
- Risk: Each integration represents an attacker surface; careful attention is required to secure the connectors themselves (least-privilege service accounts, encrypted transport, and auditable access patterns).
Continuous posture assessment and automation
Detecting expired secrets, model changes, or new agents is only useful if teams can remediate at scale. Veza’s platform ties detection to governance actions (access reviews, policy enforcement, IGA workflows), intending to close the loop between discovery and mitigation.
- Strength: Automating the “last mile” of remediation and integrating with IGA reduces time-to-fix for risky entitlements.
- Risk: Automation must be applied conservatively; overly aggressive revocations can break critical business workflows driven by agents. Orchestration and staged enforcement are operational necessities.
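The staged-enforcement pattern recommended above can be sketched in a few lines: revocations run in a report-only dry-run first, and access actually changes only after the plan has been reviewed. Function names and entitlement strings are illustrative assumptions, not a real IGA API:

```python
# Minimal staged-enforcement sketch: dry-run before enforcing, so an
# over-aggressive revocation cannot break a business workflow unreviewed.
# All names here are illustrative, not any vendor's API.

def plan_revocations(entitlements: list[str], risky: set[str]) -> list[str]:
    """Select the entitlements flagged as risky for removal."""
    return [e for e in entitlements if e in risky]

def enforce(entitlements, revocations, dry_run=True):
    if dry_run:
        # Stage 1: report only -- nothing changes yet.
        return entitlements, [f"WOULD revoke {r}" for r in revocations]
    # Stage 2: actually revoke, after human review of the dry-run report.
    remaining = [e for e in entitlements if e not in revocations]
    return remaining, [f"revoked {r}" for r in revocations]

current = ["read:tickets", "write:billing", "admin:iam"]
plan = plan_revocations(current, risky={"admin:iam"})

kept, log = enforce(current, plan, dry_run=True)
assert kept == current           # dry run leaves access untouched
kept, log = enforce(current, plan, dry_run=False)
print(kept)                      # admin:iam removed only after staged rollout
```

The design point is the two-stage split itself: detection feeds a reviewable plan, and enforcement is a separate, deliberate step.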
Strengths and competitive signals
- Identity-centric approach: Treating agents as identities aligns with Zero Trust and least-privilege principles. The model scales conceptually — every agent gets the same governance lifecycle as a human or machine identity.
- Graph-based visibility: Access graphs are an inherently better fit than tabular lists for representing transitive access flows and computing blast radii.
- Fast time-to-value via integrations: Strong out-of-the-box connectors to major cloud platforms and MCPs lower the integration burden for customers.
- Market timing and analyst tailwinds: Gartner and other industry analysts have highlighted identity and governance as gating factors for agentic AI success, creating demand for solutions that can operationalize these controls.
Risks, limitations, and open questions
- Coverage vs. completeness: Public product messaging lists many integrations, but real-world environments include bespoke model endpoints, on-premises inference stacks, and homegrown agents. The Access Graph is only as good as the telemetry it receives. Organizations must validate connector coverage and plan for custom integrations.
- Operational overhead: Discovering thousands of transient agents and then assigning human owners, creating access reviews, and maintaining policies can be operationally heavy. Automation helps, but it requires mature identity governance processes to avoid false positives and unnecessary remediation churn.
- Runtime controls vs. governance: Veza focuses on discovery, mapping, and governance (policy and entitlement control). Runtime protections — model‑level defenses, LLM firewalls, and input sanitization — remain critical and typically require complementary tooling. LLMSecOps guidance emphasizes layered defenses: policy + runtime monitoring + red teaming. Enterprises will need both governance platforms and runtime defenses.
- Integration security: Each connector to cloud or MCP services requires credentials and privileges; if those are not narrowly scoped, organizations replace one problem (unmanaged agents) with another (overprivileged connectors). Integrations must follow JIT (just-in-time) credentialing, rotation, and least-privilege patterns.
- Vendor consolidation and acquisition risk: Veza’s AI Agent Security launch arrives as the company is being targeted for acquisition by larger platforms that are building AI control towers. Consolidation can accelerate product roadmaps but may also change integration priorities or licensing models. Customers should evaluate vendor roadmaps and acquisition contingencies.
Practical guidance for security teams evaluating AI agent security tools
Adopting an agent security product successfully requires more than a single purchase. Use the following checklist to evaluate vendors and readiness:
- Discovery validation:
- Do vendor connectors find agents in my production, staging, and developer sandbox environments?
- Are public MCPs, private endpoints, and GitHub-hosted MCP servers discovered?
- Ownership and lifecycle:
- Can the platform assign and enforce human ownership for each agent?
- Does it integrate with ticketing/ITSM to attach remediation workflows?
- Granular access modeling:
- Does the product surface transitive access paths (e.g., user ⇒ agent ⇒ model ⇒ DB) and quantify blast radius?
- Automation and remediation:
- Can the system automate access reviews, policy enforcement, and connectors to IGA for approvals and deprovisioning?
- Runtime defense integration:
- Does the vendor integrate or interoperate with runtime protections (model sanitizers, LLM firewalls, API gateways)? If not, how will you layer defenses per OWASP LLMSecOps recommendations?
- Compliance and audit readiness:
- Can the platform produce auditor-ready answers: which agents had access to PCI/PHI/SOX-scoped data, and who authorized that access?
- Security of the control plane:
- What is the vendor’s model for securing connectors, storing secrets, and providing least-privilege access to the graph?
Operational patterns to adopt alongside tooling
- Adopt agent onboarding standards: require registration, purpose, owners, and retention policies before production deployment.
- Enforce least-privilege model templates for common agent roles (data‑reader, ticket-triage, billing‑reporter) with periodic automated access reviews.
- Use segmented runbooks and circuit breakers: agents granted access to sensitive operations should require multi-party approvals or human-in-the-loop checks.
- Schedule routine agent red-team drills and inject adversarial prompts or malformed inputs to verify that identity, access, and runtime protections behave as expected. Follow OWASP’s GenAI red‑teaming guidance to define scenarios.
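The circuit-breaker and human-in-the-loop pattern from the list above reduces to a simple gate: sensitive operations are blocked unless an explicit approval already exists. The action names and function below are illustrative assumptions, not a real product interface:

```python
# Human-in-the-loop circuit breaker: sensitive agent actions require an
# explicit prior approval; everything else proceeds. Illustrative only.

SENSITIVE = {"wire_transfer", "delete_records", "export_pii"}

def run_agent_action(action: str, approvals: set[str]) -> str:
    """Gate sensitive actions on human approval; pass routine ones through."""
    if action in SENSITIVE and action not in approvals:
        return f"BLOCKED: {action} awaiting human approval"
    return f"EXECUTED: {action}"

print(run_agent_action("triage_ticket", approvals=set()))          # routine: runs
print(run_agent_action("export_pii", approvals=set()))             # sensitive: blocked
print(run_agent_action("export_pii", approvals={"export_pii"}))    # approved: runs
```

In a production system the approval set would come from a ticketing or IGA workflow, and multi-party approval would simply require more than one recorded approver before the gate opens.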
Market implications and competitive context
Veza is not the only vendor pitching AI agent security: existing IAM/PAM vendors, cloud providers, and emerging AI security startups are racing to add agents to their identity lexicons. What distinguishes Veza is its identity-first graph and a positioning that tightly couples NHI (non-human identity) governance with agent posture management.
At the same time, acquisition activity and platform consolidation are accelerating. ServiceNow and other larger cloud and security vendors are explicitly targeting identity platforms and agent governance as core components of an “AI Control Tower” strategy. That means customers evaluating purchases should weigh the benefits of best-of-breed identity security against the promise of broader platform consolidation, where identity, workflow, and runtime controls may be offered by a single vendor.
A cautious endorsement: where Veza’s approach helps — and where it won’t be enough
Veza’s AI Agent Security is a timely and pragmatic response to an emerging security gap. It addresses a real operational pain: the need to see agent inventories, map their access, and enforce human accountability across distributed cloud and MCP environments. For organizations that already treat identity as the primary security control plane, the product will likely reduce risk quickly by eliminating blindspots and enabling enforceable least-privilege policies. However, no governance product alone will fully secure agentic AI. Enterprises must combine:
- Identity and entitlement governance (discovery, graph analysis, PoLP enforcement), with
- Runtime model protections (input sanitization, LLM firewalls, adversarial detection), and
- Organizational controls (ownership, change control, incident playbooks, and red-teaming).
Bottom line
AI agent proliferation is rewriting the threat model for enterprise security: agents are non-human identities that can read, reason, and act across systems. Veza’s AI Agent Security product brings a graph‑centric, identity-first solution to that problem, combining discovery, agent-to-human mapping, blast-radius visualization, and governance automation in a single platform. That approach maps well to both Zero Trust and emerging community standards such as OWASP’s LLMSecOps, and it answers several of the specific gating problems analysts say will halt many agentic initiatives if left unresolved. Organizations should evaluate Veza and comparable products on three axes: integration completeness, automation safety, and interoperability with runtime protections. For teams that accept the operational work of onboarding governance, access graph modeling can materially reduce agentic risk; for others, the solution will only be as effective as the organization’s ability to operationalize and maintain the governance lifecycle.
Veza’s product launch is a clear signal that identity security vendors view agentic AI as an identity problem first and foremost — and that practical, enterprise-ready tooling to manage agent identities is now central to any credible AI security strategy.
Source: SiliconANGLE, “Veza introduces AI Agent Security to tackle emerging agentic AI risks”
