Identity Is The New Perimeter: Securing AI Agents and Privileged Access

When an industry veteran says “identity is the new perimeter,” they mean more than a slogan — they mean a strategic pivot that should already be reshaping every security program, architecture review, and boardroom risk discussion. In a recent interview reported by IT Brief New Zealand, James Maude, Field Chief Technology Officer at BeyondTrust, laid out a stark case: attackers today often don’t “break in” — they log in. Worse, the rapid adoption of agentic AI and a proliferation of non‑human identities have created a new class of standing privileges operating under the radar, sometimes with global admin‑level access and no operational oversight. Those warnings aren't theoretical: vendors, academics, and incident researchers are now documenting practical attack patterns, product responses, and measurable governance gaps that make Maude’s admonition a top priority for IT and security leaders.

Background / Overview

Identity-first attacks have been on the radar for years, but the threat landscape has changed in speed, scale, and semantics. Traditional perimeter defenses — firewalls, segmented networks, and endpoint protection — assumed an attacker needed to compromise a host to escalate to sensitive assets. Today’s attack economics reward access by credential more than access by compromise: stolen or misconfigured identities grant immediate, legitimate-looking entry to cloud workloads, SaaS apps, and orchestration APIs. BeyondTrust’s own vulnerability research and reporting reinforce that elevation‑of‑privilege remains a dominant vector in modern breaches, and identity-centric misconfigurations are a recurring enabling condition.

At the same time, enterprise adoption of generative AI features and agentic assistants — from vendor-embedded copilots to no‑code automation platforms — has introduced a growing population of non‑human identities. These identities often act like service accounts or bots but are created and managed through convenience-first UX flows that skip established IAM and PAM controls. The result: a stealthy identity layer that multiplies attack paths and undermines control models built for human users. Industry analysis and breach studies show AI adoption has outpaced governance; when AI‑related incidents occur, most affected organizations lacked adequate AI access controls.

Why identity — not network — is the battleground now

The economics of “just log in”

An attacker with the right credentials can authenticate through legitimate channels — SSO, OAuth tokens, or API keys — and inherit access consistent with that identity. This avoids noisy lateral movement and leaves fewer forensic breadcrumbs than malware‑driven campaigns. Effective privileges — the permissions a principal actually holds across systems, not just the permissions it is nominally assigned — are the critical metric. Tools that reveal effective privilege graphs expose how a seemingly low‑privileged account can reach elevated resources via chained entitlements, application integrations, or misconfigured connectors.
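The chained‑entitlement analysis described here can be sketched as a simple graph search over an entitlement graph. In the toy example below, all identity, role, and resource names are hypothetical; real tooling would build the edges from directory, SaaS, and cloud IAM data:

```python
from collections import deque

# Toy entitlement graph: an edge from X to Y means "X can reach or assume Y".
# Every name here is illustrative, not taken from any real product or tenant.
EDGES = {
    "user:alice": ["app:ticket-bot"],             # Alice can invoke an automation
    "app:ticket-bot": ["role:sharepoint-admin"],  # the bot holds an over-scoped token
    "role:sharepoint-admin": ["data:hr-files"],   # the role can read sensitive storage
    "user:bob": [],
}

def escalation_paths(start, target):
    """Return every acyclic path from `start` to `target` via chained entitlements."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in EDGES.get(path[-1], []):
            if nxt in path:          # skip cycles
                continue
            if nxt == target:
                paths.append(path + [nxt])
            else:
                queue.append(path + [nxt])
    return paths

# A nominally low-privileged user reaches HR data through the bot's token.
print(escalation_paths("user:alice", "data:hr-files"))
```

Static policy review of `user:alice` alone would show nothing alarming; only the path analysis surfaces the three‑hop route to sensitive data.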

Siloed teams create blind spots

Operational silos — separate teams for Active Directory, cloud IAM, SaaS administration, and endpoint — are a structural risk. Each team may secure its realm well, but cross‑domain configurations or stale service accounts create lateral pathways that go unnoticed. A locked‑down Domain Admin account is irrelevant if a misconfigured synchronization or a cloud API mapping allows privilege escalation from a standard user. Maude emphasises that the true defense is a holistic identity lens: visibility across humans, machines, workloads, and agents. This requires tooling that maps relationships rather than static inventories.

Agentic AI: convenience that creates standing privileges

What an AI agent identity looks like

Modern platforms provide multi‑step automation: a user asks a Copilot to monitor email, a Salesforce automation creates an “agent” to update records, or a ServiceNow workflow spawns a bot to escalate tickets. These agents are non‑human identities that can:
  • Hold API keys or OAuth tokens
  • Receive delegated access via service principals or app registrations
  • Perform scheduled operations or on‑demand queries
  • Chain multiple systems together (SSO → API → data store)
When the agent is given broad or persistent privileges, it becomes a standing attack surface comparable to a service account — but created outside traditional governance channels. BeyondTrust’s recent product additions (AI Agent Insights and identity graphing) were expressly designed to discover and classify these agents and their effective privileges.
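As a sketch of what bringing such agents under governance might look like, the record below models an agent identity with a few simple risk flags. The field names, scope strings, and thresholds are illustrative assumptions, not any vendor’s schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional

@dataclass
class AgentIdentity:
    """Minimal inventory record for a non-human (agent) identity."""
    name: str
    owner: Optional[str]          # accountable human; None means an orphaned agent
    scopes: List[str]             # delegated permissions the agent holds
    token_issued: datetime
    token_lifetime: timedelta

    def risk_flags(self, broad_scopes=("Mail.ReadWrite", "Sites.FullControl")):
        """Flag conditions the article calls out: no owner, broad scope, standing credential."""
        flags = []
        if self.owner is None:
            flags.append("no-owner")
        if any(s in broad_scopes for s in self.scopes):
            flags.append("broad-scope")
        if self.token_lifetime > timedelta(days=30):
            flags.append("long-lived-credential")
        return flags

# An inbox-monitoring agent created through a convenience flow, never registered:
agent = AgentIdentity(
    name="inbox-monitor",
    owner=None,
    scopes=["Mail.ReadWrite"],
    token_issued=datetime.now(timezone.utc),
    token_lifetime=timedelta(days=365),
)
print(agent.risk_flags())  # ['no-owner', 'broad-scope', 'long-lived-credential']
```

Even this trivial classification separates agents that merely exist from agents that constitute standing attack surface.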

Shadow agents and “I didn’t know it existed”

A common pattern: a business user in a low‑code platform toggles an automation, or a user tells a Copilot “monitor my inbox,” and behind the scenes a new identity is provisioned with tokens or connectors. Management consoles, conditional access policies, and identity governance may never see that entity if it's created in the SaaS layer, particularly when decentralized app marketplaces or third‑party plugins are involved. The result is hundreds or thousands of agent identities that security teams only discover after an incident — by then the blast radius can be large. Maude’s interview cited organisations discovering hundreds to thousands of Microsoft‑based agent identities when scanned for the first time; that audit‑first approach is precisely where defenders should start. That specific operational statistic reported in the interview could not be directly corroborated in BeyondTrust’s public product materials at the time of review and should be treated as an interview claim pending independent validation.

The real‑world proof point: EchoLeak and the limits of model‑centric trust

In mid‑2025 researchers disclosed a critical, zero‑click vulnerability affecting Microsoft 365 Copilot (dubbed “EchoLeak”, CVE‑2025‑32711). The attack demonstrated how a maliciously crafted email or document could inject prompts or content that an agentic AI — operating with broad contextual access — would process and use to exfiltrate internal data, without user interaction. EchoLeak showed two critical things:
  • AI agents that access broad context (mailboxes, SharePoint, Teams) can be manipulated through content poisoning and retrieval‑augmented attacks.
  • Traditional controls (EDR, AV, macro blocking) do not address the model‑level retrieval and reasoning steps where leakage occurs.
Microsoft patched the flaw at the service layer after responsible disclosure, and researchers and incident responders documented the vulnerability class as an “LLM scope violation” — a category defenders must expressly design against. EchoLeak is a concrete example of how an agentic identity, when given wide reach inside the environment, becomes a near‑instant exfiltration vector.

Where vendors and tools are responding

BeyondTrust and other vendors are building features to address agentic identity risks head‑on. BeyondTrust announced AI Agent Insights and enhancements to its Identity Security Insights platform to discover, classify, and risk‑score AI agents and to orchestrate safer agent behaviour (for example, enforcing just‑in‑time API access and rotating credentials automatically). These capabilities are intended to bring agents under the same governance posture that controls privileged human and machine accounts. BeyondTrust’s Identity Security Insights also emphasizes a “True Privilege” graph that maps effective privileges across hybrid estates — the kind of analysis defenders need to find escalation chains that cross AD, SaaS, and cloud IAM. But product capability alone isn’t enough: operational integration, process change, and governance must follow.

Practical playbook: how to reduce identity and AI‑agent risk (prescriptive)

Security leaders need a pragmatic sequence that balances speed, impact, and operational friction. The following roadmap is ordered so each step builds the visibility and controls needed for the next.
  • Rapid discovery and risk assessment (0–30 days)
      • Run a full identity inventory: human users, service accounts, workload identities, and agent inventories across SaaS, cloud, and on‑prem directories.
      • Map the effective privileges for each identity; focus on where an identity can reach rather than what it was assigned.
      • Prioritize high‑impact paths for remediation (domain admin, API keys with broad scope, connectors into HR/payroll/finance systems).
  • Containment and least‑privilege enforcement (30–90 days)
      • Apply least privilege and just‑in‑time (JIT) access where feasible. Convert long‑lived keys to short‑lived tokens or ephemeral credentials.
      • Enforce conditional access policies and MFA for privileged operations; extend these policies to non‑human identities where possible.
      • Rotate and vault secrets immediately; require approval workflows for new agent provisioning.
  • Agent lifecycle governance (90–180 days)
      • Treat agents as first‑class identities: require registration, approved scopes, documented purpose, and scheduled review/expiry.
      • Implement automated monitoring for anomalous agent behaviour (sudden large data pulls, cross‑tenant API calls, unusual schedule changes).
      • Integrate DLP for model outputs and enforce output filtering before any agent‑generated content leaves controlled channels.
  • Continuous adversarial testing and human‑in‑the‑loop (ongoing)
      • Red‑team your agentic workflows: prompt‑injection tests, RAG abuse scenarios, and simulated supply‑chain manipulations.
      • Require human sign‑off for critical actions (break‑glass exceptions); maintain an auditable approval trail for autonomy escalation.
      • Conduct regular tabletop exercises to test the detection and revocation playbooks for agent compromise.
  • Governance, policy, and cultural change (parallel)
      • Define a clear AI usage policy (sanctioned tools, data permitted for model use, vendor attestation requirements).
      • Train staff on “shadow AI” risks and prompt hygiene; make it easy for employees to request sanctioned solutions.
      • Assign board‑level accountability for AI and identity risk — these are business risks, not just technical ones.
This playbook reflects Zero Trust principles adapted for agentic AI and identity complexity; practical adoption requires cross‑functional coordination between security, IAM, cloud ops, and business units.
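The JIT pattern from the containment phase can be sketched as follows. This is a minimal illustration assuming a pre‑approved request registry and an in‑memory audit trail; the identity names, scope strings, and 15‑minute TTL are all illustrative:

```python
import secrets
import time

# Hypothetical JIT elevation sketch: privileged access is granted only as a
# short-lived, single-scope token, only for pre-approved (identity, scope)
# pairs, and every decision lands in an audit trail.
AUDIT_LOG = []
APPROVED_REQUESTS = {("agent:payroll-sync", "scope:payroll.read")}

def request_jit_token(identity, scope, ttl_seconds=900):
    if (identity, scope) not in APPROVED_REQUESTS:
        AUDIT_LOG.append(("DENIED", identity, scope, time.time()))
        raise PermissionError(f"{identity} has no approval for {scope}")
    token = {
        "subject": identity,
        "scope": scope,                            # a single scope, never a wildcard
        "expires_at": time.time() + ttl_seconds,   # ephemeral by default
        "value": secrets.token_urlsafe(32),
    }
    AUDIT_LOG.append(("GRANTED", identity, scope, time.time()))
    return token

def token_valid(token):
    return time.time() < token["expires_at"]

tok = request_jit_token("agent:payroll-sync", "scope:payroll.read")
print(token_valid(tok))  # True while inside the 15-minute window
```

The point of the sketch is the shape, not the mechanics: approval precedes issuance, privilege expires by default, and denial is as auditable as grant.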

Technical controls that deserve immediate attention

  • Identity graphing and path‑analysis: Tools that compute effective privilege graphs are essential; they reveal indirect escalation paths that policy reviews miss.
  • Short‑lived credentials for agents: Replace static API keys with scoped OAuth tokens or workload identities that expire and require renewal.
  • Agent discovery and classification: Regular scanning of SaaS connectors, app registrations, and platform integrations to find unapproved agents.
  • Just‑in‑time (JIT) elevation and Zero Standing Privilege (ZSP): For high‑impact operations, require temporary elevation with audit and approval workflows.
  • Model input/output controls: DLP on both prompts submitted to models and outputs returned by agents, plus explicit provenance metadata for model‑generated outputs.
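The model output control in the last bullet might look like the following toy filter, which redacts matches and attaches provenance metadata before agent‑generated content leaves a controlled channel. The patterns are simplistic placeholders, not production DLP rules:

```python
import re

# Illustrative output filter for agent-generated text. Real DLP engines use
# far richer detection; these two regexes are stand-ins for the idea.
PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def filter_agent_output(text, agent_name):
    """Redact sensitive matches and record provenance for the output."""
    findings = [label for label, rx in PATTERNS.items() if rx.search(text)]
    redacted = text
    for rx in PATTERNS.values():
        redacted = rx.sub("[REDACTED]", redacted)
    return {
        "content": redacted,
        "provenance": {"generated_by": agent_name, "dlp_findings": findings},
    }

out = filter_agent_output("Reach me at 123-45-6789", "inbox-monitor")
print(out["content"])                      # Reach me at [REDACTED]
print(out["provenance"]["dlp_findings"])   # ['ssn']
```

The provenance field matters as much as the redaction: downstream systems can then distinguish model‑generated content and trace it back to the originating agent.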

Governance and cultural changes: the hard but necessary work

Technology can find agents and block tokens, but without policy and accountability, convenience will win. A few organizational shifts make a disproportionate difference:
  • Make AI and identity policy board‑level: AI + identity risk can produce legal, regulatory, and reputational damage; they belong in enterprise risk discussions.
  • Give every agent an owner: each agent needs a named individual responsible for access justification, review cadence, and incident‑response contact information.
  • Encourage sanctioned alternatives: Provide easy, approved AI integrations so employees are not tempted to use shadow AI services. Shadow AI has been linked to materially higher breach costs and more frequent data exposure.

What to watch for — risk signals and red flags

  • Rapid creation of new app registrations or service principals with broad Graph API scopes.
  • Spike in OAuth consent approvals or new SSO connectors added without change tickets.
  • Unexplained outbound data transfers from copilots or scheduled agent tasks.
  • Reuse of long‑lived API keys across multiple systems and teams.
  • Evidence in telemetry of retrieval‑augmented generation (RAG) activity that pulls from sensitive repositories when user queries are unrelated.
These signals should trigger immediate investigation and, where necessary, rapid key rotation and access revocation.

Tradeoffs, limits, and realistic expectations

  • Zero Trust and short‑lived credentials reduce blast radius but don’t eliminate the need for human oversight. AI models are probabilistic and can be manipulated; controls reduce risk, they do not make AI infallible.
  • Overly strict controls risk pushing users to shadow AI — unsanctioned tools that are far harder to detect and secure. Balancing convenience and security requires offering usable, governed alternatives and clear, enforced policies.
  • Vendor product features (like BeyondTrust’s agent insights) are valuable, but they solve visibility and orchestration problems only when operationalized within processes and team responsibilities. Tooling plus governance equals resilience, not tooling alone.

Conclusion — identity is the new perimeter; treat agents like identities

The shift to identity as the primary defensive boundary is no longer an architectural preference — it’s an operational imperative. Agentic AI has accelerated productivity and automation, but without explicit governance those same agents become high‑value attack platforms. The right response is not to ban AI; it’s to treat AI agents as identities — discover them, classify and score their risk, enforce least privilege and JIT access, and integrate their lifecycle into IAM and PAM processes. That is the pragmatic path to preserving value while reducing exposure.
BeyondTrust’s announcements and product updates reflect one vendor’s attempt to align identity security tools to this reality, while independent research and incident disclosures (including zero‑click exploits like EchoLeak and the major governance gaps identified in IBM’s Cost of a Data Breach research) underscore why organizations must move quickly. Detection and remediation will need new graph‑based analytics, stronger credential hygiene, stricter agent lifecycle controls, and a cultural commitment to treating AI as a governed asset — not a magical productivity shortcut.
Security teams that adopt a data‑driven, identity‑first playbook now will be the ones who can confidently adopt and scale AI tomorrow.
Source: IT Brief New Zealand Exclusive: BeyondTrust CTO warns of AI identity risks
 
