Agentic AI Security Risks in ServiceNow and Copilot Studio

Fresh disclosures about exploitable AI agents in ServiceNow and Microsoft Copilot Studio make a single uncomfortable fact unavoidable: agentic AI is shipping into production with avoidable security gaps that turn productivity features into attack surfaces. Two independent research teams — AppOmni’s AO Labs and Zenity Labs — disclosed practical, high‑risk attack paths late in 2025 and early 2026 that show how agent‑to‑agent connections, overbroad defaults, and insufficient telemetry enable lateral movement, data exfiltration, and silent persistence without needing classic credential theft. These incidents are not hypothetical exercises; they are reproducible abuse chains that should jolt CISOs into treating agents like first‑class, high‑risk identities.

Background

In October 2025 AppOmni researchers disclosed and publicly documented a critical ServiceNow vulnerability — later named BodySnatcher and tracked as CVE‑2025‑12420 — that allowed an unauthenticated attacker to coerce agentic workflows into performing privileged actions by abusing provider configuration and account‑linking logic. AppOmni’s detailed writeup and subsequent vendor fixes showed that the exploit chain required nothing more than a target email address and weakly‑scoped integration secrets. ServiceNow issued mitigations to cloud tenants and released patched application versions for self‑hosted environments.
Independently, Zenity Labs published research in late December 2025 showing that Microsoft Copilot Studio’s Connected Agents feature — which lets one agent invoke another to reuse tools and knowledge — was enabled by default for new agents. That default, combined with limited visibility into cross‑agent invocations, made it possible for a low‑privilege or untrusted agent to piggyback on a privileged agent (for example, one authorized to send emails or access files), effectively amplifying the attack surface and bypassing conventional detection points.
Taken together, these disclosures expose a repeatable pattern: agentic integrations create new non‑human identity channels and implicit trust relationships, and vendors have sometimes shipped convenience‑first defaults without the governance primitives or telemetry needed to secure them.

Why these incidents matter: agentic attack paths are stealthy and effective

Traditional security models assume a human or a known service calls a resource; audit logs, IAM policies, and multi‑factor authentication center on that model. Agentic AI breaks the assumption in three key ways:
  • Agents can compose other agents and chain tools autonomously, creating indirect access paths that simple allowlists and per‑user MFA do not cover.
  • Many current platforms lack end‑to‑end provenance for agent‑to‑agent calls — who asked which agent to do what — so lateral movement happens with minimal or no visible trail.
  • Default configurations that permit broad cross‑agent connections create large blast radii for a single misconfiguration or malicious agent.
Put concretely: a marketing assistant agent (with access to corporate drafts) could be tricked into asking a privileged email agent to send curated documents to external recipients. No credential theft is required; no human approval is recorded. The resulting damage—data loss, phishing campaigns, domain reputation damage—mirrors a full account takeover but is far harder to detect using old playbooks.

Case study 1 — ServiceNow “BodySnatcher”: mechanics and lessons

What happened (short version)

Researchers at AppOmni announced the BodySnatcher chain, a combination of insecure provider configuration, static secrets, and permissive auto‑linking that allowed an unauthenticated party to co‑opt agent invocations and create or escalate accounts on affected ServiceNow instances. The issue targeted the Now Assist AI Agents and Virtual Agent API components and was fixed after disclosure; cloud tenants received automatic remediation while on‑prem and self‑hosted customers were urged to upgrade specific application versions.

Technical takeaways

  • The exploit chain relied on a platform‑wide shared secret and insecure account‑linking logic that trusted only an email address for identity linkage. That single trust assumption subverted MFA and SSO protections.
  • Mitigations for this category of abuse go beyond rotating provider secrets: you also need enforced MFA for account linking, scoped provider credentials, and removal of globally shared agent UIDs that act as cross‑tenant “master keys.”
  • The vulnerable surface was the Virtual Agent API and the Now Assist application’s use of provider records that were accessible in a broader application scope — a classic configuration and scoping mistake amplified by agentic behavior.
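The account‑linking weakness can be made concrete. Below is a minimal, hypothetical Python sketch — the names, secrets, and data stores are illustrative, not ServiceNow APIs — of linking logic that avoids both failures in the chain: each integration secret is scoped rather than shared platform‑wide, and the code refuses to link on an email address alone, requiring a verified SSO subject claim.

```python
import hmac

# Hypothetical per-integration secrets. The BodySnatcher chain abused a
# platform-wide shared secret; scoping each secret to one integration
# limits the blast radius of any single leak.
INTEGRATION_SECRETS = {"crm-connector": b"secret-a", "hr-connector": b"secret-b"}

# SSO-verified subject IDs on record per local account. Linking must match
# this claim, never just an attacker-supplied email address.
VERIFIED_SUBJECTS = {"alice@example.com": "sso-subject-123"}

def link_account(integration_id: str, secret: bytes,
                 email: str, sso_subject: str) -> bool:
    """Link an inbound invocation to a local account only if the integration
    secret is valid for this scope AND the SSO subject matches the record."""
    expected = INTEGRATION_SECRETS.get(integration_id)
    if expected is None or not hmac.compare_digest(secret, expected):
        return False  # wrong or out-of-scope secret: reject, don't auto-link
    return VERIFIED_SUBJECTS.get(email) == sso_subject
```

With this shape, knowing a target email address and a leaked secret from another integration is no longer enough to coerce a link, which is the layered check the exploit chain lacked.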

Why it’s more than a CVE

BodySnatcher is instructive because it is a design problem as much as an implementation bug. The path to compromise was built from convenience choices (shared static secrets, auto‑linking) and a lack of layered checks. Patching the specific bug is necessary but insufficient: organizations must fix the identity model and governance assumptions that made the exploit possible.

Case study 2 — Copilot Studio’s Connected Agents: default openness and invisible invocations

The feature and the risk

Copilot Studio’s Connected Agents is intended to let builders encapsulate reusable logic (for example, an “email sender” agent) and call it from other agents. That architectural pattern is sound in principle: it reduces duplication and centralizes capabilities. The hazard comes when connecting is allowed by default and when the platform provides no clear, auditable record of which agents invoked another agent and why.
Zenity Labs’ research showed that:
  • The Connected Agents setting was automatically enabled for new agents, widening the window for accidental or malicious connections.
  • Cross‑agent invocations often leave no entries in the target agent’s activity tab, creating blind spots for defenders.
  • Untrusted or semi‑trusted agents can be created in the same environment and then used to trigger actions through a privileged agent (email sending, database queries, file exports) without the usual human‑centric indicators.

Practical impact

  • A malicious or compromised agent can remotely trigger privileged behavior (mass email sends, confidential file retrieval) without leaving the same forensic footprint as a human‑initiated suspicious login.
  • Default openness amplifies insider risk: a careless developer or low‑privilege user can expose a powerful toolset by connecting an agent inadvertently or by assigning overly broad scopes.

Vendor guidance and configuration controls

Microsoft’s administrative guidance emphasizes agent identities (Entra Agent ID) and recommends disabling Connected Agents for any bot that uses unauthenticated tools or sensitive knowledge sources. That guidance is useful but must be operationalized: mere availability of agent IDs is not a silver bullet unless registration, monitoring, and enforcement are mandatory parts of an organization’s agent lifecycle.

The agent‑to‑agent blind spot: why identity alone isn’t enough

Many modern security controls focus on identity — sign‑in protection, RBAC, conditional access, and so on. Agentic ecosystems require the same identity rigor applied to non‑human identities plus additional governance layers:
  • Identity without fine‑grained authorization is insufficient. An Entra Agent ID or service principal gives you a named identity, but you must still implement tightly scoped capabilities and per‑invocation checks.
  • Observability for agents must be EDR‑grade: logs that capture prompts, tool calls, parameters, responses, and the chain of agent invocations — not just “agent X invoked tool Y.”
  • Trust boundaries must be explicit and narrow: cross‑agent calls should require signed, short‑lived invocation tokens and explicit allowlists listing which agent IDs can call which other agent IDs.
If you treat agents as mere automation scripts rather than as networked identities with independent privileges, you will miss attack paths that don’t rely on stolen human credentials.
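The explicit, narrow trust boundary described above can be sketched in a few lines. The Python example below is illustrative only — the agent names, key handling, and token format are assumptions, using an HMAC‑signed token in place of a full JWT/mTLS stack: the target agent verifies the signature, expiry, and an explicit caller‑to‑target allowlist before acting.

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative only: production systems should use per-agent asymmetric keys
# issued by the identity provider, not one shared HMAC key.
SIGNING_KEY = b"demo-key-rotate-me"

# Explicit allowlist: which caller agent IDs may invoke which target IDs.
ALLOWED_CALLS = {
    "marketing-assistant": {"draft-reviewer"},
    "draft-reviewer": {"email-sender"},
}

def mint_invocation_token(caller_id: str, target_id: str, ttl_s: int = 60) -> str:
    """Mint a short-lived signed token authorizing one cross-agent call."""
    claims = {"caller": caller_id, "target": target_id, "exp": time.time() + ttl_s}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_invocation(token: str, my_agent_id: str) -> bool:
    """Target agent checks signature, expiry, addressee, and the allowlist."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time() or claims["target"] != my_agent_id:
        return False
    return claims["target"] in ALLOWED_CALLS.get(claims["caller"], set())
```

Under this scheme, a call from the marketing assistant straight to the email sender fails the allowlist check even with a validly signed token, which is precisely the piggybacking path the Copilot Studio research describes.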

The defensive blueprint: engineering controls every organization should deploy now

Security teams can blunt this class of attacks with a mix of immediate hardening steps and longer‑term engineering controls.

Immediate actions (do these this week)

  • Inventory and register: discover every agent across cloud, SaaS, and on‑prem platforms and register them with your identity provider. Treat each agent like a service account and attach ownership metadata.
  • Patch and verify: apply vendor patches — for example, ensure ServiceNow instances run the fixed Now Assist/Virtual Agent API versions if you use those modules.
  • Disable permissive defaults: turn off Connected Agents in Copilot Studio for any agent that touches sensitive data or uses unauthenticated tools. Make cross‑agent invocation opt‑in.
  • Remove blanket privileges: pull email‑sending, bulk export, and admin capabilities out of general‑purpose agents. Create narrowly scoped, dedicated agents for those tasks.
  • Centralize monitoring: route agent traffic through an API gateway and enable detailed tracing. Alert on mass messages, bulk reads, or unusual cross‑agent calls.
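As a rough illustration of the “centralize monitoring” step, the sketch below — thresholds and agent names are hypothetical — shows a gateway‑side sliding‑window counter that flags a burst of cross‑agent calls, the kind of signal that would surface a mass email send routed through a privileged agent.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds; tune to your environment's normal automation volume.
WINDOW_S = 60
MAX_CALLS_PER_WINDOW = 20

class AgentCallMonitor:
    """Sliding-window counter for cross-agent calls seen at the API gateway."""

    def __init__(self):
        self._calls = defaultdict(deque)

    def record(self, caller: str, target: str, now=None) -> bool:
        """Record one call; return True if this (caller, target) pair just
        exceeded the per-window threshold and should raise an alert."""
        now = time.time() if now is None else now
        window = self._calls[(caller, target)]
        window.append(now)
        # Drop timestamps that have aged out of the window.
        while window and window[0] <= now - WINDOW_S:
            window.popleft()
        return len(window) > MAX_CALLS_PER_WINDOW
```

A real deployment would feed this from gateway traces and combine it with content signals (recipient counts, export sizes), but even a crude rate alert closes part of the blind spot cross‑agent invocations currently enjoy.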

Engineering and operational controls (next 30–90 days)

  • Per‑agent managed identities: require Entra Agent ID or equivalent for every agent and enforce lifecycle policies (rotation, expiration, decommissioning).
  • Signed invocation tokens + mutual TLS: require cryptographically signed, short‑lived tokens for agent‑to‑agent calls and mutual TLS for agent endpoints to prevent token replay and man‑in‑the‑middle.
  • Capability scoping and just‑in‑time grants: grant agents minimal capabilities by default and escalate on a just‑in‑time basis with human or policy approval for high‑risk actions.
  • Comprehensive provenance: log prompts, inputs, tool calls, responses, and cross‑agent hops so you can reconstruct the full chain of events during an investigation.
  • Human approval for high‑risk actions: force interactive approval (or multi‑party consent) before publishing agents with export, email, or privileged system access.
  • Tabletop exercises and threat modeling: run regular red team scenarios focused on agent abuse and maintain rollback plans for agent deployments.
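To make the “comprehensive provenance” control concrete, one possible record shape — field names are assumptions, not any vendor’s schema — captures the full invocation chain alongside the prompt and tool calls, so an investigator can reconstruct who asked which agent to do what:

```python
import json
import time
import uuid

def provenance_record(invocation_chain, prompt, tool_calls, response_summary):
    """Build one structured provenance entry for an agent action.
    `invocation_chain` is the ordered list of agent IDs, root caller first."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "invocation_chain": invocation_chain,
        "prompt": prompt,
        "tool_calls": tool_calls,  # e.g. [{"tool": "send_email", "params": {...}}]
        "response_summary": response_summary,
    }

# Example: the marketing assistant asked the email sender to act.
entry = provenance_record(
    invocation_chain=["marketing-assistant", "email-sender"],
    prompt="Send the Q3 draft to the review list",
    tool_calls=[{"tool": "send_email", "params": {"recipients": 3}}],
    response_summary="3 messages sent",
)
print(json.dumps(entry["invocation_chain"]))
```

Records like this should be shipped to tamper‑resistant storage; the key point is that the chain itself is first‑class data, not something defenders must infer from scattered per‑agent logs.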

Governance, defaults, and vendor responsibility

The ServiceNow and Copilot Studio cases expose a recurring theme: convenience defaults shipped to accelerate adoption can become systemic vulnerabilities at scale. Vendors must adopt stronger secure‑by‑default postures:
  • Default: closed. Cross‑agent connectivity and powerful tools should default to off; customers then opt‑in with documented, auditable justification.
  • Enforceable guardrails. Platforms should provide mandatory allowlists, per‑agent approval workflows, and time‑based auto‑decommissioning for experimental agents.
  • Better telemetry out of the box. Vendors owe customers usable, tamper‑resistant audit trails that include cross‑agent invocation records and tool call telemetry.
It’s encouraging that vendors and standards bodies are moving: identity vendors and major cloud providers are actively adding controls (agent IDs, registries, and discovery tools), and industry groups are working on specifications addressing autonomous services. But those improvements will not protect organizations that accept permissive defaults and delay implementing controls.

Threat modeling: what attackers will do next

Expect attackers to treat agent meshes as fertile ground for new pivot techniques:
  • Shadow‑agent creation: attackers will abuse delegated or public agent creation paths to deploy malicious agents that call privileged counterparts.
  • Collusion and covert channels: multiple low‑privilege agents can collude to assemble a data exfiltration pipeline that looks like normal automation.
  • Supply‑chain abuse: compromise of a single vendor‑provided agent or connector can cascade across many tenants if the agent has broad cross‑tenant identifiers or shared UIDs.
  • Living‑off‑the‑land automation: instead of stealing credentials, attackers will instruct legitimate automation agents to perform malicious acts, leaving less obvious forensic traces.
Defenders must plan for these scenarios now, not after the first widescale incident.

Practical checklist for security teams (actionable, prioritized)

  • Inventory every agent and register it in your IAM/asset database.
  • Disable Connected Agents by default; re‑enable only with review and explicit allowlists.
  • Apply all vendor patches; confirm hosted tenants received fixes and verify self‑hosted versions are upgraded.
  • Remove any general‑purpose agent’s access to email sending, bulk export, or admin workflows.
  • Route agents through an API gateway for centralized policy enforcement and telemetry.
  • Require Entra Agent IDs or equivalent managed identities; link agents to human owners.
  • Enforce short‑lived tokens, mutual TLS, and signed cross‑agent invocation tokens.
  • Implement an “agent registry” and automated lifecycle policies (expiry, rotation, decommission).
  • Run red‑team exercises simulating agentic abuse and create rollback plans.
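An “agent registry” with lifecycle policies can start very small. The sketch below — field names and policies are illustrative, not a vendor product — enforces two of the checklist items directly: every agent must have a human owner, and agents past their expiry are automatically decommissioned.

```python
import time
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str           # human owner, required for accountability
    capabilities: set    # narrowly scoped, e.g. {"read:drafts"}
    expires_at: float    # hard expiry; renewal requires a fresh review

class AgentRegistry:
    """Minimal registry enforcing ownership metadata and automatic expiry."""

    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        if not record.owner:
            raise ValueError("every agent must have a human owner")
        self._agents[record.agent_id] = record

    def sweep_expired(self, now=None):
        """Decommission agents past their expiry; return the removed IDs."""
        now = time.time() if now is None else now
        expired = [aid for aid, rec in self._agents.items()
                   if rec.expires_at <= now]
        for aid in expired:
            del self._agents[aid]
        return expired
```

Run the sweep on a schedule and alert the listed owner before removal; the goal is that no experimental agent outlives the person who remembers why it exists.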

Strengths to build on — where vendors and defenders are already improving

There are genuine advances to leverage. Identity providers (including major vendors) are introducing agent‑focused identity primitives and administrative controls that make agent governance possible. Observability and discovery startups and established security vendors are rolling out agent discovery and NHI (non‑human identity) management tools to find and map shadow agents. Standardization efforts and cloud vendor guidance on managing agent IDs, conditional access, and data policies are maturing.
Those tools are effective only if organizations use them deliberately: inventory and governance must be baked into the development lifecycle, not bolted on after the fact.

Risks and caveats — what remains uncertain

  • Visibility gaps remain in some platforms: not all SaaS or on‑prem products provide the granular telemetry defenders need, making comprehensive monitoring expensive and complex.
  • Patch timelines vary: hosted SaaS tenants may be patched by the vendor automatically, but self‑hosted installations and third‑party apps bundled into platforms can remain vulnerable long after public disclosure.
  • Operational tradeoffs: overly tight defaults can slow innovation and breed shadow IT as developers improvise workarounds; equally, permissive defaults invite exploitation. Finding the right operational balance requires executive commitment to governance.
  • Interoperability and standards are still nascent: there is no universal standard for signed agent‑to‑agent invocation tokens or for recording chain‑of‑provenance across heterogeneous agent platforms — meaning organizations must adopt a best‑of‑breed approach until the ecosystem converges.
Any claim about absolute protection against agentic abuse should be treated cautiously: this is an evolving threat space and defenses will need to iterate with the technology.

Conclusion: treat agents as identities, not conveniences

The ServiceNow BodySnatcher disclosure and the Copilot Studio Connected Agents research are early but vivid case studies of a larger trend: agentic AI amplifies both productivity and risk. The underlying problems are solvable with disciplined defaults, granular identity and capability scoping, comprehensive telemetry, and a mindset that treats agents as production identities subject to the same security lifecycle as code and services.
For CISOs, the message is plain and urgent: inventory your agents, close permissive defaults, enforce least privilege, and demand auditable provenance from vendors. Organizations that act now — converting convenience into controlled capability and instrumentation — will blunt a class of attacks that will otherwise look like ordinary automation doing extraordinary damage.

Source: findarticles.com Microsoft and ServiceNow Agents Expose AI Risks