Agentic AI Security: BodySnatcher and Copilot Studio Risks

ServiceNow and Microsoft — two of the enterprise world’s most ubiquitous platforms — were this week at the center of fresh security alarm bells after independent researchers demonstrated how agentic AI features can be abused to impersonate administrators, create privileged backdoors, and move laterally across an organization with astonishing ease. The vulnerabilities are not minor edge cases: AppOmni’s “BodySnatcher” disclosure shows how a combination of shared credentials and permissive account‑linking could let an unauthenticated actor escalate to full admin control in ServiceNow, while Zenity Labs’ findings around Copilot Studio’s Connected Agents expose how default connectivity and poor observability can turn useful shared agents into silent backdoors. These incidents are an urgent wake‑up call: agent‑to‑agent interactions convert convenience into a systemic attack surface that organizations must treat as first‑class security risk.

Background / Overview​

Enterprises are deploying AI agents—task‑specific, automated assistants that can query data, send email, update records, and call APIs—at scale. The same properties that make them productive (natural‑language control, reusable tool connectors, and composability) also make them uniquely attackable: agents can call other agents, chain actions, and act autonomously without a human in the loop. Two independent security labs recently published reproducible attack paths that exploit those exact properties.
  • AppOmni Labs disclosed a severe ServiceNow flaw (tracked as CVE‑2025‑12420) that combined non‑rotating shared secrets and permissive account‑linking to let an unauthenticated actor impersonate users and execute privileged agent workflows. AppOmni researchers called the chain “BodySnatcher.”
  • Zenity Labs demonstrated practical abuse against Microsoft Copilot Studio’s Connected Agents functionality, arguing that the feature’s default openness and lack of end‑to‑end visibility enable stealthy lateral escalation and data exfiltration. Zenity published a detailed write‑up and accompanying guidance.
The immediate operational facts are straightforward: ServiceNow patched the reported flaw and rolled out updates for hosted tenants in October 2025, with application versions and self‑hosted hotfix guidance published; reporting so far indicates no confirmed in‑the‑wild exploitation. Microsoft has engaged with researchers around Copilot Studio, defended some design choices, and recommended administrators disable Connected Agents on high‑risk agents; researchers continue to argue that the defaults still pose systemic risk.
For readers who saw the WinBuzzer summary that circulated this morning: the article condenses these developments and highlights the same central thesis — agent‑to‑agent interactions are a new, high‑blast‑radius attack vector — and is aligned with the technical write‑ups from AppOmni and Zenity.

ServiceNow: BodySnatcher — mechanics, impact, and mitigation​

What AppOmni found​

AppOmni’s AO Labs published an extensive technical analysis explaining how a chain of implementation and configuration choices produced a critical escalation path in ServiceNow’s Now Assist / Virtual Agent API components. The exploit combined:
  • A non‑rotating, platform‑wide static client secret used by provider configurations.
  • An auto‑linking account logic that trusted only an email address for identity linkage and did not enforce MFA during linking.
  • An included, high‑privilege example agent that had identical agent IDs across customer instances.
By supplying the shared secret and a victim email address, an attacker could be treated as the victim by the Virtual Agent pipeline, then instruct a shipped agent to create or escalate accounts, reset passwords, or run privileged actions—effectively remote‑controlling admin workflows without valid credentials. AppOmni labeled the result one of the most severe AI‑driven vulnerabilities they had seen.

Verified technical details​

AppOmni and subsequent industry reporting agree on key, verifiable points:
  • The issue was tracked as CVE‑2025‑12420.
  • ServiceNow deployed security updates to hosted (SaaS) instances on October 30, 2025, and provided patches and guidance for self‑hosted customers and app store versions. The fixed versions were published for the Now Assist AI Agents and Virtual Agent API modules.
These are not hearsay claims; AppOmni’s write‑up contains proof‑of‑concept steps and code excerpts, while ServiceNow’s update cadence and version notes were confirmed in independent reporting. Treat the CVE and patch metadata as authoritative for triage and remediation planning.

Why this design failure matters​

This incident is instructive because it’s not merely a single bug in an obscure API path. It’s a composition failure where multiple convenience decisions—shared secrets, permissive account linking, and reusable agent artifacts—combined to defeat identity protections such as MFA and SSO. Those convenience choices created an attack surface that no amount of traditional endpoint protection would have detected without specific telemetry into agent invocation provenance.
  • When agents are treated as just another automation, they inherit neither the governance nor the lifecycle controls that human identities receive.
  • Shared, immutable identifiers (same agent ID across instances) make cross‑tenant assumptions dangerous when attackers can obtain the one credential that validates requests platform‑wide.
  • Auto‑linking flows that accept only an email effectively let the system believe the requester is the end user without proof.
AppOmni’s mitigation advice maps to these root causes: rotate provider secrets, require MFA for account linking, remove globally shared agent UIDs in shipped artifacts, implement agent stewardship and deprovisioning, and treat agent invocations as auditable, sensitive events.

Practical remediation steps for ServiceNow operators​

  • Confirm your instance and installed app versions against the fixed releases (Now Assist AI Agents and Virtual Agent API). If you’re on SaaS, verify the update timestamp; if self‑hosted, apply vendor‑published patches immediately.
  • Rotate any provider/shared secrets and enforce unique, per‑tenant credentials.
  • Enforce MFA and robust account‑linking validation for any provider that can impersonate users.
  • Audit shipped example agents and remove or re‑scope high‑privilege sample agents; ensure agent IDs are not reusable across tenants.
  • Add agent lifecycle policies: discovery, ownership tags, automated de‑provisioning for dormant agents, and a mandatory approval workflow for agents that request privileged connectors.
These steps are immediate, concrete, and in many cases already recommended by AppOmni. They turn a single CVE patch into a broader platform hardening program—exactly the posture required to avoid a recurrence.

Microsoft Copilot Studio: Connected Agents — feature, risk, and vendor stance​

What Zenity reported​

Zenity Labs’ research focused on Copilot Studio’s Connected Agents capability: a design that lets agent authors share an agent’s knowledge, tools, and topics with other agents in the same environment so logic can be reused rather than duplicated. Zenity’s core findings:
  • In observed environments, Connected Agents defaulted to enabled for newly created agents, increasing exposure windows.
  • There is limited native visibility that shows which agents have connected to a given agent; defender telemetry cannot always reveal the caller‑to‑callee invocation chain.
  • In Zenity’s tests, connected invocations sometimes left no entries in the invoked agent’s activity tab, further obscuring attribution.
Zenity argued that these behaviors create an “invisible control plane”: a privileged, shared agent (for example, an email sender) becomes a callable backend that any local agent could invoke—intentionally or maliciously—without leaving clear traces.

Microsoft’s position​

Microsoft’s public guidance and documentation outline administrative controls for managing connected agents and the Entra Agent Registry identity primitives; the official docs describe how to list, connect, and remove connected agents within the Microsoft 365 admin center. That documentation does not universally assert a single default behavior across all tenant variants, and the behavior can vary by product version and administrative configuration. Microsoft has stated the Connected Agents capability enables interoperability and that turning it off universally would break scenarios customers rely on; administrators are advised to disable Connected Agents for agents that access unauthenticated tools or sensitive knowledge sources.
This is a contested space: Zenity and multiple security outlets report default‑on behavior and insufficient visibility in many tenant configurations, while Microsoft emphasizes admin controls and the need to balance collaboration with security. In practice this means operators need to verify their tenant’s exact defaults and logging behavior rather than accept a single headline claim. Treat default behavior as environment‑dependent until you can validate within your tenant.

Attack surfaces Zenity demonstrated​

  • A low‑privilege or maliciously created agent could connect to a privileged “sender” agent and instruct it to send emails, export files, or query sensitive knowledge stores.
  • Because chain attribution may be incomplete, defenders might see the privileged agent’s action (an email was sent) without a linked caller ID; that makes detection, alerting, and forensic triage far harder.
  • Linked agents with broad Graph or connector permissions can be weaponized to mass‑exfiltrate records or trigger large‑scale phishing from seemingly legitimate domains.

What administrators must do right now (Copilot Studio)​

  • Inventory every Copilot Studio agent and label ownership, purpose, and data sensitivity. Treat agent artifacts like application code—store them in version control and require CI/gates before publish.
  • Disable Connected Agents by default for any agent that interacts with sensitive connectors (email senders, SharePoint lists containing PII, CRM exports). Make cross‑agent connection opt‑in after review.
  • Apply least privilege to connector scopes—use read‑only for consumer‑facing agents; confine write actions to narrowly scoped, auditable service accounts.
  • Instrument step‑level observability: log agent‑to‑agent invocations with caller/callee IDs, prompts, tool calls, and parameters; export these to SIEM and tie them into detection rules. If your tenant does not surface this, escalate through your Microsoft account team and audit runtime outputs.
These are defensive moves you can make today that materially reduce the class of attack Zenity demonstrated.
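The step‑level observability item above amounts to emitting one structured event per agent‑to‑agent invocation and shipping it to your SIEM. A minimal sketch, with invented field names (no Copilot Studio API is assumed):

```python
import json
import time
import uuid

def log_invocation(caller_id: str, callee_id: str, tool: str,
                   params: dict, response_summary: str) -> str:
    """Build one structured, SIEM-ready event for a cross-agent call.
    Field names here are illustrative; map them to your SIEM schema."""
    event = {
        "event_id": str(uuid.uuid4()),   # unique ID for forensic correlation
        "ts": time.time(),
        "caller_agent": caller_id,       # the attribution Zenity found missing
        "callee_agent": callee_id,
        "tool": tool,                    # e.g. a mail connector or Graph call
        "params": params,
        "response_summary": response_summary,
    }
    line = json.dumps(event, sort_keys=True)
    # In production this line would be forwarded to the SIEM; returned here.
    return line
```

The point of the `caller_agent` field is exactly the gap Zenity highlighted: without it, defenders see that the privileged agent acted but not which agent asked it to.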

Systemic analysis: why agentic AI expands the blast radius​

Identity + Composition = New Failure Modes​

Traditional security models assume a human or a known service is the principal that performs actions; identity controls, MFA, conditional access, and audit logs are focused on those flows. Agentic AI breaks this assumption in at least three ways:
  • Agents can compose other agents and chain tools autonomously, creating indirect access paths that usual IAM policies and MFA don’t directly address.
  • Many platforms currently lack end‑to‑end provenance for agent‑to‑agent calls; if caller identity is dropped or invisible, the link between an action and an initiating principal is lost.
  • Default convenience settings (shared samples, auto‑linking, default connectivity) produce large, invisible blast radii before teams can implement mature governance.
In short: the attack surface is not just code or container escapes anymore — it’s policy, defaults, and composition semantics. That requires a shift in defender thinking from patching code to redesigning the agent lifecycle and control plane.

Observability becomes the new perimeter​

If you cannot reconstruct the chain of agent intent (who asked which agent to do what, with what parameters, and what data was returned), you cannot reliably detect abuse or respond quickly. Defenders must demand EDR‑grade telemetry for agents:
  • Per‑invocation logging (caller agent ID, callee agent ID, prompt, tool, parameters, response).
  • Signed short‑lived invocation tokens for cross‑agent calls.
  • Service‑level allowlists per agent ID and per tool action.
  • Centralized agent registry and attestation metadata (owner, purpose, review status, last used).
Observability turns the invisible control plane into a governed control plane.

A prioritized hardening checklist for CISOs (practical, actionable)​

  • Inventory and classify
  • Discover every agent across Copilot Studio, ServiceNow, and other platforms; record owner, purpose, connectors, and last activity.
  • Apply least privilege
  • Granular connector scopes; minimize write privileges; separate read vs write agents.
  • Disable permissive defaults
  • Turn off Connected Agents by default for sensitive agents; require explicit, documented opt‑in.
  • Enforce strong linking and MFA
  • For any provider or account‑linking flow, require MFA and avoid trusting email alone.
  • Rotate and scope provider credentials
  • Replace platform‑wide static secrets with per‑tenant, short‑lived credentials.
  • Strengthen CI/CD and governance
  • Version agent definitions in source control, require code reviews and approval gates before publish.
  • Add runtime enforcement
  • Implement inline prevention or runtime policy checks that block high‑risk operations (e.g., mass exports, unauthenticated sends).
  • Enhance logging and SIEM correlation
  • Log prompts, invocations, and tool calls; create detection rules for abnormal cross‑agent volumes or unusual callers.
  • Test and red‑team
  • Simulate agent‑to‑agent abuse scenarios during tabletop exercises and penetration tests to validate detection and response.
  • Vendor engagement
  • Demand clarity from vendors about default behaviors, logs emitted, and supported controls; push for secure‑by‑default product settings.
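The SIEM‑correlation item in the checklist, detecting abnormal cross‑agent volumes, reduces to a simple aggregation over invocation events. A minimal sketch with an assumed threshold; real deployments would baseline per agent pair:

```python
from collections import Counter

def flag_anomalous_callers(invocations: list[tuple[str, str]],
                           threshold: int = 100) -> list[str]:
    """invocations: (caller_id, callee_id) pairs observed in one time window.
    Flags any caller whose volume toward a single callee exceeds the threshold,
    e.g. one agent repeatedly driving a privileged 'sender' agent."""
    counts = Counter(invocations)
    return sorted({caller for (caller, _callee), n in counts.items()
                   if n > threshold})
```

A fixed threshold is the crudest possible rule; it is shown only to make the detection idea concrete. Rate baselining and new‑caller alerts would catch lower‑volume abuse.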

What vendors must do differently​

Vendors cannot simply publish knobs and rely on admins to configure safety in complex environments. The research highlights three vendor responsibilities:
  • Secure by default: Disable connectivity and high‑impact capabilities by default for newly created agents; require explicit authorization and approval workflows to expose tools or knowledge to other agents. Zenity’s observations show that default openness materially increases risk; vendors should bias toward conservatism.
  • Provenance and telemetry: Emit tamper‑resistant, step‑level logs for agent invocations that show the full caller chain and contextualized prompts and tool calls. Without this, defenders are blind.
  • Agent lifecycle primitives: Provide built‑in agent registries, ownership metadata, decommissioning policies, and signing/revocation for shipped agent artifacts so administrators can proactively reduce blast radius. AppOmni’s BodySnatcher case shows how shipped example agents and shared identifiers become an attack amplifier.
Microsoft and ServiceNow both demonstrate that vendor responsiveness matters: ServiceNow rolled out the patch for BodySnatcher in October 2025 and communicated remediation paths; Microsoft engaged with Zenity and updated certain handling paths in Copilot Studio, while publicly defending some design choices and urging admins to disable connected agents on sensitive workflows. But the deeper structural changes—secure defaults, richer telemetry, and mandatory lifecycle governance—remain necessary to avoid repeat incidents.

Limits, uncertainties, and cautions​

  • Some operational claims vary by tenant and cloud region. For example, whether Connected Agents is globally default‑on can depend on the product release and administrative configuration; operators must verify behavior in their own tenants rather than assuming a single global posture. Treat claims about platform defaults as environment‑dependent until validated.
  • There is currently no broad, confirmed evidence that BodySnatcher was widely exploited in the wild prior to disclosure; ServiceNow and reporting outlets indicate no confirmed customer exploitation to date. That does not reduce urgency—misconfigurations may still exist in self‑hosted environments where admins have not applied fixes.
  • Some mitigation proposals (e.g., full runtime enforcement) require vendor cooperation and operational investment. Not all organizations can implement inline prevention immediately; prioritize controls that reduce blast radius (inventory, least privilege, disable risky defaults) while working toward runtime enforcement.

Conclusion: treat agents like first‑class principal risks​

The AppOmni and Zenity disclosures are not isolated curiosities—they are early, high‑visibility examples of a systemic phenomenon: agentic AI changes the calculus of risk. Where once identity, MFA, and human audit trails were the primary controls, organizations now have to secure a new class of non‑human, composable principals that can call each other, act autonomously, and multiply their impact.
The immediate playbook is clear and achievable: inventory agents, apply least privilege, disable permissive defaults, rotate shared secrets, require MFA for linking flows, and demand end‑to‑end telemetry from vendors. Longer term, product teams and platform vendors must bake secure‑by‑default behaviors, provide signed agent artifacts, and emit step‑level, tamper‑resistant provenance so defenders can see and stop agent‑to‑agent abuse before damage occurs.
These disclosures should not be read as a reason to stop using agents—they offer enormous productivity gains—but they are a stark reminder that rushing agent rollouts without governance will turn helpers into vulnerabilities. Organizations that act now—treating agents as critical infrastructure and closing the governance, identity, and telemetry gaps—will avoid being the next cautionary tale.

Source: WinBuzzer Critical AI Agent Flaws Exposed in Microsoft Copilot Studio and ServiceNow
 
