Securing Connected Agents: Zenity Inline Prevention for Copilot Studio

Zenity’s warning that Microsoft Copilot’s Connected Agents can create an “invisible control plane” — where a privileged or shared agent enables other agents to reuse tools and knowledge without clear logs, attribution, or native visibility — has pushed a fresh, urgent wrinkle into the enterprise AI security conversation: agent-to-agent connectivity is no longer an abstract capability, it is a new attack surface that organizations must govern, observe, and defend in real time.

[Image: AI-driven security workflow shows cloud-based telemetry feeding a secure SIEM dashboard.]

Background / Overview

Microsoft’s Copilot ecosystem is evolving from single assistants into a fabric of interoperable, deployable agents. Copilot Studio supports declarative agents that can call other agents (the “Connected Agents” feature), enabling modular, microservice-like reuse of agent capabilities across business workflows. The Microsoft documentation explains how a declarative agent may invoke other declarative agents to offload tasks, allowing teams to split capabilities and orchestrate multi-agent workflows. Users must install connected agents before use, and manifests define which agents may be called.

At the same time, Microsoft has framed agent features inside a security posture that includes Agent Workspaces, agent accounts, the Model Context Protocol (MCP), and signing/revocation for agents and connectors. Those primitives aim to make agents first-class principals with auditable identities and scoped filesystem access, but Microsoft itself and independent reviewers have emphasized these features are new, experimental, and introduce novel threats — notably cross‑prompt injection (XPIA), data exfiltration via automation flows, and supply‑chain signing risks.
Zenity — an AI‑security vendor focused on agent governance — has announced inline prevention for Microsoft Copilot Studio (now generally available per the company) and preview integration for Microsoft Foundry. Zenity’s pitch is direct: embed deterministic runtime controls into the agent execution path to block unsafe tool calls and data flows before they complete, not merely log them after the fact. The vendor claims step‑level visibility into agent execution, enforcement of hard boundaries at runtime, and protections tailored to agent‑specific attack patterns such as indirect prompt injection and chained exfiltration.

How connected agents change the attack surface​

The architecture: agent-to-agent calls and shared capabilities​

Connected Agents lets an agent delegate work to another agent — for example, a CRM‑focused agent could call an “email sender” agent to dispatch templated emails, or a finance agent could call a reconciler agent to normalize ledger entries. Conceptually this mirrors microservices: separate components with focused responsibilities, reused across systems. Microsoft’s guidance clarifies that the communication between declarative agents is textual (prompts and responses) and that connected agents must be deployed and installed, but the runtime makes the invocation seamless to users. This convenience introduces several consequences:
  • Agents become re‑usable primitives that expose tool endpoints (email-sending, ticket creation, database writes) to a broader set of callers.
  • The act of invoking another agent can hide the original caller’s identity or blur the attribution trail unless the platform preserves clear, tamper‑resistant logs for both the caller and callee.
  • Shared or public agents — especially those with broad connectors or high privileges — create high-value targets: if compromised or misconfigured, they can be called by many others and act as a force multiplier for attackers.
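The attribution problem in the second bullet can be made concrete with a small sketch. The `Agent` class, `invoke()` method, and log shape below are invented for illustration and are not the Copilot Studio API; the point is that because inter-agent communication is textual, the callee only "knows" its caller if the platform explicitly attaches and preserves that metadata.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent whose audit log records who invoked it -- if told."""
    name: str
    audit_log: list = field(default_factory=list)

    def invoke(self, prompt: str, caller: str = "unknown") -> str:
        # If the platform does not propagate caller identity, the log entry
        # records only the request text and the attribution trail stops here.
        self.audit_log.append({"caller": caller, "prompt": prompt})
        return f"{self.name} handled: {prompt}"

sender = Agent("email-sender")

# A CRM agent delegates without passing its identity:
sender.invoke("send templated email to customer X")
# The same call with caller metadata preserved end-to-end:
sender.invoke("send templated email to customer X", caller="crm-agent")

print(sender.audit_log[0]["caller"])  # unknown -- no attribution
print(sender.audit_log[1]["caller"])  # crm-agent
```

The difference between the two log entries is exactly the gap Zenity describes: the first invocation is real activity with no attributable origin.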

Invisible control plane: what Zenity and others are warning about​

Zenity’s central observation is that agent‑to‑agent connectivity can allow a single privileged agent to act as an “invisible control plane,” enabling other agents to reuse tools and knowledge without the same degree of visibility, logs, or direct attribution. In practice, this could mean:
  • An agent with permission to send emails or access a shared mailbox could be invoked by many other agents; those invocations may not appear in the invoked agent’s audit log with sufficient context, or the platform may not surface which agent triggered the call.
  • Tools and topics owned by a shared agent become remotely callable capabilities — attackers who access a calling agent could leverage those capabilities indirectly to perform high‑impact actions.
  • Without explicit, per‑invocation visibility and enforcement, policy enforcement and forensics become much harder because activity is fragmented across agent boundaries.
Independent reporting from security outlets and researchers has echoed and amplified these concerns, documenting emergent customer cases and lab demonstrations where connected agent features were misused to achieve stealthy privilege escalation, token theft, or backdoor-like behavior at scale. These reports underscore that agent connectivity increases the probability of chained attacks where the exploitable link is not a binary or a kernel bug but a governance gap.

Specific threat scenarios enabled by connected agents​

1) Credential abuse and token chaining​

When a caller agent holds or obtains credentials (OAuth tokens, API keys, service principals) and can instruct a privileged agent to act, attackers can chain token‑based flows. Research and incident reporting have documented how OAuth flows and consent screens can be abused to harvest tokens via social engineering, and connected agents add a second route to abuse by enabling automated indirect use of tokens. If logs do not capture the full call chain, token abuse can be hard to detect.
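One way a defender can hunt for the indirect token use described above is to require that every action performed with another principal's token have a matching, approved delegation record. The event shapes and delegation table below are assumptions for illustration, not a real Copilot or Entra log schema.

```python
# Sample action events: which agent acted, and whose token it presented.
actions = [
    {"agent": "sender-agent", "token_subject": "crm-agent", "action": "send_mail"},
    {"agent": "sender-agent", "token_subject": "attacker-agent", "action": "send_mail"},
]

# Approved (token subject -> acting agent) delegation pairs.
delegations = {("crm-agent", "sender-agent")}

def unexplained_token_use(actions, delegations):
    """Return actions whose token subject has no approved delegation to the acting agent."""
    return [a for a in actions
            if (a["token_subject"], a["agent"]) not in delegations]

suspicious = unexplained_token_use(actions, delegations)
print([a["token_subject"] for a in suspicious])  # ['attacker-agent']
```

The check is only as good as the logs feeding it — which is why the article stresses capturing the full call chain, not just the final hop.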

2) Impersonation and operational abuse​

Shared agents with email‑sending, calendaring, or ticketing capabilities can be used to impersonate official processes at scale. An attacker who can cause a non‑privileged agent to invoke a privileged “sender” agent could effectively send messages that appear to originate from trusted operational addresses, facilitating phishing, account takeovers, or fraudulent changes. Several security writeups describe how the default configurations or publication flows could make such an attack easier if controls and visibility are weak.

3) Unintended data exposure and exfiltration​

Data exfiltration via agents is not hypothetical: agents that can read known folders or call connectors to cloud storage can be instructed — directly or via XPIA — to assemble and transmit sensitive content. The agent chain amplifies this risk: a data‑access agent can hand off to a network‑capable agent, bypassing file‑level restrictions that might otherwise block a single agent from completing both steps. Microsoft’s own security guidance highlights cross‑prompt injection and the dangers of agents processing untrusted content as first‑order risks.
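The chained-exfiltration pattern above can be countered with a deterministic handoff policy: refuse any delegation in which sensitive-labelled content would reach an agent that holds outbound-network capability. The capability model, agent names, and labels below are invented for illustration; they do not reflect any vendor's actual enforcement schema.

```python
# Hypothetical capability registry: what each agent is allowed to do.
AGENT_CAPABILITIES = {
    "file-reader": {"read_files"},
    "http-poster": {"network_egress"},
}

def allow_handoff(payload_labels: set, target_agent: str) -> bool:
    """Deny when sensitive-labelled content would flow to an egress-capable agent."""
    target_caps = AGENT_CAPABILITIES.get(target_agent, set())
    if "sensitive" in payload_labels and "network_egress" in target_caps:
        return False
    return True

print(allow_handoff({"sensitive"}, "http-poster"))  # False -- chain blocked
print(allow_handoff({"public"}, "http-poster"))     # True
print(allow_handoff({"sensitive"}, "file-reader"))  # True -- no egress capability
```

The rule closes the specific gap described in the paragraph: neither agent alone violates policy, but the combination (data access plus egress) does, so the check must sit at the handoff.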

4) Audit and observability blind spots​

Several reports claim that invocations between agents may leave incomplete traces or may not be visible in the invoked agent’s audit trail in a way that links back to the requester. That lack of end-to-end telemetry converts a compromise or misuse into an event that’s difficult to detect or attribute quickly, increasing dwell time and complicating incident response. These concerns are central to Zenity’s positioning: without step‑level telemetry and deterministic runtime blocks, traditional log‑centric defenses will miss agent‑native attack patterns.

Zenity’s response: inline prevention and step-level visibility​

What Zenity says it delivers​

Zenity’s announced integration promises three core capabilities:
  • Deterministic runtime enforcement: policy checks embedded inline with the agent’s execution path so dangerous tool calls or data exports can be blocked proactively.
  • Step‑level observability: telemetry and breakdowns of agent plans, tool invocations, and data flows, so security teams can inspect intent and action at the granularity agent investigations require.
  • Agent lifecycle posture management: tying build‑time posture (who built the agent, permissions, connector scopes) to runtime behavior for correlation and faster threat detection.
Zenity frames these as “hard boundaries” — not advisory alerts but enforcement that can “hard stop” actions before data leaves an agent or a tool call completes. The integration is described as generally available for Copilot Studio and previewing for Microsoft Foundry, positioning Zenity to sit inside Microsoft’s agent control plane to intercept risky operations.
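The "hard stop" idea can be sketched as a wrapper that evaluates deterministic deny rules before a tool call executes, rather than logging it afterward. The rule format, tool registry, and wrapper below are illustrative assumptions and do not represent Zenity's actual implementation.

```python
class PolicyViolation(Exception):
    """Raised when a deny rule matches before the tool call runs."""
    pass

# Deterministic deny rules: (tool name, predicate over the call's arguments).
DENY_RULES = [
    ("send_email", lambda args: args.get("external", False) and args.get("has_attachment", False)),
]

def guarded_call(tool: str, args: dict, tools: dict):
    """Evaluate policy inline; the tool runs only if no rule matches."""
    for rule_tool, predicate in DENY_RULES:
        if tool == rule_tool and predicate(args):
            # Hard stop: the action never executes, unlike detect-after logging.
            raise PolicyViolation(f"blocked {tool} with {args}")
    return tools[tool](args)

tools = {"send_email": lambda args: "sent"}

print(guarded_call("send_email", {"external": False}, tools))  # sent
try:
    guarded_call("send_email", {"external": True, "has_attachment": True}, tools)
except PolicyViolation:
    print("blocked")  # enforcement happened before the tool ran
```

Because the rules are deterministic predicates rather than heuristics, the same call is always allowed or always blocked — which is what makes the boundary "hard," and also why poorly tuned rules can break legitimate workflows (a limitation discussed later).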

Why runtime, inline controls matter​

Traditional security tooling tends to detect suspicious flows after they occur or rely on heuristic detection of anomalous network or process behavior. Agents require a different model because:
  • The risk emerges from reasoning and orchestration (how prompts and documents are interpreted as actions), not just code execution.
  • Agents chain actions across systems; a late detection arrives only after many downstream actions have already completed.
  • Inline enforcement can block attempts at the source — e.g., prevent a tool call that would send a sensitive file — rather than relying solely on DLP post hoc. Zenity emphasizes deterministic, policy-backed prevention to close this gap.

Corroboration and where claims diverge​

Key claims in the public discourse deserve careful verification:
  • Zenity’s announcement of Copilot Studio GA and Foundry preview is verifiable in company statements and press releases. BusinessWire and Zenity’s own blog post confirm the availability statements and the vendor’s product roadmap claims.
  • Microsoft’s connected‑agent primitives, agent accounts, Agent Workspace, and XPIA risk are documented in Microsoft support and Learn pages, and independently covered by multiple technical outlets. Those are verifiable platform features and stated risks.
  • Claims that Connected Agents are enabled by default for all agents or that invocations leave zero trace in Copilot Studio are reported by multiple security outlets but are operational claims that vary by tenant configuration, Microsoft region, and the precise Copilot Studio/Foundry version. Microsoft Learn and documentation indicate that connected agents must be specified in a manifest and installed, which suggests administrative steps are involved; reports that default‑on behavior exists in some contexts may reflect early‑preview or misconfigured tenants. Treat these as contested or environment‑specific and verify in your tenant before assuming a platform‑wide default.
Where reporting diverges, the prudent stance is to treat platform claims about default behavior and logging as operational variables to be validated in a proof‑of‑concept: test which events show up in the caller and callee logs, whether the platform surfaces caller identity end‑to‑end, and how consent flows are recorded. Zenity and others offer inline control as a mitigation, but integration details (where enforcement sits, what latency is introduced, and how false positives are handled) must be validated in situ.

Practical guidance for IT, security, and product teams​

The immediate operational checklist for teams evaluating Copilot Studio, connected agents, or integrating Zenity’s controls should include the following steps:
  • Inventory and classification
  • Map all Copilot Studio/Foundry tenants, published agents, and which connectors each agent uses.
  • Treat each agent as a privileged automation account and classify its access (read, write, modify) against data sensitivity.
  • Lock down agent creation and publication
  • Restrict who can create or publish agents in your tenant; use admin approval workflows for any agent that touches sensitive systems.
  • Require owner attestations and periodic re‑certification for any agent with write privileges.
  • Harden consent and token flows
  • Enforce admin consent for OAuth/connector scopes where feasible and disable user app consent to reduce token theft risk.
  • Apply conditional access and MFA to administrator and integration accounts.
  • Integrate agent telemetry with SIEM/SOAR
  • Validate that connected‑agent invocations appear in both caller and callee logs; ingest those logs to your SIEM with field‑level context to reconstruct chains.
  • Set alerts for anomalous patterns: bulk reads, unusual connector invocations, or unexpected cross‑agent calls.
  • Apply DLP, field‑level allowlists, and action allowlists
  • Extend DLP and Purview policies to account for agent-originated requests; require explicit approvals before agents can export or transmit sensitive content externally.
  • Prefer field‑level allowlists/denylists for connectors and require human approval gates for state‑changing actions.
  • Pilot inline enforcement and measure impact
  • Run a small pilot of Zenity’s Copilot Studio inline prevention in a limited environment: measure blocked actions, false positives, and business disruption.
  • Pair red‑team prompt‑injection tests with the pilot to validate detection and prevention efficacy.
  • Update IR playbooks and recovery plans
  • Ensure incident response exercises account for agent-specific incidents: chain reconstruction, token revocation, agent revocation, and rollback of state‑changing operations.
  • Maintain robust backups and versioning for any data that agents may modify.
These steps are practical and address the specific failure modes raised by Zenity and other reporting. Vendors can help, but primary responsibility for safe rollout remains with the organization operating the agents.
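The SIEM validation step in the checklist above — confirming that connected-agent invocations appear in both caller and callee logs — can be sketched as a join on a shared correlation id, flagging invocations that appear on only one side. The event shapes are hypothetical, not an actual Copilot Studio audit schema.

```python
# Caller-side events: who invoked whom, keyed by a correlation id.
caller_events = [
    {"corr_id": "c1", "caller": "crm-agent", "callee": "sender-agent"},
    {"corr_id": "c2", "caller": "finance-agent", "callee": "sender-agent"},
]
# Callee-side events: what the invoked agent actually did.
callee_events = [
    {"corr_id": "c1", "agent": "sender-agent", "action": "send_mail"},
    {"corr_id": "c3", "agent": "sender-agent", "action": "send_mail"},
]

def find_blind_spots(caller_events, callee_events):
    """Report invocations recorded on only one side of the agent boundary."""
    caller_ids = {e["corr_id"] for e in caller_events}
    callee_ids = {e["corr_id"] for e in callee_events}
    return {
        "call_without_callee_record": sorted(caller_ids - callee_ids),
        "action_without_known_caller": sorted(callee_ids - caller_ids),
    }

gaps = find_blind_spots(caller_events, callee_events)
print(gaps)
# {'call_without_callee_record': ['c2'], 'action_without_known_caller': ['c3']}
```

Either kind of gap is a blind spot: `c2` is a delegation with no evidence it ran, and `c3` is privileged activity with no attributable requester — precisely the forensic fragmentation the article warns about.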

Strengths of Zenity’s approach — and where it may fall short​

Strengths​

  • Runtime prevention addresses a real gap. Traditional detection‑only tools struggle with agentic, multi‑step orchestration. Inline enforcement can block dangerous actions at the source and reduce the window for damage.
  • Step‑level visibility improves forensics. Breaking agent execution into discrete steps (prompts, tool calls, data access) gives security teams the context required to understand intent and rapidly triage incidents.
  • Integration with Microsoft stack is strategically important. If Zenity’s enforcement integrates deeply with Copilot Studio and Foundry control planes, it can provide enterprise‑grade controls where Microsoft’s native tooling is still maturing.

Limitations and risks​

  • False positives and business disruption. Deterministic blocks are powerful but can break legitimate workflows if policies aren’t finely tuned. Organizations must balance safety and availability.
  • Integration complexity and SLAs. Inline prevention requires tight platform hooks (control plane access) which can vary across Foundry, Copilot Studio, and tenant configurations; operational details and performance SLAs matter and are often negotiable.
  • Partial visibility is still possible. Even with inline enforcement, some attack paths (e.g., supply‑chain compromise of signed agents, or public MCP registries) require coordination with platform vendors and cannot be fully solved by a third‑party layer alone.
  • Vendor claims require tenant validation. Public claims that certain telemetry or prevention modes are available “by default” or reduce violations by a stated percentage should be validated in customer environments; such figures are often derived from controlled tests. Treat product claims as hypotheses to be measured.

Investor and market implications​

Zenity’s public statements and the subsequent market coverage (including TipRanks’ investor‑oriented summary) frame a clear investment narrative: as enterprises scale agentic AI, the demand for agent‑aware security — observability, governance, and deterministic runtime enforcement — will grow, creating a specialized market niche. Zenity has positioned itself as an early vendor offering inline prevention for Copilot Studio and preview support for Foundry; that early mover positioning can be valuable if the company converts technical integrations into enterprise customers and deep platform partnerships. However, investors should weigh several factors:
  • Execution risk: enterprise rollouts require product maturity, low false‑positive rates, and vendor resiliency; pilot results and reference customers matter more than press releases.
  • Platform dependence: close integration with Microsoft’s control planes is an advantage but also a concentration risk; shifts in Microsoft’s native tooling could reduce the addressable market if platform defenses eat into third‑party value.
  • Competitive landscape: other security vendors and cloud providers are quickly addressing agentic risks; differentiation will come from proven SLAs, SIEM/SOAR integration, and cross‑platform support.
TipRanks’ coverage correctly frames Zenity’s announcement as a signal to investors that the AI security market is evolving, but it is not a substitute for due diligence: prospective buyers and investors need pilot data, customer case studies, and contract terms that specify support and performance guarantees before extrapolating long‑term growth.

What to test and measure in a proof‑of‑concept​

To validate vendor claims and ensure safe adoption, organizations should measure:
  • Fidelity of telemetry: does the platform preserve end‑to‑end traces of agent invocations with caller/callee linkage?
  • Prevention accuracy: what percentage of malicious or policy‑violating calls does inline prevention block in controlled red‑team exercises?
  • Business impact: how often do blocks cause legitimate workflow failures and what is the mean time to triage/resolution?
  • Latency and reliability: does enforcement introduce unacceptable delays for user‑facing agents or batch processes?
  • Operational integration: how easily does the solution forward events into SIEM/SOAR, and does it support automated remediation (token revocation, agent quarantine)?
If these tests show positive results — low false positive rates, durable telemetry, and manageable latency — inline prevention can be a meaningful operational control that reduces the enterprise risk of agentic workflows.
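Two of the measures above — prevention accuracy and business impact — reduce to standard detection metrics over red-team results. The sample counts below are made-up pilot numbers; only the formulas are standard.

```python
def poc_metrics(true_blocks: int, missed_attacks: int,
                false_blocks: int, legit_allowed: int) -> dict:
    """Prevention rate and false-positive rate from red-team pilot counts."""
    prevention_rate = true_blocks / (true_blocks + missed_attacks)
    false_positive_rate = false_blocks / (false_blocks + legit_allowed)
    return {"prevention_rate": round(prevention_rate, 3),
            "false_positive_rate": round(false_positive_rate, 3)}

# Example pilot: 47 of 50 injected attacks blocked; 4 of 200 legitimate calls blocked.
print(poc_metrics(true_blocks=47, missed_attacks=3,
                  false_blocks=4, legit_allowed=196))
# {'prevention_rate': 0.94, 'false_positive_rate': 0.02}
```

Tracking both numbers together keeps the trade-off visible: tightening rules to raise the prevention rate will usually raise the false-positive rate, which is the availability cost discussed in the limitations section.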

Conclusion​

Zenity’s alert about connected agents acting as an invisible control plane has crystallized a critical truth: agent‑to‑agent connectivity turns previously passive “data channels” into active attack vectors. Microsoft’s Copilot Studio and Foundry introduce powerful developer and productivity capabilities, but those capabilities change the threat model and require new defensive patterns — build‑time posture management and runtime, deterministic enforcement among them. Zenity’s inline prevention solves a real problem: stopping unsafe agent actions in the execution path rather than chasing them after the fact. That said, product claims and operational defaults vary by environment; some assertions in public reporting (for example, about default‑on behaviors or gaps in logging) are environment‑specific and require tenant validation. Security teams should not wait for perfection: begin with inventory, narrow pilots, and adversarial testing; extend DLP and consent hardening practices to agent flows; and validate any third‑party inline enforcement in a controlled PoC that measures detection efficacy, false positives, and business impact. These steps will materially reduce the likelihood that a privileged or shared agent becomes an invisible conduit for credential abuse, impersonation, or data leakage.
The market opportunity for specialized AI‑agent security is real and growing — but it will reward vendors that can demonstrate measurable, low‑disruption enforcement in production environments and integrate deeply with enterprise telemetry and governance systems. Zenity’s announcements mark an important product milestone in that direction; organizations and investors should treat the claims as promising but subject to rigorous, tenant‑level verification before scaling reliance on any single control plane.
Source: TipRanks Zenity Flags Emerging Security Risks in Microsoft Copilot Connected Agents - TipRanks.com
 
