• Thread Author
Zenity’s expanded partnership with Microsoft plugs real-time, inline security directly into Microsoft Copilot Studio agents — a move that promises to make agentic AI safer for widespread enterprise use while raising new operational and architectural questions for security teams. The announcement, carried by the company and live in Azure Marketplace previews, describes an enforcement model that interposes Zenity’s controls inside each Copilot Studio agent to prevent data exfiltration, block direct and indirect prompt injections, and stop improper secrets handling at the moment a tool would be invoked. For organizations planning to scale Copilot Studio across lines of business, this is an important evolution: it shifts many protections from post-hoc detection into inline prevention at the step and action level of agent execution.

Background / Overview​

Microsoft Copilot Studio is Microsoft’s low-code/no-code environment for building AI agents that can reason over internal data, call external services via connectors, and execute actions across enterprise systems. Its architecture supports Power Platform-style connectors, Model Context Protocol (MCP) servers for tool integration, and an increasingly rich catalog of Copilot connectors to bring CRM, ERP, email, and other data into agent reasoning and workflows.
Zenity is an AI agent security and governance vendor that positions itself as providing full-lifecycle protection for agentic AI — from discovery and posture management at build time to runtime detection, prevention, and response. The vendor’s platform emphasizes an agent-centric model: security is applied based on what an agent is allowed to do, what data it touches, and which tools it invokes, rather than treating agents as generic cloud workloads.
The recent expansion of the integration between the two platforms (announced publicly in vendor communications and made available via the Azure Marketplace public preview) stitches Zenity’s runtime enforcement into Copilot Studio agents. Zenity says it can operate “inside” each agent built in Copilot Studio, enforcing policies at the step level, analyzing intent and behavior signals in real time, and preventing risky actions before they complete.

What the integration does — technically and operationally​

Inline attack prevention inside agent flows​

Zenity’s core claim with this integration is that it can provide inline attack prevention at the moment an agent attempts to invoke an external tool or perform an action. That changes the threat model:
  • Instead of merely detecting suspicious outputs or anomalous API calls after the fact, the system analyzes the agent’s planned action in-context (the step), evaluates risk signals (intent, data involved, connector target), and blocks or modifies the action if it violates policy.
  • Inline enforcement is applied to common action surfaces in Copilot Studio: MCP servers, Power Platform connectors, CRM APIs, business applications, email, and other tools that agents call to retrieve or write data.
This approach targets several high-risk attack vectors specific to agents:
  • Prompt injection and indirect prompt injection: where crafted inputs, knowledge sources, or tools steer an agent to leak secrets or perform unauthorized actions.
  • Data exfiltration via connectors or RAG: where a retrieval-augmented generation workflow or direct connector access could expose PII/PHI or intellectual property.
  • Secrets misuse and inappropriate tool execution: where an agent uses credentials or privileges it should not use, or calls a tool in a way that bypasses governance.

Step-level monitoring and policy enforcement​

Zenity emphasizes step-level context: every agent interaction is decomposed into discrete steps (prompts, tool calls, flow branches). Policies map to those steps and can be enforced automatically. That enables granular rules such as:
  • Block any step that would send more than X fields of PII to an external service.
  • Disallow connector invocations that would write to payment systems unless the agent has a documented business justification and an approved role.
  • Intercept steps that combine internal secrets with third-party endpoints.
Operationally, that means security teams get richer execution context for every agent decision, enabling automated enforcement and a tighter audit trail than traditional network or DLP controls provide.

Real-time threat disruption & intelligent reasoning​

Beyond rule-based blocking, Zenity’s platform claims to apply intelligent “threat reasoning” — behavioral signals and intent analysis — to distinguish benign from malicious or unsafe actions. The system’s idea is to detect subtle indicators (unexpected sequence of steps, anomalous entity access pattern, unusual destination for exported data) and stop the workflow before damage occurs, all while preserving user productivity by avoiding disruptive manual gates when not needed.

Continuous enterprise visibility and lifecycle coverage​

The integration is described as extending Zenity’s existing capabilities — discover, posture management, detection and response — into Copilot Studio. Visibility spans:
  • Buildtime checks: posture and configuration issues as copilots are created (e.g., overly-permissive connectors).
  • Runtime monitoring: live tracking of agent behavior, step execution, tool invocations, and deviations from baseline.
  • Response actions: automated playbooks to stop, quarantine, or roll back problematic agent actions.
This buildtime-to-runtime model aims to give security teams a single-pane view of agent risk and the operational levers to control it.

Why this matters for enterprises​

  • Copilot Studio adoption will expand fast: Copilot Studio lowers the technical barrier for departmental teams — marketing, HR, finance, operations — to create agents that automate workflows and access sensitive business systems. That democratization accelerates innovation but massively enlarges the attack surface. Inline enforcement gives central security teams a mechanism to let business units build while keeping risk in check.
  • Connectors and MCP servers are high-value targets: Agents are most useful when they can call into CRM, ERP, email, and other systems. Those integrations are exactly where prompt injections, credential misuse, and data leakage happen. Security controls that attach to those action points are therefore strategically important.
  • Compliance and auditability: With step-level logs and execution context, organizations can produce far better artifacts for compliance and incident forensics than generic telemetry from infrastructure layers.
  • Friction vs. safety trade-off: The integration tries to minimize friction by doing enforcement inline and automatically; the promise is fewer manual approvals while still blocking risky behavior.

Verifying the claims — what’s confirmed and what’s still opaque​

The integration and its broad capabilities have been announced via vendor channels and are visible in the Azure Marketplace preview listings and product blogs. Public vendor materials verify several concrete points:
  • Zenity’s platform includes AI Security Posture Management (AISPM) and AI Detection & Response (AIDR) features designed to manage agent risk from buildtime to runtime.
  • Copilot Studio supports connectors and MCP servers as the mechanism agents use to reach external systems; those are common vectors for exfiltration and action execution inside Copilot Studio agents.
  • Zenity is available in the Azure Marketplace and has features targeted at Microsoft 365 Copilot and Copilot Studio integration.
However, some operational details remain less verifiable from public materials:
  • The exact mechanism by which Zenity inserts inline controls into a Copilot Studio agent’s runtime flow (for example, whether enforcement is implemented via a Microsoft-provided extension point, a runtime hook in the agent execution engine, or by mediating MCP calls) is not fully documented in vendor public pages.
  • The efficacy of the threat-reasoning models (false-positive/false-negative rates, performance at scale, latency impact on agent responses) is not reproduced in independent testing available publicly.
  • Pricing, consumption model, and enterprise-scale deployment patterns for large organizations building thousands of agents are not detailed in the preview materials.
Because these points impact operational decision-making (latency, user experience, telemetry integration, and total cost of ownership), they should be validated in technical proofs-of-concept and procurement discussions.

Strengths: what this integration gets right​

  • Native, inline prevention is the right direction: Moving controls from detection-only models into inline enforcement at the tool-invocation layer addresses the core weakness of many AI security strategies — detection after the fact.
  • Agent-centric telemetry and step context: Security signals tied to the logical steps of an agent offer higher fidelity for policy enforcement and forensics than infrastructure-level logs alone.
  • Alignment with enterprise tooling: Deep integration with Microsoft connectors, MCP servers, and Power Platform means Zenity’s protections can be applied where most enterprises will run their agents.
  • Lifecycle coverage: Combining buildtime posture management with runtime blocking narrows blind spots that appear when citizen developers publish agents without security reviews.
  • Enterprise-ready compliance posture: Zenity’s publicly-stated compliance profile (ISO 27001, ISO 27701, SOC 2 Type II, GDPR commitments) and recognition in industry market guides indicate maturity in vendor controls and governance posture.

Risks, limitations, and potential pitfalls​

  • False positives and workflow disruption: Inline blocking is powerful but can substitute one problem (data leakage) for another (blocked business-critical workflows) if policies are too strict or poorly tuned. False positives will directly impact users in real time.
  • Operational complexity at scale: Managing step-level policies across hundreds or thousands of disparate agents — especially those built by non-technical business users — is non-trivial. Policy drift, exceptions, and consent models will need governance tooling and human processes.
  • Coverage gaps outside Microsoft ecosystem: Enterprises with multi-cloud or multi-vendor agent deployments (AWS Bedrock, Google Vertex AI, bespoke LLM apps) will need consistent security across platforms. A Microsoft-Copilot-focused inline solution helps one major vector but is not a universal defense unless extended.
  • Reliance on vendor-controlled hooks: Inline enforcement requires integration points in the agent runtime. If those integration points are proprietary or subject to change, enterprises can be exposed to future compatibility or support risk.
  • Privacy and telemetry concerns: Step-level analysis of agent behavior means sensitive context may be visible to the security vendor. Data residency and access controls over telemetry must be explicitly agreed and audited.
  • Adversarial evasion and sophisticated threat models: Attackers continuously evolve prompt-injection and RAG-poisoning techniques. Behavior-based detections and intent models can be evaded by clever adversaries or by trusted insiders.
  • Supply-chain risk via connectors and MCP servers: Third-party connectors and external MCP services introduce their own trust and integrity problems. Inline prevention at the agent level helps but cannot fully remove risks posed by compromised connectors or API endpoints.

Practical implementation: recommended rollout plan (operational steps)​

  • Inventory agents and connectors
  • Use discovery tools to map existing Copilot Studio agents, connectors, and MCP servers in a single inventory.
  • Classify data and high-risk actions
  • Label systems and data the agents access: PII, PHI, payment systems, IP, and regulatory-scoped assets.
  • Start with a phased public preview
  • Deploy Zenity’s inline controls in a staging/limited-production scope to measure impact on latency and false positives.
  • Define step-level policies and guardrails
  • Create default policy templates for common tasks (read-only knowledge retrieval; email send with sanitized fields; no exports to external domains).
  • Integrate telemetry with SOC stack
  • Forward significant events and blocked actions to SIEM/EDR/SOAR for centralized investigation and correlation.
  • Establish automated playbooks
  • Build automated responses for high-severity events: block, quarantine, revoke agent credentials, and notify owners.
  • Run adversarial testing
  • Execute prompt-injection and RAG-poisoning red-team scenarios to tune detection thresholds and refine rules.
  • Train business users and makers
  • Teach citizen developers secure design patterns and how to interpret policy exceptions.
  • Monitor and iterate
  • Regularly review blocked actions, false positives, and agent behavior baselines to refine policies and reduce friction.

How to benchmark vendor claims during procurement​

  • Request a technical architecture diagram showing how inline enforcement is injected into Copilot Studio agent execution, and whether Microsoft-provided extension points or runtime hooks are used.
  • Ask for SLA metrics and performance impact numbers: average added latency per agent action, throughput limits, and scale validation (agents per tenant).
  • Request real-world telemetry statistics about false-positive rates in comparable deployments, and typical incident response times for automated playbooks.
  • Validate compliance artifacts: SOC 2 Type II report, third-party penetration test summaries, and data processing agreements covering telemetry handling and retention.
  • Insist on a live proof-of-concept that includes representative agent workflows and connectors from your environment, not just synthetic demos.

Longer-term implications and the security roadmap for agentic AI​

The introduction of inline prevention into production-grade agent frameworks is a milestone — it reflects the growing consensus that agentic AI requires novel controls beyond traditional application security and DLP.
  • Expect to see more runtime enforcement capabilities born inside platform ecosystems (Microsoft, Google, AWS), either via partner integrations or native guardrails.
  • Standards and frameworks (community-driven OWASP LLM guidance, MITRE’s model for agent threats) will continue to mature; vendor integrations that map findings to these standards simplify governance.
  • Regulatory scrutiny will likely increase: as agents access regulated data, privacy and accountability questions will prompt requirements for explainability, auditability, and data minimization in agent workflows.
  • Security innovation will have to balance autonomy and control: the more independent an agent is allowed to be, the more robust and adaptive the security stack must become.

Bottom line: powerful tool, not a silver bullet​

Zenity’s expanded integration with Microsoft Copilot Studio represents a meaningful evolution in AI agent security: inline attack prevention and step-level policy enforcement promise stronger protections at the point agents interact with enterprise systems. For organizations heavily invested in the Microsoft ecosystem, the combined stack can materially reduce the most common agent-specific risks — prompt injection, RAG-related leakage, and unsafe tool invocation.
That said, the technology is not a silver bullet. Deployments must be planned carefully to avoid disrupting legitimate workflows, and enterprises should validate vendor performance characteristics, integration mechanisms, and telemetry handling before large-scale rollouts. Security teams will still need to run adversarial testing, maintain least-privilege practices, and develop robust governance and incident-response playbooks.
Adopting Zenity’s inline capabilities makes sense as part of a layered defense that includes careful agent design, connector hardening, credential hygiene, SIEM integration, and ongoing user training. When combined with clear policies and measured operational practice, this integration can substantially raise the cost for attackers and help enterprises scale Copilot Studio adoption without surrendering control.

Source: Business Wire https://www.businesswire.com/news/home/20250904371469/en/Zenity-Expands-Integration-with-Microsoft-Copilot-Studio-to-Secure-AI-Agents-at-Scale/