Zenity’s latest move to embed real-time, inline enforcement into Microsoft’s agent ecosystem marks a practical turning point for enterprise AI security: the company has announced inline prevention for Microsoft Foundry and declared general availability of its inline prevention for Microsoft Copilot Studio, promising deterministic runtime controls that stop unsafe agent actions before they complete.
Enterprises are rapidly adopting agentic AI — configurable, multi-step assistants that act across data, tools and workflows — because agents can automate complex tasks, integrate with business systems, and serve as internal “digital workers.” Microsoft’s platforms, from Copilot Studio to Azure AI Foundry, are central to that push: Copilot Studio focuses on low-code authoring and deployment across Microsoft 365 surfaces, while Azure AI Foundry provides an agent factory and runtime for production-grade agent orchestration. Microsoft’s platform roadmap emphasizes model choice, connector breadth, Model Context Protocol (MCP) integration, and runtime observability — all of which expand agents’ capabilities and their attack surface. Zenity positions itself as an agent-centric security and governance layer that spans build time to runtime, aiming to remove “blind spots” that legacy security tooling cannot cover for agentic systems. That includes inventory and posture management, step-level visibility into agent execution, threat detection, and — most importantly for this announcement — inline prevention: enforcement controls embedded into the agent execution path so policy violations are prevented deterministically at runtime. Zenity has already listed Copilot Studio integration in Azure Marketplace and publicized Copilot Studio support; the new announcement extends similar inline controls to the Foundry runtime, with Foundry preview capabilities described as coming soon.
Zenity’s announcement is emblematic of a broader market shift: security tooling is catching up to the realities of agentic AI by moving enforcement closer to action. That is good news for enterprises ready to govern agents with a blend of automation and human oversight — provided IT and security teams implement careful pilots, validate assumptions in their own environments, and maintain pragmatic expectations about false positives, rollout cadence and vendor integration choices.
Source: The AI Journal Zenity Announces Inline Prevention for Microsoft Foundry and General Availability for Capabilities in Microsoft Copilot Studio | The AI Journal
Background / Overview
Enterprises are rapidly adopting agentic AI — configurable, multi-step assistants that act across data, tools and workflows — because agents can automate complex tasks, integrate with business systems, and serve as internal “digital workers.” Microsoft’s platforms, from Copilot Studio to Azure AI Foundry, are central to that push: Copilot Studio focuses on low-code authoring and deployment across Microsoft 365 surfaces, while Azure AI Foundry provides an agent factory and runtime for production-grade agent orchestration. Microsoft’s platform roadmap emphasizes model choice, connector breadth, Model Context Protocol (MCP) integration, and runtime observability — all of which expand agents’ capabilities and their attack surface. Zenity positions itself as an agent-centric security and governance layer that spans build time to runtime, aiming to remove “blind spots” that legacy security tooling cannot cover for agentic systems. That includes inventory and posture management, step-level visibility into agent execution, threat detection, and — most importantly for this announcement — inline prevention: enforcement controls embedded into the agent execution path so policy violations are prevented deterministically at runtime. Zenity has already listed Copilot Studio integration in Azure Marketplace and publicized Copilot Studio support; the new announcement extends similar inline controls to the Foundry runtime, with Foundry preview capabilities described as coming soon. What Zenity is claiming — the essentials
Zenity’s announcement highlights several concrete capabilities it says will now reach Microsoft’s ecosystem:- Inline prevention at runtime: Deterministic enforcement that can block or “hard stop” agent actions (tool calls, data exports, commands) when policies are violated — before data leaves the agent or the action executes.
- Step-level visibility: Breakdowns of agent execution into discrete steps (prompts, tool invocations, data access, triggers) so security teams can analyze intent and context rather than only logs.
- Protection against prompt injection and data exfiltration: Specific countermeasures claimed to detect and disrupt direct and indirect prompt-injection attacks and encoded exfiltration attempts by intervening inline on tool calls and data flows.
- Integration across Microsoft’s agent stack: The Copilot Studio integration is declared generally available; enhanced Foundry controls are announced for preview as Foundry capabilities continue to roll out. Zenity also highlights Azure AI Foundry and other clouds/platforms on its product pages.
Why this matters: the technical context
Modern agent stacks blur the lines between tooling, data access and automation. Agents built in Copilot Studio or Azure AI Foundry can:- Call connectors to CRM systems, ticketing platforms or databases.
- Invoke external MCP-backed services and tools.
- Run flows that trigger Power Automate actions or UI-level automations.
- Execute code or orchestrate multi-step processes using agent flows.
How Zenity’s approach works (vendor claims and technical model)
Zenity frames its protection in three broad buckets:- Hard boundaries / deterministic runtime controls — policy engines that evaluate an agent’s intended action at runtime and either permit, block, or mutate the action before it completes. These controls are presented as immutable enforcement points rather than advisory alerts.
- Full lifecycle observability — correlating build-time configuration and posture (who built the agent, with which connectors and permissions) with runtime actions, to make it possible to detect deviations or exploitation patterns quickly.
- Inline detection for agent-specific attack types — a focus on prompt-injection patterns, encoded or chained exfiltration paths, and “tool misuse” where a seemingly legitimate tool invocation is used to leak data.
Cross-checking the claims: what can be verified
Multiple independent sources confirm the underlying trends and many of the elements Zenity mentions:- Microsoft’s Azure AI Foundry documentation and developer blogs describe a Foundry resource, MCP support, agent orchestration, and the ability to attach monitoring and safety controls to agents — the platform Microsoft is calling Foundry is established and evolving. This validates the premise that a runtime enforcement layer could reasonably integrate with Foundry or the Foundry control plane.
- Zenity’s own product pages and BusinessWire/press materials document its Copilot Studio integration and list Azure AI Foundry and Copilot Studio as supported platforms; Zenity also notes availability through Azure Marketplace for Copilot Studio integration. That establishes that the Copilot Studio integration is public and that Zenity is actively marketing Foundry capabilities.
- Broader coverage of Copilot Studio and Microsoft’s agent roadmap in the uploaded briefing files highlights features like MCP connectors, agent flows, code interpreters and the increasing requirement for runtime governance — which aligns with the threat model Zenity addresses.
- Zenity’s announcement about “inline prevention for Microsoft Foundry” references upcoming preview capabilities. At present, Microsoft’s public Foundry docs and Zenity’s Foundry-focused pages confirm Foundry support and the general possibility of integration, but independent confirmation that a specific, fully integrated inline prevention feature is live in Foundry’s control plane is limited. Customers should confirm preview availability and constraints directly with Zenity and Microsoft, and validate any integration in a test tenant before production rollout.
Strengths: what security teams can realistically gain
- Real-time disruption of risky workflows — Inline enforcement can block dangerous actions (unauthorized tool calls, export commands, or encoded exfil attempts) before data leaves the environment, closing a gap that traditional DLP and SIEM often miss.
- Agent-centric observability — Step-level traces that tie a prompt, its tool invocations, and resulting outputs into a single execution chain reduce investigation time and provide clearer audit evidence for compliance and forensics.
- Policy-driven consistency — Embedding posture checks into build-time pipelines and enforcing them at runtime reduces drift between what an agent was designed to do and what it actually does in production.
- Easier risk-aware scaling — For large organizations that empower business units to author agents in Copilot Studio, bringing centralized runtime controls lowers the operational friction of scaling agent adoption across teams. This aligns with Microsoft’s vision of Copilot Studio as a low-code onramp for enterprise automation.
Risks, limitations, and operational caveats
- False positives vs. productivity friction: Deterministic hard stops are powerful but risk blocking legitimate workflows if policies are too strict or if the enforcement engine lacks good context. Security teams must adopt gradual, telemetry-driven policy tuning and define clear exception workflows.
- Integration surface complexity: Agents invoke diverse connectors (MCP resources, SharePoint, Dataverse, external APIs). A universal enforcement layer must understand connector semantics to avoid over-blocking or under-protecting; misconfigurations here are a common source of both outages and security gaps.
- Vendor and platform lock-in: Where enforcement relies on vendor-specific integrations or proprietary agent runtime hooks, organizations may trade reduced risk for reduced portability. Ensure contractual and architectural clarity on where enforcement logic lives (client-side, cloud-side, or Microsoft-provided control plane).
- Evolving threat tactics: Attackers adapt. Even with prompt-injection and exfil protections, adversaries may craft multi-step chains that blend permitted actions with obfuscated leaks. Security programs must combine runtime prevention with robust build-time posture management, red-teaming, and ongoing model testing.
- Unclear Foundry rollout timing: Zenity’s Foundry capabilities are announced for preview as Foundry evolves. Enterprises must validate exact availability, supported features, and tenant-level controls before relying on Foundry inline prevention for critical controls.
Practical guidance for IT and security teams — a recommended checklist
- Inventory and classify agents
- Discover agents across Copilot Studio, Azure AI Foundry projects, Power Platform flows, and home-grown agent runtimes.
- Map agents to owners, data sources, connectors and risk tiers.
- Pilot inline enforcement in a non-production tenant
- Start with high-impact, low-risk agents (e.g., knowledge assistants with no PII access).
- Monitor blocked actions, false positives, and operator workflows; iterate policies before production.
- Apply least-privilege connector and identity mappings
- Lock agent identities to minimal permissions; prefer service principals and Azure managed identities with narrowly scoped roles.
- Combine build-time posture checks with runtime controls
- Use AISPM (AI Security Posture Management) for pre-flight vulnerability scans and inline prevention for runtime enforcement.
- Put human-in-the-loop for exception handling
- Hard stops should surface clear remediation and escalation steps so business continuity is preserved.
- Test adversarial scenarios
- Red-team agents for prompt injection, encoded exfiltration, and chained requests that cross connector boundaries.
- Integrate telemetry with SIEM and SOAR
- Route agent step traces and enforcement events into existing SOC pipelines for correlation, alerting and automated playbooks.
- Validate contractual and operational SLAs
- Confirm where enforcement runs (Zenity-managed SaaS, tenant-side runtime hooks, or Microsoft-managed control plane) and ensure SLAs and data processing agreements align with security and compliance needs.
Deployment patterns and architectures to consider
- Proxy enforcement: A runtime proxy intercepts agent tool calls and evaluates policies before forwarding. This centralizes control but introduces latency and an additional network dependency.
- Agent SDK hooks: Instrumentation added to agents or agent runtimes that reports intentions to a policy decision point (PDP) and receives allow/deny responses. Lower latency but requires deeper runtime integration.
- Control-plane integration: Enforcement embedded in the platform control plane (e.g., Foundry Control Plane) where Microsoft exposes hooks for partners to attach enforcement filters; this offers tight integration but depends on platform-specific capabilities and availability. Zenity’s messaging suggests partnerships across these layers, but implementation specifics vary by environment and should be validated in each tenant.
Strategic implications for organizations and vendors
- Security vendors that specialize in agent-aware controls are moving from detection to prevention. That shift reduces dwell time for attacks but raises questions about interoperability and governance.
- Microsoft’s expanding agent ecosystem (Copilot Studio, Azure AI Foundry, Entra Agent IDs, MCP connectors) is maturing quickly; third-party enforcement layers that integrate deeply can accelerate secure adoption, but organizations must plan for staged rollouts and mixed environments during transition.
- CIOs and CISOs should treat agent governance as a cross-functional challenge: procurement, IT, app owners, security and compliance must align on acceptance criteria and operational playbooks for agent behavior and exceptions.
Where to be cautious — unverifiable or fluid claims
- Statements that new Foundry enforcement features will be available “soon in preview” come from vendor announcements and reflect product roadmaps; these are subject to change and regional staging. Validate dates and tenant availability directly with Microsoft and Zenity before scheduling proof-of-concept (PoC) timelines.
- Some vendor-provided effectiveness metrics (e.g., percent reductions in policy violations) are useful signals but often reflect controlled customer environments; reproduce key tests in your own environment and document baselines for comparison.
Bottom line: measured optimism with rigorous validation
Zenity’s inline prevention for Copilot Studio (now generally available) and its announced support for Microsoft Foundry represent a substantive step toward operationalizing runtime controls for agentic AI. For security teams, the prospect of deterministic, inline enforcement and step-level observability addresses a pressing gap created by agents’ new reach into enterprise systems. However, the promise must be validated in practice. Deterministic blocks are powerful but risk disrupting legitimate business processes if policies aren’t tuned or the enforcement lacks context. Integration complexity across MCP connectors, Power Platform, and host environments means that PoCs, red-team tests, and staged rollouts are essential. Confirm Foundry preview availability and integration specifics with Microsoft and Zenity, and map enforcement placement and SLAs in procurement contracts.Quick-read recommendations (for publication or board briefings)
- Accept the premise: agents expand attack surfaces; runtime, inline controls are needed.
- Pilot Zenity’s Copilot Studio integration now (GA) in a limited business unit; measure detection, blocked actions, and false positive rates.
- Treat Foundry inline prevention as an upcoming capability; require sandbox validation and contractual clarity before production reliance.
- Combine build-time posture scanning (AISPM) with runtime enforcement and SOC integration for comprehensive coverage.
- Budget for operational overhead: policy tuning, exception workflows, and agent-specific testing must be resourced.
Zenity’s announcement is emblematic of a broader market shift: security tooling is catching up to the realities of agentic AI by moving enforcement closer to action. That is good news for enterprises ready to govern agents with a blend of automation and human oversight — provided IT and security teams implement careful pilots, validate assumptions in their own environments, and maintain pragmatic expectations about false positives, rollout cadence and vendor integration choices.
Source: The AI Journal Zenity Announces Inline Prevention for Microsoft Foundry and General Availability for Capabilities in Microsoft Copilot Studio | The AI Journal