Check Point and Microsoft announced a collaboration this week to embed enterprise-grade AI security directly into Microsoft Copilot Studio, promising continuous runtime protection, DLP, and threat prevention for AI agents built and deployed on the platform.
Background
The announcement — published by Check Point on November 18–19, 2025 — positions the partnership as a response to a stark reality: AI agents are rapidly moving from experiments into production, and when they act autonomously they expand the enterprise attack surface in new ways. Copilot Studio is Microsoft’s platform for building, extending, and publishing AI agents into business workflows. Check Point’s pitch is straightforward: integrate its AI Guardrails, Data Loss Prevention (DLP), and Threat Prevention capabilities into Copilot Studio at runtime so organizations can iterate on agent-powered automation while preserving visibility, compliance, and prevention-first security.
This integration follows a broader industry pattern: as Microsoft commercializes agent platforms (Copilot Studio features include code interpreters, MCP connectors, hosted browsers/“computer use,” and runtime credential handling), multiple security vendors are racing to provide inline and runtime protections that stop prompt injection, data exfiltration, credential theft, and model misuse. Check Point’s announcement joins similar integrations and partnerships from other security providers that aim to secure agents across build-time and runtime.
What the collaboration promises
Key capabilities Check Point highlights
- Runtime AI Guardrails — Continuous protection while an agent is running to prevent prompt injection, stop data leakage, and block misuse of models or actions.
- Data Loss and Threat Prevention — Integrated DLP and threat engines that inspect tool calls and workflows inside Copilot Studio to block or quarantine sensitive content and malicious behavior.
- Enterprise-Grade Scale and Precision — A packaged security bundle aimed at large deployments, with claims of consistent protection and low latency.
- Seamless Productivity — Intended to preserve developer and end-user productivity by running protections inline, without altering agent behavior except to prevent unsafe operations.
Microsoft’s stated intent
Microsoft frames Copilot Studio as a platform enabling enterprise agents across Microsoft 365 and other channels, and has been steadily adding governance, DLP, allow-listing, credentials management, and analytics for agent management. The vendor emphasizes that platform features and partner integrations should help enterprises scale agent deployments while meeting regulatory and operational requirements.
Why this matters now
AI agents in the enterprise differ from a regular SaaS or on-prem app: they execute actions, call external systems, and can be given continuous credentials and access. New classes of threats have already surfaced — prompt injection, exfiltration through seemingly legitimate connectors, malicious agent templates, and social engineering attacks that exploit agent trust. Recent public reporting has highlighted techniques where Copilot Studio agents were abused to capture OAuth tokens or to trick users into granting permissions.
Given that Copilot Studio supports:
- file uploads and downstream tool calls,
- code interpreter/runtime Python execution,
- hosted browser / computer automation,
- MCP connectors and third-party model integrations,
each deployed agent materially expands the enterprise attack surface, which makes runtime-level protection a natural focus.
Technical context and verification
Microsoft’s Copilot Studio already includes native DLP and governance features (tenant defaults, DLP enforcement modes, allow-list controls, credential management, and analytics). It also supports advanced agent capabilities like code interpreter execution and MCP connectors that broaden the possible inputs and actions of an agent.
Check Point’s approach is to extend runtime protections inside Copilot Studio with its prevention-first stack. The core technical idea is:
- Inspect each agent’s tool invocation and workflow execution at runtime.
- Apply DLP and threat rules inline to block data exfiltration or suspicious behavior before a tool call completes.
- Detect prompt injection patterns and malformed prompts that could coax the agent into abusive behavior.
- Enforce enterprise policy and audit logs across agent runs to support compliance and forensic needs.
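The steps above amount to an inline inspection loop over tool invocations. A minimal Python sketch follows; the `ToolCall` shape, the rule patterns, and the allow/block decision values are illustrative assumptions, not Check Point's or Microsoft's actual interfaces:

```python
import re
from dataclasses import dataclass

# Hypothetical shape of an agent tool invocation; real platforms differ.
@dataclass
class ToolCall:
    tool: str
    payload: str

# Illustrative rules only: a card-number pattern for DLP and a common
# prompt-injection phrasing. Production engines use far richer rule sets.
DLP_PATTERNS = [re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")]
INJECTION_PATTERNS = [re.compile(r"ignore (all |previous |prior )*instructions", re.I)]

def inspect(call: ToolCall) -> str:
    """Return 'block' if the call trips a DLP or injection rule, else 'allow'."""
    for pat in DLP_PATTERNS:
        if pat.search(call.payload):
            return "block"  # stop exfiltration before the tool call completes
    for pat in INJECTION_PATTERNS:
        if pat.search(call.payload):
            return "block"  # injected or malformed prompt content
    return "allow"

print(inspect(ToolCall("send_email", "Card: 4111-1111-1111-1111")))  # block
print(inspect(ToolCall("search", "quarterly revenue summary")))      # allow
```

The key property is that the decision is made before the tool call returns, so a blocked exfiltration attempt never leaves the approved boundary.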
Two important verifiable platform realities underpin the collaboration:
- Copilot Studio's runtime features (file ingestion, code execution, browser automation, connectors) materially expand agent capabilities — and therefore risk. Microsoft’s product documentation and “what’s new” updates confirm these features and their enterprise configuration surfaces.
- Multiple security vendors have announced or expanded integrations to protect Copilot Studio agents at runtime, which indicates Microsoft’s platform supports partner inline controls and that enterprises are seeking layered protections beyond built-in DLP.
Strengths of the Check Point + Microsoft approach
- Runtime protection is the right shift: Build-time scanning and policy configuration are necessary but insufficient. Agents behave dynamically; a prevention posture that inspects actions as they occur is a practical improvement for stopping exfiltration and malicious commands.
- Integration with an enterprise security stack: Check Point’s suite (firewall, DLP, threat prevention) already spans network, cloud, and endpoint controls. Tying agent activity into that telemetry can reduce blind spots and accelerate incident response.
- Prevention-first focus: Prioritizing blocking over detection is appropriate for high-value data and actions — once a credential or token is stolen, detection alone is often too late.
- Scalability claim meets enterprise need: Large organizations will publish hundreds or thousands of agents. A vendor-positioned “enterprise-grade” offering aimed at low-latency inline enforcement is a meaningful market differentiator if it truly meets the performance and scale bar.
- Policy alignment and governance: Embedding enterprise policy enforcement (retention, allow-lists, compliance checks) into agent runtime reduces the risk of drift between security policy and agent behavior.
Risks, gaps, and areas that need scrutiny
- Performance and latency claims need independent validation. Inline runtime inspection can be CPU- and I/O-intensive. The press materials state “low latency without impacting performance,” but real-world agent environments (concurrent runs, heavy file workloads, code interpreter tasks) can expose bottlenecks. Enterprises should benchmark in production-like conditions before wide rollout.
- Policy complexity and false positives. Strong runtime prevention can generate false positives that break legitimate automation. Blocking flows without graceful fallback or transparent explainability will frustrate developer teams and slow adoption.
- Supply-chain and orchestration risks. Copilot Studio supports external connectors and multiple model providers. Securing only the Copilot surface is helpful but not sufficient if third-party connectors, MCP servers, or external model endpoints are compromised or misconfigured.
- Telemetry and privacy trade-offs. Deep runtime inspection of conversation content, files, and tool calls raises data privacy concerns and possible regulatory implications. Organizations subject to data residency, health, financial, or other sensitive regulations must ensure that inspection mechanisms do not create additional compliance exposure.
- Elevation of single-vendor dependencies. Broadly integrating an enterprise security vendor into the agent runtime plane increases the operational dependency on that vendor. Outages, misconfigurations, or product bugs in the security layer could inadvertently impair critical business automation.
- Evasion and adversary adaptation. Runtime guardrails are effective against known attack patterns, but adversaries will adapt with stealthier techniques, staged exfiltration, or poisoning attacks that exploit model behavior or trust chains. Continuous threat modeling and red teaming are still required.
Practical guidance for enterprises evaluating the integration
1. Validate risk scenarios first
Map where your agents will operate, which systems they can access, and what data they can touch. Prioritize protecting agents that handle highly sensitive data, privileged credentials, or autonomous action triggers.
2. Perform staged testing and SLO benchmarking
- Deploy protections in a non-production tenant with representative load.
- Measure latency and error rates for common agent actions (file upload, code execution, browser automation, connector calls).
- Define Service Level Objectives (SLOs) for agent response times and availability; confirm the security stack meets them.
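A staged benchmark along these lines can be sketched in a few lines of Python. The `agent_action` stub and the 50 ms p95 target are placeholders; substitute a representative agent call and your own SLO:

```python
import statistics
import time

def agent_action() -> None:
    """Stand-in for a representative agent call (file upload, connector call)."""
    time.sleep(0.002)  # simulated 2 ms of work; replace with a real call

def percentile(samples, p):
    """Nearest-rank percentile over a list of latency samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

# Collect latency samples, then compare against an SLO (here: p95 under 50 ms).
samples = []
for _ in range(200):
    start = time.perf_counter()
    agent_action()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

p95 = percentile(samples, 95)
print(f"p50={statistics.median(samples):.1f}ms p95={p95:.1f}ms")
assert p95 < 50, "p95 latency SLO violated"
```

Run the same harness with the security layer enabled and disabled; the delta between the two p95 values is the real cost of inline inspection.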
3. Tune policies for a balance of prevention and availability
- Start with monitoring-only modes to establish baselines.
- Use progressive enforcement: soft-blocking (alerts + blocked updates) → selective blocking (high-risk flows) → full blocking for critical paths.
- Implement developer-friendly exceptions and an incident feedback loop to reduce friction.
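The progressive-enforcement rollout can be modeled as an explicit mode switch, sketched below with hypothetical mode names and risk labels:

```python
from enum import Enum

class Mode(Enum):
    MONITOR = 1   # log only, establish baselines
    SOFT = 2      # alert on findings, block only high-risk flows
    FULL = 3      # block everything the policy flags

def decide(mode: Mode, risk: str) -> str:
    """Map a flagged finding to an action under the current rollout mode."""
    if mode is Mode.MONITOR:
        return "log"
    if mode is Mode.SOFT:
        return "block" if risk == "high" else "alert"
    return "block"

# Rollout path: MONITOR -> SOFT -> FULL as confidence in the rules grows.
print(decide(Mode.MONITOR, "high"))  # log
print(decide(Mode.SOFT, "low"))      # alert
print(decide(Mode.FULL, "low"))      # block
```

Keeping the mode a single explicit setting makes it easy to roll back to monitoring if false positives start breaking legitimate automation.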
4. Harden identities and secrets handling
- Avoid embedding long-lived credentials in agents; require ephemeral credentials and managed identity approaches.
- Enforce least-privilege for agent identities and apply conditional access and MFA where human consent or elevated access is required.
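The ephemeral-credential pattern can be illustrated with a short-lived token that is checked for expiry before every use. This is a toy sketch; in practice the token would come from a managed identity provider rather than being minted locally:

```python
import secrets
import time
from dataclasses import dataclass

# Hypothetical ephemeral credential: a short TTL replaces a long-lived secret.
@dataclass
class EphemeralToken:
    value: str
    expires_at: float

def issue_token(ttl_seconds: int = 300) -> EphemeralToken:
    """Mint a short-lived token; a real system would call an identity provider."""
    return EphemeralToken(secrets.token_urlsafe(32), time.time() + ttl_seconds)

def is_valid(tok: EphemeralToken) -> bool:
    """Reject any token past its expiry, forcing regular re-issuance."""
    return time.time() < tok.expires_at

tok = issue_token(ttl_seconds=1)
print(is_valid(tok))   # True right after issuance
time.sleep(1.1)
print(is_valid(tok))   # False once the TTL has elapsed
```

A stolen token of this kind is only useful for seconds or minutes, which sharply limits the value of credential-harvesting attacks against agents.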
5. Integrate logs into central SIEM and automation
Ensure agent runtime logs and security events flow into your central SIEM and SOAR systems, with playbooks for rapid token revocation, credential rotation, and connector quarantine.
6. Document compliance posture and data residency
For regulated industries, document how runtime inspection affects data flows, whether content is duplicated to third systems for scanning, and where logs/data are stored.
How this fits into a broader AI security architecture
A secure agent program needs layered controls across the AI lifecycle:
- Design and model governance: Model selection, evaluation, and guardrails at the model layer (red-teaming, bias and safety checks).
- Build-time controls: Approved templates, code review, static DLP, and policy enforcement in CI/CD for agents.
- Runtime controls: Inline DLP, threat prevention, allow-lists, and telemetry for agent runs.
- Operational controls: Identity/secret management, incident response, monitoring, and continuous risk assessment.
- Policy and training: Governance frameworks, developer playbooks, and training for makers and citizen developers.
Competitive landscape and market implications
The Check Point collaboration is one of several vendor partnerships and integrations targeting Copilot Studio and the agent ecosystem. Other security vendors have announced native, inline, or agent-level protections for Copilot Studio, reflecting high demand for agent runtime security.
For enterprise buyers, this means:
- A growing ecosystem of choices, each with different trade-offs around depth of inspection, deployment model (cloud-native vs. appliance), and cost.
- Increased expectations that vendors will support low-latency, scalable enforcement — but customers must validate these claims empirically.
- Pressure to standardize agent security controls and telemetry schemas so organizations can manage multi-vendor environments without fragmentation.
Threat scenarios this integration helps mitigate — and those it won’t
Mitigations
- Prompt injection attacks that attempt to corrupt agent behavior by manipulating prompts or system instructions can be detected and blocked by runtime guardrails.
- Immediate data exfiltration via connectors or tool calls can be intercepted by inline DLP before content leaves an approved boundary.
- Automated misuse of actions (e.g., mass deletion, unauthorized external requests) can be prevented by policy enforcement on tool invocations.
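Policy enforcement on tool invocations (for example, capping batch deletions and allow-listing external destinations) can be sketched as follows. The `POLICY` table, action names, and limits are hypothetical:

```python
# Hypothetical action policy: cap the blast radius of destructive tool calls
# and allow-list outbound destinations. Unknown actions are denied by default.
POLICY = {
    "delete_file": {"max_batch": 10},
    "external_request": {"allowed_domains": {"api.example.com"}},
}

def allow_action(action: str, **kwargs) -> bool:
    """Return False when an invocation is unknown or exceeds the policy's limits."""
    rule = POLICY.get(action)
    if rule is None:
        return False  # default-deny anything not explicitly listed
    if action == "delete_file":
        return kwargs.get("batch_size", 1) <= rule["max_batch"]
    if action == "external_request":
        return kwargs.get("domain") in rule["allowed_domains"]
    return False

print(allow_action("delete_file", batch_size=3))                # True
print(allow_action("delete_file", batch_size=5000))             # False: mass deletion
print(allow_action("external_request", domain="evil.example"))  # False: not allow-listed
```

The default-deny stance mirrors the prevention-first posture: an agent coaxed into an unanticipated action simply has no matching policy entry and is stopped.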
Residual risks
- Supply-chain compromise of third-party MCP servers, model providers, or connector endpoints can still lead to data leakage if the connected endpoints are inherently insecure.
- Stealthy exfiltration strategies (slow drip, covert channels, steganography) require advanced detection and continuous adaptation of rules.
- Human-in-the-loop consent abuse (social engineering leading operators to approve malicious actions) remains a high-risk vector that technical controls alone cannot fully eliminate.
Governance, legal, and privacy considerations
Runtime inspection that analyzes content and conversations will often process sensitive information. Organizations must:
- Confirm where inspection occurs (in-tenant vs. vendor cloud), how long data is retained, and whether content is stored in cleartext.
- Ensure inspection and logging comply with data protection laws (e.g., data residency laws, GDPR, sector-specific rules).
- Include security partners in vendor risk assessments and data processing agreements so obligations, breach responsibilities, and data handling procedures are contractually clear.
Recommendations for security teams
- Treat agents as a first-class asset class. Add agent ownership to your asset inventory and include agent risk profiles in threat modeling.
- Mandate least-privilege identity for agent service accounts and enforce ephemeral credential patterns for any service-level access.
- Start with targeted pilot deployments for high-value agents, and expand only after performance validation and policy tuning.
- Maintain a red-team program that includes agent-specific playbooks (prompt injection tests, connector spoofing, credential harvesting scenarios).
- Ensure incident response runbooks include agent-specific remediation steps: revoke credentials, quarantine connectors, roll back agent versions, and audit agent run history.
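The runbook steps above can be sketched as an ordered remediation sequence; the step names and agent identifier are hypothetical:

```python
# Hypothetical agent-incident runbook: the ordering matters, since cutting off
# credentials and connectors first limits further damage before forensics.
def remediate(agent_id: str) -> list[str]:
    """Return the ordered remediation steps for a compromised agent."""
    return [
        f"revoke-credentials:{agent_id}",    # invalidate tokens immediately
        f"quarantine-connectors:{agent_id}", # cut off exfiltration paths
        f"rollback-version:{agent_id}",      # restore a known-good agent build
        f"audit-run-history:{agent_id}",     # scope what the agent did and touched
    ]

for step in remediate("sales-assistant-01"):
    print(step)
```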
Conclusion
The Check Point–Microsoft collaboration addresses an urgent and growing need: protecting agent runtime behavior in environments where AI agents can perform actions, connect to enterprise systems, and touch sensitive data. Embedding runtime guardrails, DLP, and threat prevention into the Copilot Studio execution path is a pragmatic advance for enterprises that intend to deploy agents at scale.
However, the move does not eliminate the need for comprehensive AI governance. Runtime protection is necessary but not sufficient. Organizations must combine robust build-time controls, identity and secret management, telemetry integration, incident preparedness, and privacy review to safely realize the productivity benefits of Copilot Studio. Claims of “low latency” and seamless scale should be validated in each environment, and teams should assume adversaries will adapt. With careful design, testing, and governance, the Check Point integration can materially reduce key agent risks — but only as a part of a layered, continuously evolving enterprise AI security program.
Source: APN News Check Point Software Collaborates with Microsoft to Deliver Enterprise-Grade AI Security for Microsoft Copilot Studio




