Prompt Injection

  1. Copilot Studio Introduces Near Real-Time Runtime Monitoring for AI Agents

    Microsoft has pushed a meaningful new enforcement point into AI agent workflows: Copilot Studio now supports near‑real‑time runtime monitoring that lets organizations route an agent’s planned actions to an external policy engine — such as Microsoft Defender, a third‑party XDR, or a custom...
  2. Near Real-Time Enforcement for Copilot Studio in Power Platform

    Microsoft has added a near‑real‑time enforcement layer to Copilot Studio that lets security teams intercept, evaluate and — when necessary — block the actions autonomous agents plan to take as they run, bringing step‑level policy decisioning into the live execution loop for Power Platform...
  3. Near‑Real‑Time Runtime Security for Copilot Studio in Power Platform

    Microsoft has moved a critical enforcement point for autonomous workflows from design-time checks and post‑hoc logging into the live execution path: Copilot Studio now supports near‑real‑time runtime security controls that let organizations route an agent’s planned actions to external monitors...
  4. Copilot Studio Runtime Protection in Power Platform: Real‑Time Approve/Block Governance

    Microsoft’s Copilot Studio has added a near‑real‑time security control that routes an agent’s planned actions through external monitors—allowing organizations to approve or block tool calls and actions while an AI agent runs—and the capability is now available in public preview for Power...
  5. Copilot Studio Enables Inline Real-Time Enforcement via External Monitors

    Microsoft’s Copilot Studio has moved from built‑in guardrails to active, near‑real‑time intervention: organizations can now route an agent’s planned actions to external monitors that approve or block those actions while the agent is executing, enabling step‑level enforcement that ties existing...
  6. Inline Security for Copilot Studio Agents: Zenity's Real-Time Guardrails

    Zenity’s expanded partnership with Microsoft plugs real-time, inline security directly into Microsoft Copilot Studio agents — a move that promises to make agentic AI safer for widespread enterprise use while raising new operational and architectural questions for security teams. The...
  7. Near-Real-Time Runtime Security for Copilot Studio in Power Platform

    Microsoft has quietly but meaningfully shifted the balance of power between autonomous AI agents and enterprise defenders: Copilot Studio now supports near‑real‑time runtime security controls that let organizations route an agent’s planned actions through external monitors (Microsoft Defender...
  8. Copilot Studio Runtime: Near Real-Time AI Protection for Actions

    Microsoft is putting a second line of defense around AI agents: Copilot Studio now supports advanced near‑real‑time protection during agent runtime, a public‑preview capability that lets organizations route an agent’s planned actions through external monitoring systems — including Microsoft...
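The enforcement pattern these items describe is the same at each step: before an agent executes a planned action, the action is routed to an external monitor that returns an allow or block verdict. A minimal sketch of that loop follows; every name in it (`PlannedAction`, `PolicyMonitor`, `run_step`) is an illustrative stand-in, not a Copilot Studio, Defender, or XDR API.

```python
# Sketch of step-level policy enforcement: each planned action is
# submitted to an external monitor before it is allowed to execute.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PlannedAction:
    tool: str
    arguments: dict = field(default_factory=dict)

class PolicyMonitor:
    """Stand-in for an external policy engine (e.g. an XDR webhook)."""
    def __init__(self, blocked_tools: set[str]):
        self.blocked_tools = blocked_tools

    def evaluate(self, action: PlannedAction) -> bool:
        # A real monitor would also inspect arguments, agent identity,
        # data sensitivity, and prior steps in the run.
        return action.tool not in self.blocked_tools

def run_step(action: PlannedAction,
             monitor: PolicyMonitor,
             execute: Callable[[PlannedAction], str]) -> str:
    """Execute the action only if the monitor approves it."""
    if not monitor.evaluate(action):
        return f"BLOCKED: {action.tool}"
    return execute(action)

monitor = PolicyMonitor(blocked_tools={"send_email"})
result = run_step(
    PlannedAction(tool="send_email", arguments={"to": "x@example.com"}),
    monitor,
    execute=lambda a: f"EXECUTED: {a.tool}",
)
print(result)  # BLOCKED: send_email
```

The design point the coverage keeps returning to is where this check sits: inline in the live execution path, per step, rather than as design-time validation or after-the-fact log review.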
  9. Zero Trust for GenAI: Guarding Data From EchoLeak and Prompt Attacks

    In January, security researchers at Aim Labs disclosed a zero-click prompt‑injection flaw in Microsoft 365 Copilot that demonstrated how a GenAI assistant with broad document access could be tricked into exfiltrating sensitive corporate data without any user interaction—an attack class that...
  10. Chrome Becomes an AI Platform: Claude, MAI Models, and Privacy Risks

    Chrome is quietly becoming an AI platform — and the consequences are already rippling through privacy, competition, and enterprise planning. The past week has delivered three tightly coupled developments that deserve close attention: Anthropic’s pilot of Claude for Chrome...
  11. Hotels at the AI Crossroads: Guarding Guest Data Without Stifling Innovation

    Hotels face a crossroads: artificial intelligence promises smarter personalization and leaner operations, but when guest names, preferences or booking histories are casually copy-pasted into public chatbots the consequences can be legal, financial and reputational — as Amsterdam-based middleware...
  12. Claude for Chrome: Enterprise Browser AI Agents with Safe Automation

    Anthropic’s new Chrome extension quietly signals the next phase of enterprise AI: assistants that don’t just answer questions but act inside your browser — clicking, filling, and navigating like a human. The company has begun a controlled pilot of Claude for Chrome, inviting 1,000 paying...
  13. Chrome Security FAQ Adds AI Features Section to Define AI Security Roles

    Google’s quiet change to Chrome’s security documentation — adding an explicit AI Features section to the Chrome Security FAQ — is a small, technical edit with outsized implications for how browser vendors will treat generative AI moving forward. The new guidance makes a clear, pragmatic...
  14. Securing Autonomous AI Agents: Identity-First Governance with Entra Agent ID and MCP

    Microsoft’s deputy CISO for Identity lays out a clear warning: autonomous agents are moving from experiments to production, and without new identity, access, data, and runtime controls they will create risks that are fundamentally different from those posed by traditional users and service...
  15. Copilot Governance Gap: Why Agent Policy Enforcement Fails Across Microsoft Surfaces

    Microsoft’s Copilot agent governance has slid into the spotlight after multiple, independent reports found that tenant-level policies intended to prevent user access to AI agents were not reliably enforced — a misconfiguration and control-plane gap that left some Copilot Agents discoverable or...
  16. Visual Studio GA: Model Context Protocol (MCP) for Secure, Enterprise-Ready AI Tools

    Microsoft has made the Model Context Protocol (MCP) a first‑class citizen in Visual Studio, shipping general availability support that lets Copilot Chat and other agentic features connect to local or remote MCP servers via a simple .mcp.json configuration — a major convenience for developers...
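The `.mcp.json` file mentioned above declares which MCP servers the IDE may launch or connect to. The sketch below shows a plausible shape for such a file; the server name and command are placeholders, and the exact schema should be checked against Microsoft's documentation.

```json
{
  "servers": {
    "example-server": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}
```

A stdio entry like this tells the IDE to spawn the server as a local child process; remote servers are instead addressed by URL, which is where the article's enterprise-security questions (what a server can reach, and under whose identity) come into play.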
  17. Copilot Audit-Log Gap: Microsoft Patch Spurs Cloud Transparency Debate

    Microsoft’s recent quiet fix to an M365 Copilot logging gap has opened a new debate over cloud transparency, audit integrity, and how enterprise defenders should respond when a vendor patches a service-side flaw without issuing a public advisory. Security researchers say a trivial prompt...
  18. Tenable AI Exposure: Discover, Prioritize, Govern Enterprise AI Risk

    Tenable’s new Tenable AI Exposure bundles discovery, posture management and governance into the company’s Tenable One exposure management platform in a bid to give security teams an “end‑to‑end” answer for the emerging risks of enterprise generative AI—but what it promises and what organisations...
  19. ChatGPT Expands with Google Workspace Connectors: Gmail, Calendar, Contacts

    OpenAI’s ChatGPT can now reach into your Gmail inbox, read your Google Calendar, and look up people in Google Contacts — all from inside a single chat — marking a clear escalation in the product’s push from a conversational assistant toward a full-fledged, context-aware workspace tool. The...
  20. AgentFlayer: Zero-Click Hijacks Threaten Enterprise AI

    Zenity Labs’ Black Hat presentation unveiled a dramatic new class of threats to enterprise AI: “zero‑click” hijacking techniques that can silently compromise widely used agents and assistants — from ChatGPT to Microsoft Copilot, Salesforce Einstein, and Google Gemini — allowing attackers to...