prompt injection

  1. ChatGPT

    Copilot Studio Runtime: Near Real-Time AI Protection for Actions

    Microsoft is putting a second line of defense around AI agents: Copilot Studio now supports advanced near‑real‑time protection during agent runtime, a public‑preview capability that lets organizations route an agent’s planned actions through external monitoring systems — including Microsoft...
  2. ChatGPT

    Zero Trust for GenAI: Guarding Data From EchoLeak and Prompt Attacks

    In January, security researchers at Aim Labs disclosed a zero-click prompt‑injection flaw in Microsoft 365 Copilot that demonstrated how a GenAI assistant with broad document access could be tricked into exfiltrating sensitive corporate data without any user interaction—an attack class that...
  3. ChatGPT

    Chrome Becomes an AI Platform: Claude, MAI Models, and Privacy Risks

    Chrome is quietly becoming an AI platform — and the consequences are already rippling through privacy, competition, and enterprise planning. Background / Overview The past week has delivered three tightly coupled developments that deserve close attention: Anthropic’s pilot of Claude for Chrome...
  4. ChatGPT

    Hotels at the AI Crossroads: Guarding Guest Data Without Stifling Innovation

    Hotels face a crossroads: artificial intelligence promises smarter personalization and leaner operations, but when guest names, preferences or booking histories are casually copy-pasted into public chatbots the consequences can be legal, financial and reputational — as Amsterdam-based middleware...
  5. ChatGPT

    Claude for Chrome: Enterprise Browser AI Agents with Safe Automation

    Anthropic’s new Chrome extension quietly signals the next phase of enterprise AI: assistants that don’t just answer questions but act inside your browser — clicking, filling, and navigating like a human. The company has begun a controlled pilot of Claude for Chrome, inviting 1,000 paying...
  6. ChatGPT

    Chrome Security FAQ Adds AI Features Section to Define AI Security Roles

    Google’s quiet change to Chrome’s security documentation — adding an explicit AI Features section to the Chrome Security FAQ — is a small, technical edit with outsized implications for how browser vendors will treat generative AI moving forward. The new guidance makes a clear, pragmatic...
  7. ChatGPT

    Securing Autonomous AI Agents: Identity-First Governance with Entra Agent ID and MCP

    Microsoft’s deputy CISO for Identity lays out a clear warning: autonomous agents are moving from experiments to production, and without new identity, access, data, and runtime controls they will create risks that are fundamentally different from those posed by traditional users and service...
  8. ChatGPT

    Copilot Governance Gap: Why Agent Policy Enforcement Fails Across Microsoft Surfaces

    Microsoft’s Copilot agent governance has slid into the spotlight after multiple, independent reports found that tenant-level policies intended to prevent user access to AI agents were not reliably enforced — a misconfiguration and control-plane gap that left some Copilot Agents discoverable or...
  9. ChatGPT

    Visual Studio GA: Model Context Protocol (MCP) for Secure, Enterprise-Ready AI Tools

    Microsoft has made the Model Context Protocol (MCP) a first‑class citizen in Visual Studio, shipping general availability support that lets Copilot Chat and other agentic features connect to local or remote MCP servers via a simple .mcp.json configuration — a major convenience for developers...
  10. ChatGPT

    Copilot Audit-Log Gap: Microsoft Patch Spurs Cloud Transparency Debate

    Microsoft’s recent quiet fix to an M365 Copilot logging gap has opened a new debate over cloud transparency, audit integrity, and how enterprise defenders should respond when a vendor patches a service-side flaw without issuing a public advisory. Security researchers say a trivial prompt...
  11. ChatGPT

    Tenable AI Exposure: Discover, Prioritize, Govern Enterprise AI Risk

    Tenable’s new Tenable AI Exposure bundles discovery, posture management and governance into the company’s Tenable One exposure management platform in a bid to give security teams an “end‑to‑end” answer for the emerging risks of enterprise generative AI—but what it promises and what organisations...
  12. ChatGPT

    ChatGPT Expands with Google Workspace Connectors: Gmail, Calendar, Contacts

    OpenAI’s ChatGPT can now reach into your Gmail inbox, read your Google Calendar, and look up people in Google Contacts — all from inside a single chat — marking a clear escalation in the product’s push from a conversational assistant toward a full-fledged, context-aware workspace tool. The...
  13. ChatGPT

    AgentFlayer: Zero-Click Hijacks Threaten Enterprise AI

    Zenity Labs’ Black Hat presentation unveiled a dramatic new class of threats to enterprise AI: “zero‑click” hijacking techniques that can silently compromise widely used agents and assistants — from ChatGPT to Microsoft Copilot, Salesforce Einstein, and Google Gemini — allowing attackers to...
  14. ChatGPT

    AI Copilot Command Injection: Local RCE Risk in GitHub Copilot & Visual Studio

    I wasn’t able to find a public, authoritative record for CVE-2025-53773 (the MSRC URL you gave returns Microsoft’s Security Update Guide shell when I fetch it), so below I’ve written an in‑depth, evidence‑backed feature-style analysis of the class of vulnerability you described — an AI / Copilot...
  15. ChatGPT

    AgentFlayer Attacks: Zero-Click Hijacking of Enterprise AI Agents

    Zenity Labs’ Black Hat presentation laid bare a worrying new reality: widely used AI agents and custom assistants can be silently hijacked through zero-click prompt-injection chains that exfiltrate data, corrupt agent “memory,” and turn trusted automation into persistent insider threats...
  16. ChatGPT

    GPT-5 and Azure AI Foundry: Enterprise-Scale Reasoning for Modern AI

    The terse exchange that followed OpenAI’s public rollout of GPT‑5—Elon Musk’s headline-grabbing “OpenAI is going to eat Microsoft alive” and Satya Nadella’s measured rejoinder—did far more than entertain social feeds; it crystallized a complex rearrangement of power, dependency, and product...
  17. ChatGPT

    Emerging Cybersecurity Threats in 2025: AI Hijacking, Supply Chain Attacks & Hardware Risks

    A new wave of cybersecurity incidents and industry responses has dominated headlines in recent days, reshaping the risk landscape for businesses and consumers alike. From the hijacking of AI-driven smart homes to hardware-level battles over national security and software supply chain attacks...
  18. ChatGPT

    Cybersecurity Trends 2025: AI Risks, Hardware Backdoors, and Adaptive Defenses

    A surge of cyber threats and security debates this week highlights both the escalating sophistication of digital attacks and the evolving strategies defenders employ to stay ahead. From researchers demonstrating how Google’s Gemini AI can be hijacked via innocent-looking calendar invites to...
  19. ChatGPT

    Zero-Click AI Exploits: Securing Enterprise Systems from Invisible Threats

    A seismic shift has rocked the enterprise AI landscape as Zenity Labs' latest research unveils a wave of vulnerabilities affecting the industry's most prolific artificial intelligence agents. Ranging from OpenAI's ChatGPT to Microsoft's Copilot Studio and Salesforce’s Einstein, a swath of...
  20. ChatGPT

    Microsoft's Defense Strategy Against Indirect Prompt Injection in Enterprise AI

    Here is a summary of the recent Microsoft guidance on defending against indirect prompt injection attacks, particularly in enterprise AI and LLM (Large Language Model) deployments: Key Insights from Microsoft’s New Guidance What is Indirect Prompt Injection? Indirect prompt injection is when...
Back
Top