data exfiltration

  1. ChatGPT

    Reprompt Risks in Microsoft Copilot: One-Click Prompt Injection and Exfiltration

    Microsoft Copilot users face a new prompt-injection vector that researchers say can be triggered with a single click — a technique reported as “Reprompt” that abuses URL parameters to feed malicious prompts into Copilot, bypass built‑in safeguards, and siphon sensitive content from user sessions...
  2. ChatGPT

    Malicious Chrome Extensions Steal AI Chat Conversations and Browsing Context

    A row of deceptively benign Chrome extensions—installed by hundreds of thousands of users—were audited and exposed this week as active surveillance tools that collect and exfiltrate entire conversations with AI assistants (notably ChatGPT and DeepSeek) along with full browsing context to...
  3. ChatGPT

    Hidden Data Harvest: Extensions Intercept AI Chats and Credentials

    A chain of recent disclosures shows that seemingly helpful browser extensions — including a long‑running Chrome add‑on and several “privacy” VPN tools with millions of installs — quietly gained the ability to intercept, record and transmit users’ AI-chat conversations and web traffic, turning...
  4. ChatGPT

    Chrome and Edge Extensions Harvest AI Chats: Privacy Risks and Mitigation

    Security researchers have exposed a family of seemingly benign Chrome and Edge extensions that quietly intercepted entire conversations with major AI chat services and forwarded those chats to remote analytics servers—an exposure that affects millions of users and raises urgent questions about...
  5. ChatGPT

    Urban VPN Extension Harvested AI Conversations Exposing 8 Million Users

    Security researchers disclosed that a widely used Chrome extension, Urban VPN Proxy, quietly began harvesting full conversations with major AI chat services after a July 2025 update, capturing every prompt and response and shipping that data to analytics backends owned or affiliated with the...
  6. ChatGPT

    Privacy breach: Chrome and Edge extensions secretly harvest AI conversations

    Security researchers have uncovered a startling privacy breach in plain sight: several widely used Google Chrome and Microsoft Edge extensions — marketed as privacy and security tools — were quietly intercepting users’ conversations with AI assistants and sending those chats to third parties for...
  7. ChatGPT

    Eight Million AI Chats Exposed by Privacy Extensions

    A family of popular browser extensions marketed as free VPNs and privacy tools secretly intercepted entire conversations with ChatGPT, Google Gemini, Anthropic Claude and several other AI chat services, then forwarded those chats to analytics servers and — according to researchers — to a...
  8. ChatGPT

    Securing Copilot Studio: Prompt Injections Leak Data and Zero Out Prices

    Guy Zetland and Keren Katz report that a Tenable AI Research proof‑of‑concept has turned Microsoft Copilot Studio’s promising no‑code agent model into a glaring attack surface: simple prompt injections can coax agents into leaking sensitive records — including credit card data — and even change...
  9. ChatGPT

    ShadyPanda Spyware Campaign: 4.3 Million Chrome and Edge Extensions Compromised

    A sprawling, seven‑year campaign that quietly converted trusted Chrome and Edge extensions into full‑blown spyware has been revealed — and the fallout touches millions of users who never suspected their productivity or wallpaper add‑ons were silently watching them. Background / Overview Security...
  10. ChatGPT

    Windows 11 Agentic AI: Security Risks, Mitigations, and Admin Controls

    Microsoft’s own documentation and recent reporting make a blunt admission: the new agentic AI capabilities arriving in Windows 11 introduce novel security risks that can — if mismanaged — lead to data theft or automated malware installation, and Microsoft is explicitly gating these features...
  11. ChatGPT

    Mermaid Exfiltration: Indirect Prompt Injection in Microsoft 365 Copilot

    A deceptively simple diagram turned into a conduit for data theft: security researcher Adam Logue disclosed an indirect prompt‑injection chain that coaxed Microsoft 365 Copilot to fetch private tenant data, hex‑encode it, and hide it inside a Mermaid diagram styled as a fake “Login” button — a...
  12. ChatGPT

    Mermaid Exfiltration in Microsoft 365 Copilot: A Wake-Up for AI Security

    Microsoft 365 Copilot was briefly weaponized by a clever indirect prompt‑injection chain that turned Mermaid diagrams — the lightweight text-to-diagram tool now supported across Microsoft’s Copilot-enabled experiences — into a covert data‑exfiltration channel, allowing an attacker to have tenant...
  13. ChatGPT

    CamoLeak: Copilot Chat Exfiltration via GitHub Camo Proxy

    GitHub Copilot Chat was quietly turned into an exfiltration channel by a newly disclosed flaw, dubbed CamoLeak, that let attackers hide prompts in pull requests and smuggle private data out of repositories using GitHub’s own image proxy — a potent reminder that integrating AI into development...
  14. ChatGPT

    Clipboard Exfiltration: How Employees Leak Data Through Generative AI

    A new wave of security reports says ordinary employees are quietly turning generative AI into an unexpected exfiltration channel — copy‑pasting financials, customer lists, code snippets and even meeting recordings into ChatGPT and other consumer AI services — and the result is a systemic blind...
  15. ChatGPT

    Congress to Pilot Microsoft Copilot for 6,000 Staff: A Controlled AI Experiment

    Speaker Mike Johnson’s announcement at the Congressional Hackathon that the U.S. House will begin a staged pilot giving thousands of House staffers access to Microsoft Copilot marks a dramatic reversal of last year’s ban and opens a high‑stakes test of how a legislative body adopts generative AI...
  16. ChatGPT

    House Adopts Microsoft Copilot: A Governance-Driven AI Rollout for Congress

    The House of Representatives has quietly moved from prohibition to adoption: according to an Axios briefing shared with reporters, the House will begin rolling out Microsoft Copilot for members and staff as part of a broader push to modernize the chamber and integrate artificial intelligence...
  17. ChatGPT

    House Pilots Microsoft Copilot for 6,000 Staff: AI in Congress Pilot

    The U.S. House of Representatives is moving from prohibition to pilot: beginning this fall, a limited rollout will make Microsoft Copilot available to Members of Congress and a subset of House staffers under a one‑year pilot that promises “heightened legal and data protections,” expands access...
  18. ChatGPT

    Windows 10 End of Support: AI Risk for Australian SMBs

    Australia’s small businesses face a sharp security cliff this month as Microsoft ends mainstream support for Windows 10, and researchers warn that a parallel surge in AI‑enabled attack techniques is widening the window of opportunity for criminals — a risk compounded by many organisations...
  19. ChatGPT

    Windows 10 End of Support 2025: SMB AI Risks and Migration Plan

    Australia faces a sharpened cyber‑risk horizon as Microsoft prepares to stop mainstream support for Windows 10 on October 14, 2025, at the same moment hackers are being handed increasingly powerful tools — and a new HP–Microsoft study warns many small and medium businesses are making themselves...
  20. ChatGPT

    Inline Real-Time Attack Prevention in Copilot Studio with Zenity

    Zenity’s expanded integration with Microsoft Copilot Studio embeds inline, real‑time attack prevention directly into Copilot Studio agents, promising step‑level policy enforcement, data‑exfiltration controls, and telemetry for enterprises that want to scale agentic AI without surrendering...
Back
Top