Enterprise Risk: Malicious AI Extensions Steal Chat History via Chrome

Microsoft Defender’s recent investigation shows a deceptive new vector for corporate data leakage: malicious Chromium‑based browser extensions that impersonate trusted AI assistant tools and quietly siphon LLM chat histories and browsing telemetry from users — at scale and with real‑world enterprise reach. (https://cybernews.com/security/chrome-extensions-steal-chatgpt-data/)

Background / Overview

The rapid adoption of browser‑sidebar AI assistants has created a familiar user habit: install a lightweight extension, grant broad page permissions for convenience, and use the same browser session for both personal research and sensitive work. Attackers are exploiting that behavior by publishing look‑alike AI assistant extensions on the Chrome Web Store (which also function in Microsoft Edge). Multiple independent investigations and media reports confirm at least one campaign of malicious AI‑branded extensions reached roughly 900,000 cumulative installs.
Microsoft Defender’s analysis — which traces the lifecycle of these extensions from marketplace listing to persistent exfiltration — documents a pattern of behavior security teams must treat as a systemic risk for AI‑era enterprises: browser extensions with legitimate‑looking names, familiar UI, and plausible descriptions operating as long‑term telemetry collectors that periodically upload captured chat content and visited URLs to attacker‑controlled endpoints. The result is a persistent, stealthy data‑collection pipeline embedded inside everyday browsing.

Why this matters: the new surface for “prompt poaching”

AI assistant usage changes what attackers can harvest. Where previously browser extensions might seek cookies, credentials, or browsing history, these malicious extensions harvest conversational data — the actual prompts, replies, and context people share with LLMs. In modern workflows, that chat content can contain:
  • Prototype code snippets and intellectual property,
  • Internal network URLs and application endpoints,
  • Strategic plans, meeting notes, and operational details,
  • Proprietary prompts and high‑value prompt engineering,
  • Personally identifiable information (PII) and credentials inadvertently pasted into chats.
Because chat text is rendered in the page DOM of web‑based AI services, any extension with page‑read permission can scrape it. Attackers therefore gain access to a stream of high‑value artifacts that earlier defenses (DLP on email, endpoint AV) weren’t designed to monitor. Independent reporting and technical analyses show the malicious extensions captured chat snippets, full tab URLs, and other navigation context, then encoded and exfiltrated that data on fixed intervals.

Attack chain: a practical walkthrough

Reconnaissance

Threat actors mapped the booming marketplace for AI sidebar extensions and the typical permission model users accept for such tools. They monitored top‑ranked listings to mimic UI, naming, and even publisher signals that engender trust. This reconnaissance targeted both consumer and corporate users: many enterprise devices allow Chrome Web Store installs (and Edge supports Chrome extensions), creating a single distribution channel with broad reach.

Weaponization

The malicious payload is a Chromium extension designed to:
  • Monitor tab activity and inspect page DOMs for AI chat UIs,
  • Extract chat messages and meta context (model name, surrounding text),
  • Record full visited URLs and navigation context (previous/next pages),
  • Stage collected items in local storage (Base64‑encoded JSON) and upload on a periodic schedule.
Technical reporting indicates the code performs minimal filtering of sensitive targets and implements weak consent handling: updates could re‑enable telemetry after users had declined collection. The stage‑then‑upload pattern minimizes persistent on‑disk artifacts and keeps the extension operating purely in user context.
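The staging flow described above can be modeled in a few lines. This is an illustrative Python sketch of the reported stage‑then‑upload pattern, not recovered extension code (the real extensions are JavaScript); the record fields, the storage stand‑in, and the no‑op transport are assumptions:

```python
import base64
import json
import time

# Stand-in for chrome.storage.local: captured items are buffered here
# between scheduled uploads.
local_storage = {"staged": []}

def stage_capture(chat_text: str, url: str) -> None:
    """Serialize a captured chat snippet plus context to JSON,
    Base64-encode it, and buffer it in local storage."""
    record = {"ts": int(time.time()), "url": url, "chat": chat_text}
    encoded = base64.b64encode(json.dumps(record).encode()).decode()
    local_storage["staged"].append(encoded)

def flush(upload) -> int:
    """Called on a timer: upload every staged item, then clear the
    buffer to minimize on-device artifacts. Returns items sent."""
    sent = 0
    for item in local_storage["staged"]:
        upload(item)  # in the campaign: HTTPS POST to a hardcoded domain
        sent += 1
    local_storage["staged"].clear()
    return sent

stage_capture("example prompt", "https://chat.example.com/c/123")
flush(lambda item: None)  # no-op transport for this sketch
```

Because the buffer lives in extension storage rather than on disk as ordinary files, the pattern survives browser restarts without any system‑level persistence mechanism.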

Delivery

Distribution leveraged the Chrome Web Store. Attackers used AI‑themed names and descriptions to impersonate legitimate productivity tools; at least one variant displayed a platform “Featured” badge, amplifying installs. Because Microsoft Edge can use Chrome Web Store extensions, a single listing propagated across both browsers without extra packaging. The combination of marketplace distribution and social proof allowed the extension to acquire hundreds of thousands of installs.

Exploitation & Collection

Once installed and granted page access, the extension passively scraped visible chat text from AI pages (ChatGPT, DeepSeek, others). Data elements observed in technical analysis include:
  • Full URLs of open tabs (including internal sites),
  • Chat prompts and AI responses (snippets and full messages),
  • Model identifiers and contextual metadata,
  • Persistent identifiers (UUIDs) tying sessions and devices together.
Collected data was Base64‑encoded and staged in local storage prior to exfiltration. That approach allowed data collection to persist across browser restarts and service worker reloads without requiring system‑level persistence techniques.
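Because the staged records are reportedly plain Base64‑encoded JSON, responders who recover strings from a suspect extension's local storage can triage them with a small helper. A hedged sketch assuming that format:

```python
import base64
import binascii
import json

def try_decode_staged(blob: str):
    """Attempt to recover a staged record from a candidate string:
    strict Base64 decode, then JSON parse. Returns the parsed object,
    or None if the string is not a record in the assumed format."""
    try:
        raw = base64.b64decode(blob, validate=True)
        return json.loads(raw)
    except (binascii.Error, ValueError):
        # Not valid Base64, not valid UTF-8/JSON, or otherwise malformed.
        return None
```

Running this over strings extracted from the extension's storage (e.g. its LevelDB files) separates staged exfiltration records from ordinary configuration data during forensic review.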

Command‑and‑Control and Exfiltration

The extensions uploaded data via HTTPS POST to attacker‑controlled domains (reported examples include chatsaigpt[.]com and deepaichats[.]com). The periodic uploads (reported by several analyses at roughly 30‑minute intervals) and use of ordinary web protocols made the telemetry resemble normal browser traffic, reducing suspicion. Local buffers were cleared after each upload, minimizing on‑device forensic artifacts.

Actions on Objective

The primary objective appears to be continuous collection of browsing and AI chat telemetry rather than immediate lateral movement or ransomware — a stealthy intelligence collection operation optimized for long‑term value. By tying data to persistent UUIDs and maintaining periodic exfiltration, the actor builds an evolving picture of user behavior and organizational workflows, which is ideal raw material for targeted follow‑on attacks or resale. Microsoft Defender further reports activity across enterprise tenants, raising the prospect of systemic corporate leakage.

Technical analysis — what the extension code does (concise)

  • Background worker monitors tab lifecycle and page navigation events.
  • Page‑read permissions let the extension inspect DOM nodes that render chat messages.
  • Scraped messages and URL context are serialized into JSON, Base64‑encoded, and stored in local extension storage.
  • A scheduled routine sends collected packages by HTTPS POST to hardcoded domains at fixed intervals; post‑upload, local buffers are cleared.
  • Consent toggles were superficial: telemetry could be disabled by the UI but re‑enabled automatically by updates or by obscure consent logic.
  • Minimal filtering meant internal web apps and intranet URLs were collected, multiplying the risk for enterprise users.

Verifying the scale and claims — what independent sources show

Key scale claims deserve verification:
  • Install base (~900,000 installs): Multiple independent outlets and security teams reported a combined install base of roughly 900k across two extensions (estimates split ~600k and ~300k). This figure is corroborated by reporting from security and mainstream tech outlets.
  • Periodic exfiltration cadence (≈ every 30 minutes): Technical writeups from independent analysts observed scheduled uploads on a half‑hour cycle and Base64 encoding of stored JSON.
  • Enterprise impact (activity across enterprise tenants): Microsoft Defender’s telemetry, referenced in the investigation summary provided, claims detection in tens of thousands of enterprise tenants. That specific tenant count is attributed to Microsoft’s telemetry; public third‑party articles focus primarily on overall install counts and technical mechanics. Where a claim is reported only by a single vendor’s telemetry, treat it as credible but subject to cross‑validation inside affected organizations. Microsoft’s reporting and shared hunting queries provide immediate investigative leads for defenders.
Caveat: while install figures and technical behaviors are well corroborated by multiple independent reports, some telemetry‑level claims (for example, precise enterprise tenant counts) stem from vendor telemetry and may not always appear verbatim in press coverage; defenders should verify local telemetry rather than rely on headline numbers alone.

Enterprise risk assessment: why browser extensions are now a data‑loss vector for AI content

  • Browser extensions operate inside user browsers, so they inherit whatever context the user exposes to web apps. When users interact with web‑hosted LLMs, those interactions are rendered in the DOM — directly accessible to extensions that have broad page access.
  • Traditional DLP controls often focus on endpoints, email, or cloud connectors. They may miss DOM‑level scraping that occurs inside a user session and is uploaded as harmless HTTPS traffic from the browser.
  • The data type here (LLM prompts and completions) is high value: proprietary code snippets, debug logs, and meeting summaries are all actionable intelligence.
  • Marketplace trust signals (ratings, badges, featured placement) are being weaponized to accelerate installs. A featured tag or many positive reviews are no longer reliable indicators of benign behavior.
  • Persistence is effectively built into standard extension lifecycles; no kernel or disk‑level persistence is required, making detection by standard AV less reliable.
In short: the attack surface bridges user behavior, marketplace governance, and browser permission models, producing a stealthy vector for data exfiltration that bypasses many traditional controls. (astrix.security)

Mitigations and hardening: technical controls and operational steps

Immediate steps every security team should take:
  • Inventory and audit browser extensions across the organization.
    • Use enterprise management tooling (Microsoft Defender’s browser extension assessment, MDM, or GPO policies) to enumerate installed extensions and block or remove unknown or unsupported ones.
  • Enforce least privilege for browser extensions.
    • Restrict extension install rights to an allowlist; disallow arbitrary installs for standard users.
  • Monitor outbound connections and block known malicious endpoints.
    • Create network detections for POST requests to domains observed in the campaign (reported examples include chatsaigpt[.]com and deepaichats[.]com) and trigger device investigations on hits. Note: monitoring should focus on behavioral patterns (frequent POSTs from browser processes to unknown endpoints) rather than single domain lists.
  • Enable browser and network protections.
    • Turn on Microsoft Defender SmartScreen and Network Protection, or equivalent protections, to reduce exposure to malicious pages and scripts.
  • Apply Microsoft Purview / DLP to AI chat usage.
    • Use DLP controls and sensitivity labeling to limit what can be pasted into or submitted to browser‑based AI tools, and monitor data flows from browsers into external destinations.
  • Educate users.
    • Train employees not to install unvetted extensions and to treat browser extension installation like any other software procurement. Encourage routine reviews of installed extensions in Chrome/Edge and immediate removal of unknown add‑ons.
  • For affected devices:
    • Quarantine devices that show network traffic to the attacker infrastructure.
    • Extract extension IDs and user profiles to enumerate scope (extension IDs observed in this campaign include fnmihdojmnkclgjpcoonokmkhjpjechg and inhcgfpbfdjbjogdfjbclgolkmhnooop).
    • Collect browser extension directories and memory snapshots for forensic analysis.
    • Rotate secrets and credentials that may have been exposed.
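The inventory step can be started with a short script. A minimal sketch, assuming the standard Chromium profile layout (`<profile>/Extensions/<32-char id>/<version>/`); the IOC IDs are the two reported in this campaign:

```python
from pathlib import Path

# Extension IDs observed in this campaign (from the reporting above).
MALICIOUS_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
}

def find_flagged_extensions(profile_dir: str) -> list[str]:
    """Scan a Chrome/Edge profile's Extensions folder and return any
    installed extension IDs that match the IOC list."""
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return []
    return sorted(
        d.name for d in ext_root.iterdir()
        if d.is_dir() and d.name in MALICIOUS_IDS
    )
```

On Windows, the default Chrome profile lives under `%LOCALAPPDATA%\Google\Chrome\User Data\Default` and the default Edge profile under `%LOCALAPPDATA%\Microsoft\Edge\User Data\Default`; at fleet scale the same check is better run through endpoint management tooling than per machine.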

Detection recipes and hunting queries

Security teams with Microsoft Defender XDR or comparable tooling should prioritize these signals:
  • Processes: Browser processes (chrome.exe, msedge.exe) with command lines referencing known malicious extension IDs. Microsoft published example hunting queries for this exact pattern that identify browser launches with the malicious IDs present. Use those queries to enumerate impacted endpoints.
  • Network: Frequent outbound HTTPS POSTs from browser processes to attacker domains (for example, chatsaigpt[.]com, deepaichats[.]com, chataigpt[.]pro, chatgptsidebar[.]pro) and unusual periodic uploads that align with the known cadence. Create network alerts for high‑frequency POSTs to unknown endpoints from user workstations.
  • File system: Changes inside known Chrome/Edge extension directories (AppData local extension folders) corresponding to the extension IDs observed. Monitor for file creation/modification events under those directories. Microsoft Defender provides detection IDs and file event queries as part of coordinated XDR coverage.
  • Behavioral: Browser extensions that read page content and then initiate background uploads on a schedule; instrument browser telemetry where possible to capture API calls from extensions that access page content and use fetch/XHR regularly.
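As a complement to static domain lists, the periodic‑upload behavior itself can be scored. A minimal sketch, assuming you can extract per‑destination POST timestamps from proxy or firewall logs; the thresholds are illustrative, not tuned detection logic:

```python
from statistics import mean, pstdev

def looks_periodic(timestamps, expected=1800, tol=0.1):
    """Flag beacon-like traffic: given sorted POST timestamps (seconds)
    for one (source, destination) pair, return True when inter-arrival
    times cluster tightly around an expected cadence (default 30 min,
    matching the reported upload interval)."""
    if len(timestamps) < 4:
        return False  # too few events to establish a cadence
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(deltas)
    jitter = pstdev(deltas)
    # Average interval near the expected cadence, with low jitter.
    return abs(avg - expected) <= expected * tol and jitter <= expected * tol
```

Grouping log events by source host and destination domain and running each series through a check like this surfaces candidates for the behavioral pattern described above, regardless of which domain the uploads target.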

Governance and policy recommendations

  • Institute an enterprise extension policy: allowlist only, require vendor review and security sign‑off for any AI integration, and record a business justification for each extension.
  • Define "AI chat DLP" policies: classify prompts and chat content as sensitive by default where appropriate, and prohibit or monitor the use of external public LLMs for work involving regulated data.
  • Include extension audit checks in procurement and onboarding processes for new SaaS/agent tools.
  • Assign responsibility: endpoint, network, and data‑governance teams must coordinate. Extension risk spans multiple domains — IT, security, legal, and business owners.
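As a starting point for an "AI chat DLP" policy, a pre‑submission check might look like the following sketch. The patterns are illustrative assumptions only; production policies would rely on classifier‑ and label‑driven tooling such as Microsoft Purview rather than a handful of regexes:

```python
import re

# Example sensitive-content rules (illustrative, not exhaustive):
# a cloud access-key shape, a PEM private-key header, and internal-looking URLs.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_url": re.compile(r"https?://[\w.-]+\.(?:corp|internal|intranet)\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the names of sensitive-content rules the prompt matches;
    an empty list means no rule fired."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
```

A gateway or browser policy could block or log submissions to external LLMs whenever the returned list is non‑empty, giving the data‑governance team visibility into what chat content leaves the organization.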

The larger picture: market trust, platform governance, and long‑term fixes

This campaign exposes a weakness that sits above pure technical controls: marketplace trust signals (reviews, install counts, featured badges) are being weaponized. That means platform operators (browser vendors and extension marketplaces) must strengthen vetting, telemetry analysis, and post‑publish monitoring for behavior that deviates from declared functionality. At the same time, enterprises must stop treating browser extension installs as low‑risk personal preference items — they are now an enterprise policy surface.
Two technical improvements would substantially reduce risk:
  • Browser vendors restricting access to page content for extensions by default and requiring more granular, just‑in‑time permission grants for pages that host AI chat elements.
  • Marketplaces adopting runtime behavioral auditing (detecting extensions that read DOM content and transmit it off‑device) and faster takedown/remediation processes for malicious or privacy‑violative behavior.
Until those system‑level changes arrive, mitigation depends on careful organizational policy, telemetry monitoring, and user education. Multiple independent security teams have already urged organizations to inventory extensions, block unknown installs, and monitor for the exfiltration patterns described above.

What defenders should say to leadership (short brief)

  • Impact: Browser extensions on corporate devices have been used to exfiltrate LLM chat transcripts and full visited URLs; this can leak IP, strategy, code, or credentials.
  • Scope: Public reporting indicates ~900k installs across two malicious extensions, and vendor telemetry highlights enterprise exposure; treat the issue as high priority for investigation.
  • Immediate asks:
    • Run an enterprise‑wide inventory of browser extensions and remove unapproved ones.
    • Block or quarantine devices that show traffic to known malicious domains or suspicious periodic POST behavior.
    • Review DLP controls for AI‑chat use and rotate any credentials potentially disclosed in chats.
  • Longer term: Implement allowlisting, tighten extension permissions, and fund monitoring that captures DOM scraping and extension network activity.

Caveats, open questions, and research gaps

  • Attribution: Public reporting focuses on what the extensions did, not definitively who authored them. Attribution remains uncertain, as is typical of Chrome Web Store abuse campaigns. Treat the actor as unknown and focus on containment and detection.
  • Tenant‑level claims: Some vendor telemetry claims (for instance, Microsoft’s tenant impact numbers reported in their internal analysis) are authoritative but depend on telemetry coverage; defenders must validate local exposure independently.
  • Extent of data reuse: The public record confirms exfiltration but is less clear on downstream usage, sale, or follow‑on exploitation of harvested chat logs. Organizations should assume the worst case: captured data may be used for targeted social engineering or sold on criminal forums.

Final analysis and recommendations

This campaign crystallizes a new operational reality: browser extensions can be turned into long‑term intelligence collection tools that harvest the outputs and inputs of modern AI assistants. The core technical problem is simple and persistent — browser extensions with broad page permissions can read web‑rendered content and send it elsewhere — but the defensive posture required is interdisciplinary, combining endpoint management, network monitoring, DLP for AI interactions, and employee behavior change.
Prioritize these actions now:
  • Inventory and remediate (allowlist/blocklist),
  • Monitor browser‑originated POST traffic for periodic uploads,
  • Enforce least privilege and tighten extension install policies,
  • Apply DLP and sensitivity labeling to AI chat usage,
  • Educate employees and embed extension reviews into procurement workflows.
If your organization uses browser‑based AI tools for business purposes, treat extension governance as part of your AI governance program. The convenience of sidebar assistants is real, but that convenience must be balanced with controls that prevent easy, silent leakage of the very content those tools were adopted to create.
The attack surface is now human + browser + marketplace. Stopping this class of theft requires coherent policy, vigilant telemetry, and an immediate inventory of what’s installed across your fleet. Detect, remove, and learn — because the next campaign will be faster, cheaper, and even more convincing.

Source: Microsoft Malicious AI Assistant Extensions Harvest LLM Chat Histories | Microsoft Security Blog