A family of popular browser extensions marketed as free VPNs and privacy tools secretly captured and exfiltrated complete conversations with ChatGPT, Google Gemini, Anthropic Claude and several other web-based AI assistants—affecting more than eight million installs and creating one of the most consequential consumer-AI privacy incidents in recent memory.
Background / Overview
Security researchers at Koi Security discovered that a cluster of extensions published by the same developer added platform‑specific “executor” scripts that injected JavaScript into AI chat pages and intercepted every prompt and response. The capability was introduced in an automatic update (identified as version 5.5.0, pushed on July 9, 2025) and enabled by default, meaning millions of users received the new behavior without an explicit, contextual consent flow.
The extensions named in public reporting include Urban VPN Proxy (the flagship listing with the largest install base), 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker, available across the Chrome Web Store and Microsoft Edge Add‑ons. Combined install counts exceed 8 million, with Urban VPN Proxy alone reported at roughly 6 million Chrome installs and ~1.3 million Edge installs in public telemetry.
Why this matters: the harvested dataset reportedly contained not only user prompts and model outputs, but also conversation identifiers, timestamps, session metadata and platform/model identifiers—information that makes re‑identification and downstream commercial or malicious use far more actionable than a simple clickstream leak.
How the interception worked: technical breakdown
Executor scripts and targeted injection
Rather than using a generic scraper, the publisher deployed per‑platform executor scripts (for example chatgpt.js, gemini.js, claude.js) that were injected into pages hosting the target AI assistants when a matching tab was opened. These executor scripts executed in the page context and hooked into page internals to extract conversational content. Because the code runs where the page already has plaintext access to model prompts and outputs, transport encryption (TLS) offered no protection against this technique.
Overriding browser networking APIs
The injected scripts wrapped or overrode native browser networking primitives—most notably fetch and XMLHttpRequest—so requests and responses could be parsed before the page rendered them to the user. In some implementations the flow was:
- Tab monitoring detects an AI chat page is open.
- Inject the per‑platform executor script into the page context.
- Executor hooks fetch/XHR and parses request/response payloads to extract prompts, responses, and metadata.
- Package and forward the data via window.postMessage to the extension background worker.
- Background worker compresses and transmits the payload to remote analytics endpoints.
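The hooking step in this flow can be illustrated with a minimal sketch. This is not the extensions' actual code (which was per‑platform and obfuscated); `installFetchHook` and its `report` callback are hypothetical names, and a real executor would forward records via `window.postMessage` to the background worker rather than to a local callback:

```javascript
// Minimal sketch of fetch hooking in the page context (illustrative only).
// installFetchHook and report are hypothetical names; the real executor
// scripts were obfuscated and relayed data via window.postMessage.
function installFetchHook(report) {
  const originalFetch = globalThis.fetch;
  globalThis.fetch = async function (...args) {
    const response = await originalFetch.apply(this, args);
    // Clone so the page still receives an unconsumed response body.
    const copy = response.clone();
    copy.text()
      .then((body) => report({ url: String(args[0]), body }))
      .catch(() => { /* ignore unreadable bodies */ });
    return response; // the page sees nothing unusual
  };
}
```

Because the wrapper runs where the page already holds plaintext, TLS never sees the interception; the same trick applies to XMLHttpRequest by patching `open`/`send` on its prototype.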
Exfiltration pipeline and analytics endpoints
Koi’s analysis and corroborating reporting trace outbound traffic from the extensions to analytics domains controlled by the publisher—reported examples include analytics.urban‑vpn.com and stats.urban‑vpn.com—where collected chats were stored and (according to privacy policy language and reporting) shared with an affiliated analytics/data‑broker entity named BiScience / B.I. Science. The publisher’s privacy documentation references collection and sharing of “AI inputs and outputs,” though the disclosure is buried in long legal text rather than presented as a contextual, user‑facing notice.
Timeline and scale: what is verified
- July 9, 2025: an update identified as version 5.5.0 is reported to have introduced the executor‑based AI harvesting, enabled by default in distributed builds.
- July–December 2025: automatic extension updates propagated the new functionality to installed users across Chrome and Edge; reporting indicates the behavior persisted for months before public disclosure.
- Install footprint: Urban VPN Proxy ~6,000,000 Chrome installs and ~1,300,000 Edge installs; sibling extensions add hundreds of thousands more—aggregate > 8 million installs across stores. Reported figures vary slightly by outlet and timestamp but converge on multi‑million impact.
What was collected — and the privacy stakes
Koi’s technical analysis and subsequent coverage consistently report the exfiltrated package includes:
- Every user prompt (plain text).
- Full assistant responses (model outputs).
- Conversation identifiers and timestamps.
- Session metadata and model/platform identifiers.
- Potentially contextual browsing state (cookies, localStorage, session IDs) depending on permissions.
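Taken together, a single harvested record would plausibly look like the sketch below. The field names are hypothetical, reconstructed from the reported categories, not recovered from the actual payload format:

```javascript
// Hypothetical shape of one harvested record; field names are
// illustrative, inferred from the categories reported by Koi Security.
const harvestedRecord = {
  platform: 'chatgpt',                    // platform/model identifier
  conversationId: 'conv-0000',            // conversation identifier
  timestamp: '2025-07-09T12:00:00Z',      // when the exchange occurred
  prompt: 'full user prompt, plain text',
  response: 'full assistant response',
  session: {                              // contextual browsing state,
    sessionId: 'sess-0000',               // permission-dependent
  },
};
```

Even without a name or email address, the combination of free‑text prompts, stable identifiers and timestamps is what makes re‑identification practical.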
Marketplace and policy failures
Trust signals betrayed
Many of the implicated extensions carried Chrome Web Store or Microsoft Edge “Featured” badges—trust indicators users assume signal human review and safety. The fact that a Featured extension could ship runtime code that only activates on specific AI pages highlights a structural weakness: static listing reviews and privacy‑policy audits are insufficient to detect dynamic, page‑triggered exfiltration.
Platform policy conflicts
Google’s Limited Use policy and analogous Microsoft store rules restrict how extensions collect and transfer user data—explicitly forbidding undisclosed transfers to advertising platforms or data brokers without narrow, transparent controls. The buried privacy‑policy language that references sharing “AI inputs and outputs” may represent a legal disclosure, but it does not substitute for contextual consent at the time of function change, which platform policies and consumer protection principles expect. If the extensions transferred user conversations to a data broker for marketing analytics, the behavior would appear inconsistent with these policies.
Why static review fails
- Runtime hooks can be written to activate only when specific domains are loaded, evading static scans.
- Minified or obfuscated code makes manual inspection unreliable at scale.
- Auto‑update mechanisms allow dramatic behavior changes without re‑consent.
These are not new failure modes, but the stakes rise when the commodity is private human conversations rather than browsing history alone.
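The first evasion is also trivially cheap to implement. A hypothetical gate like the one below looks inert under static analysis of the listing and only arms the payload when a matching AI host is actually loaded (`TARGETS` and `shouldActivate` are illustrative names, not the extensions' real code):

```javascript
// Sketch of domain-triggered activation (illustrative only): the
// harvesting payload runs solely on matching AI chat hosts, so a static
// scan of the extension package sees code that never fires in review.
const TARGETS = ['chatgpt.com', 'gemini.google.com', 'claude.ai'];

function shouldActivate(hostname) {
  return TARGETS.some(
    (t) => hostname === t || hostname.endsWith('.' + t)
  );
}
```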
Legal, regulatory and corporate implications
- Consumer privacy law: depending on jurisdiction and content, exfiltrated conversations containing health, financial, or identifying information could trigger GDPR, CCPA or similar protections and enforcement actions. Regulators may treat undisclosed mass harvesting as unfair or deceptive practices.
- Platform enforcement and remedies: stores can delist the extensions, suspend developer accounts, and force audits—but takedown does not remediate already‑infected browsers. Enterprises may need to demand deletion certifications and evidence of data destruction where they can prove leakage of regulated data.
- Contract and compliance risk: businesses using unmanaged extensions in employee browsers create plausible pathways for leakage of trade secrets, customer data or regulated information (e.g., PHI, PCI data), potentially violating contractual confidentiality clauses and industry regulations.
Practical remediation: immediate steps for users and IT teams
For individual users (urgent)
- Uninstall the implicated extensions immediately: Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, Urban Ad Blocker. Manual removal is the only guaranteed way to stop an installed malicious extension.
- Assume chats are compromised: Treat any AI conversations conducted while the extension was installed after July 9, 2025 as potentially exposed. Rotate passwords, revoke and reissue API keys or tokens that were pasted into chats, and reset any credentials or secrets used during that period.
- Audit accounts and look for suspicious activity: prioritize financial, email, cloud provider, and workplace accounts that may have been referenced in conversations. Enable multi‑factor authentication where available.
For enterprise and security teams
- Block and allowlist: implement extension allowlists and enterprise blocklists via group policy or MDM to restrict which browser extensions can be installed on corporate profiles.
- Scan endpoints: use EDR and browser management tools to identify devices with the implicated extensions installed; orchestrate removal at scale.
- Incident response: assume potential data leakage for chats involving corporate secrets; conduct a data‑loss assessment, rotate any keys mentioned in compromised chats, and notify legal/compliance teams for breach evaluation.
- User education: issue immediate guidance discouraging the use of consumer privacy tools on managed devices and warn employees against pasting sensitive materials into public or consumer AI assistants without contractually guaranteed data protections.
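As a concrete starting point for the allowlist step, Chrome's managed policies support exactly this block‑everything, allow‑by‑exception posture, plus domain‑level injection blocking. The fragment below is a sketch; the extension ID is a placeholder, and deployment details depend on your group policy or MDM tooling:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"],
  "ExtensionSettings": {
    "*": {
      "runtime_blocked_hosts": [
        "*://chatgpt.com",
        "*://gemini.google.com",
        "*://claude.ai"
      ]
    }
  }
}
```

The `runtime_blocked_hosts` setting prevents any extension from injecting scripts into the listed hosts, which would have blunted this attack even for extensions that slipped past the allowlist.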
Broader analysis: why this incident matters to Windows and browser ecosystems
This episode reframes two core assumptions in consumer AI security:
- Client‑side software (extensions, plug‑ins, browser helpers) can be the weakest link. Users often install privacy tools hoping to reduce exposure—but privilege and code execution inside the browser make extensions uniquely powerful and dangerous when abused.
- End‑to‑end service encryption does not protect content that is available in the client context. If malicious code executes inside the browser before data is encrypted for transport or after it's decrypted, the service’s TLS and server‑side controls are bypassed.
Policy and platform recommendations
- Runtime analysis during review: stores should add dynamic, domain‑triggered behavioral analysis to extension review pipelines—simulating visits to known AI assistant domains and instrumenting runtime calls to detect API hooking and page context injections. Static manifest checks are not enough.
- Auto‑update governance: for features that materially change data collection, stores and browser vendors should enforce re‑consent flows or force a manual re‑review step before automatic updates can switch on new privacy‑sensitive capabilities.
- Better UX for consent: require clear, contextual disclosure when an extension starts to process AI inputs/outputs, with prominent runtime toggles rather than buried policy text. Legalese in a 6,000‑word privacy policy is not meaningful consent.
- Enterprise controls: browsers should expand enterprise APIs to let administrators centrally control extension permissions at the domain‑level (e.g., block extensions from injecting into .openai.com or .google.com chat pages).
Risks going forward: what to watch
- Copycat monetization: attackers and unscrupulous publishers may see AI chats as a new high-value commodity for ad intelligence and profiling. Expect similar approaches targeting other high‑value web apps.
- Data broker laundering: even if an extension owner claims anonymization, the combination of conversation text, timestamps and session identifiers can enable re‑identification or cross‑correlation with other datasets. Claims of “aggregated” or “anonymized” sales are not a reliable protection.
- Marketplace trust erosion: Featured badges and star ratings can be gamed; users and administrators must stop equating store placement with safety and instead apply least‑privilege governance.
Final verdict, caveats and what remains unproven
The technical core of the disclosure is well supported: per‑site executor scripts, API hooking inside the page context, and exfiltration to analytics endpoints operated by the extension publisher are detailed in Koi’s write‑up and reproduced by multiple independent outlets. The existence and scale of large‑install extensions that captured AI chats are credibly established. What remains less exhaustively verified in public reporting is the precise chain of custody for captured chats beyond the publisher’s analytics endpoints—specifically, a definitive public ledger listing downstream buyers, purchase contracts, or specific corporate consumers who purchased the datasets. Multiple reports tie the publisher to an affiliated data‑broker (BiScience) and document historical broker activity, but direct transactional proof of sales to named third parties has not been made public in full forensic detail; treat such claims as plausible and high‑priority for regulator or litigation discovery.
Conclusion
This incident is a hard lesson: convenience and perceived "privacy tooling" can mask the very surveillance users install such tools to avoid. Browser extensions have the technical capability to intercept plaintext content in the page context—so when that content is human conversations with AI assistants, the potential for harm multiplies. Users, enterprises and platform owners must adapt: audit and restrict extensions, require stronger runtime review and re‑consent guarantees, and treat AI chats as data worthy of the same protections we apply to financial records, health records, and corporate secrets.
Immediate action is straightforward and urgent: identify and remove the named extensions, rotate any secrets leaked via chats conducted while an implicated extension was installed after July 9, 2025, and harden enterprise extension controls. Longer term, the browser ecosystems and regulators must close the gap between static listing checks and dynamic, runtime abuses that monetize users’ most private digital conversations.
Source: Tech Digest Malicious VPN steals full ChatGPT and Gemini conversations of 8 million users - Tech Digest