AI Browsers Privacy Risks: Prompt Injection and ShadyPanda Exposed

  • Thread Author
A sharp, peer‑reviewed study and a string of security disclosures have exposed a worrying truth about the new generation of AI‑assisted web browsers: many of them collect and transmit highly sensitive browsing data — sometimes without clear consent — and the features that make these tools useful are exactly the features that make them dangerous. The implications span individual privacy, enterprise security, and the commercial economics of the web.

Hooded hacker at a glowing screen displaying an AI-brain and digital icons.Background / Overview​

AI‑assisted browsers — products that embed large language models (LLMs) or assistant agents directly into the browsing experience — appeared in force in 2025. They promise to transform search, summarization, multi‑tab synthesis, and even automated tasks (booking, form‑filling, transaction workflows) into a conversational or agentic experience. That promise hinges on an assistant being able to see the page, access context across tabs, and sometimes act on behalf of the user.
A large academic audit presented at the 2025 USENIX Security Symposium found that popular generative AI browser assistants can and do collect sensitive data, sometimes including health and financial information, and in several cases share or profile users in ways that raise legal and ethical concerns. The study — led by researchers from University College London, UC Davis and the Università Mediterranea of Reggio Calabria — analysed ten leading browser extensions and found widespread transmission of full page content, third‑party tracking, and profiling behaviour. At the same time, independent security vendors and researchers have published reproducible proof‑of‑concepts showing prompt injection and indirect prompt injection attacks against agentic browsers, demonstrating how assistants that ingest page content can be coerced into leaking secrets or performing unauthorized actions. Brave’s technical disclosure against Perplexity’s Comet is the clearest public example, and subsequent audits and reporting corroborate the fundamental attack surface. Finally, a separate but related class of problems — malicious or weaponized browser extensions — remains active. A long‑running campaign (dubbed “ShadyPanda” by researchers) turned seemingly benign extensions into large‑scale spyware and RCE backdoors affecting millions of Chrome and Edge users, underscoring that the extension ecosystem is still a high‑risk vector for mass surveillance and account compromise.

Why this matters: the new threat model for browsers​

Browsers as agents, not just renderers​

Traditional browsers are passive: they render HTML, run JavaScript in well defined origins, and provide well understood isolation primitives (same‑origin policy, CORS, etc.. AI browsers change that mental model in three ways:
  • They treat page content as input to an interpreter (the LLM). That content may include visible text, hidden comments, screenshots, or OCR’d images.
  • They can act on the web: open tabs, click buttons, fill forms, and follow navigational flows — sometimes using your active session and cookies.
  • They may ship with persistent memories or telemetry that centralize cross‑session behavior, making them a rich profile of a user’s interests, health, finances, and social activity.
Those three properties — content ingestion, agency, and memory — combine to produce an attack surface fundamentally different from typical XSS or DOM‑based attacks. Prompt injection converts benign content into instructions the assistant follows, and agentic actions convert those instructions into real‑world effects.

Confirmed privacy harms: what the USENIX audit found​

The USENIX study simulated realistic browsing personas and tasks (both logged‑out and logged‑in sessions) and intercepted encrypted traffic to see what assistants sent to their backends. Key findings:
  • Several assistants transmitted full page content and form inputs to their servers — including content that would normally be considered sensitive (banking, health portals, private messages).
  • Some assistants continued to record activity across “private” or logged‑in sessions where users would reasonably expect privacy; at least one captured form inputs such as bank or health data.
  • Multiple assistants shared data with third‑party trackers (e.g., Google Analytics), enabling cross‑site profiling.
  • Most assistants implemented personalization or profiling features that persisted between sessions; only one assistant showed no evidence of profiling in the tests.
These empirical results are not theoretical — the researchers documented specific data flows and behaviors, and called for stronger transparency, opt‑in defaults, and privacy‑by‑design approaches.

Case studies: where things already went wrong​

Perplexity’s Comet and indirect prompt injection

Brave published a step‑by‑step proof‑of‑concept showing how Comet’s “Summarize this page” flow could be exploited: hidden instructions placed in a page (for example, in a Reddit comment behind a spoiler) would be ingested as part of the assistant’s input and used to perform multi‑step actions. Brave’s PoC led the assistant to navigate authenticated tabs, extract an OTP, and exfiltrate credentials — all triggered by a single summarization request. Brave’s disclosure timeline ran through July–August 2025 and included several rounds of patching and retesting. Independent reporting and research reproduced the core mechanics and emphasized that the conceptual problem — treating page content as untrusted input — remains broadly unsolved. Perplexity and other vendors have disputed or pushed back on some claims (and vendors sometimes patch specific vectors quickly), but independent reproductions and additional vendor disclosures have repeatedly shown that new covert channels (images, zero‑width characters, URL fragment channels) keep appearing. In short: fixing one vector rarely eliminates all attack paths.

ShadyPanda: millions of installs turned into spyware​

Koi Security’s multi‑phase analysis of the “ShadyPanda” campaign exposed how a long‑trusted set of extensions (some even featured or verified in extension stores) evolved from useful utilities into spyware and RCE backdoors via automatic updates. The consolidated impact numbers — reported as roughly 4.3 million installs across Chrome and Edge — come from store install tallies and vendor telemetry; multiple outlets and CERTs corroborated the core claims. The malicious code harvested browsing histories, search queries, cookies, and fingerprints, and in some cases polled command‑and‑control servers to execute arbitrary JavaScript inside the browser. Removal from stores does not uninstall already‑infected profiles; user remediation is still required. These campaigns show that the extension supply chain remains a high‑risk path for mass surveillance and persistent cross‑device tracking, and the same mechanics can be combined with agentic capabilities for even greater impact.

What’s verified — and what needs caution​

  • Verified: Large‑scale, reproducible prompt‑injection and indirect prompt‑injection PoCs exist and have been published by respected security teams; vendors have patched some vectors but new channels emerge.
  • Verified: The USENIX study (presented Aug 2025) shows many popular AI browser assistants collect and transmit page content, sometimes including sensitive fields, and share data with third parties.
  • Verified: Extension supply‑chain abuses (e.g., ShadyPanda) have impacted millions and remain a practical threat vector.
  • Need caution: Broad claims that thousands of in‑the‑wild account takeovers happened specifically because of prompt injection should be treated carefully; public reporting to date documents PoCs and patched issues rather than proven, widespread exploitation at scale. Vendors and researchers disagree on specific attribution and timelines in some incidents.

Technical anatomy: how prompt injection and covert channels work​

Prompt injection basics​

An assistant typically builds a contextual prompt that includes a user’s instruction plus relevant page content or metadata. If that page content contains embedded instructions or data crafted to change the model’s behavior, the model may treat those instructions as authoritative. That’s prompt injection.

Indirect prompt injection and agentic danger​

Indirect prompt injection happens when a user asks the assistant to “summarize this page” and the summary process pulls in hidden or adversarial content that the model follows as an instruction. Because agentic browsers can act with the user’s session privileges, these coerced instructions can trigger actions that traditional web security protections do not prevent.

Covert channels and real‑world tricks​

  • Zero‑width characters and Unicode smuggling: invisible characters survive naive sanitization.
  • Image‑based channels: faint text inside images is OCR’d and treated as input.
  • URL fragments (HashJack): client‑side fragments can carry instructions that never reach server logs.
  • Clipboard/paste flows: ephemeral data pasted into assistants often bypasses standard DLP controls.
    These channels show that sanitization is necessary but not sufficient; complete solutions require architectural separation and least‑privilege designs.

Practical advice: immediate steps for Windows users and IT teams​

The following actions balance immediacy and feasibility for consumers and organizations. They align with recommended mitigations in independent advisories and enterprise playbooks.

For individual Windows users (prioritize first three)​

  • Audit browser extensions now: uninstall anything you don’t recognise — especially extensions named in public advisories (e.g., Clean Master, WeTab, Infinity V+/New Tab variants). Removal is the only guaranteed way to stop an installed malicious extension from running.
  • Disable agentic actions by default: turn off "agent can act" or "agent can operate on my behalf" settings and use assistants in read‑only or logged‑out mode for sensitive tasks.
  • Use private sessions / logged‑out modes for banking, health portals, and SSO‑backed services. Never paste credentials into an assistant.
  • Clear synced extension storage and sign out of browser sync if you suspect compromise; change passwords and enable MFA where possible.
  • Keep the browser updated and enable vendor blocklists; but remember store removal ≠ remote uninstall — you still must manually remove an installed extension.

For enterprise/IT teams (implement as policy)​

  • Treat agentic features as privileged automation: require admin approval, short‑lived tokens, and explicit per‑site grants. Apply least‑privilege controls and block agentic actions in managed profiles by default.
  • Enforce DLP and monitoring for clipboard and paste events; extend observability to agent‑driven uploads and API calls.
  • Use browser isolation or site allowlists for high‑risk workflows; require explicit human confirmations for any action that accesses credentials or financial flows.
  • Red‑team with prompt‑injection scenarios; add automated adversarial tests to CI/CD and incident response playbooks.

What vendors should — and must — do​

The technical community converges on a common set of design requirements to avoid catastrophic failures:
  • Deny‑by‑default for agentic actions and memory retention; require clearly documented opt‑ins for persistent profiling.
  • Canonicalize and sanitize all page inputs early in the prompt pipeline; explicitly treat scraped content as untrusted data.
  • Provide auditable logs and provenance metadata for any assistant output that synthesizes web content; make it trivial for a user to inspect the exact page text used to generate a response.
  • Offer enterprise contractual guarantees: non‑training clauses for sensitive data, data residency options, and third‑party auditability.
  • Ship consistent UI affordances (visible indicators when the agent is reading or acting) and implement per‑origin confirmation flows for high‑risk actions.
These are engineering essentials, not optional UX choices; without them, agentic browsers create systemic privacy and security risk.

The economic and regulatory dimension​

AI browsers change referral economics: synthesized, zero‑click answers may reduce publisher traffic and ad revenue, which in turn alters incentives around content licensing and data access. Regulators are already paying attention: privacy laws (HIPAA, FERPA, GDPR), consumer protection rules, and competition authorities will likely scrutinize both default placements and data retention practices. The USENIX researchers, security vendors, and independent reporting collectively argue that enforcement and clearer industry standards are necessary to protect consumers and maintain a functioning web economy.

Strengths, trade‑offs, and the path forward​

AI browsers bring real benefits: faster research, improved accessibility, and streamlined workflows for complex, repetitive web tasks. For knowledge workers, on‑demand summarization and multi‑tab synthesis are powerful productivity wins.
But the trade‑offs are material:
  • Convenience vs. control: defaulted telemetry and persistent memories concentrate risk.
  • Productivity vs. auditable safety: agentic automation reduces friction but requires guardrails if used with privileged sessions.
  • Innovation vs. ecosystem health: publisher economics, user trust, and security must be addressed to sustain a healthy open web.
The responsible path forward requires speed in both feature velocity and security: vendor diligence, independent third‑party audits, enterprise governance, and — crucially — informed user choices.

Conclusion​

The recent USENIX study and multiple security disclosures have made one thing clear: AI browsers are not merely new skins on old engines — they are new execution environments where language, data, and action intersect. That convergence unlocks tremendous capability, but it also multiplies privacy and security hazards in ways that existing web protections were not designed to handle. Users and administrators must treat agentic features cautiously, vendors must adopt privacy‑first architectures and auditable behaviors, and regulators must clarify obligations around profiling and data retention.
Until the ecosystem matures — with transparent defaults, independent audits, and robust enterprise controls — the safest posture is conservative: restrict agent privileges, vet and remove untrusted extensions, and assume that any page‑derived content you hand to an assistant could be retained, analyzed, or shared unless explicitly stated otherwise. The AI browser era holds enormous promise, but that promise depends on getting the safety and privacy fundamentals right.
Source: bgr.com AI Web Browsers Are Spying On You - Here's How - BGR
 

Back
Top