LG Copilot Web Shortcut on webOS: Privacy, Deletion, and Consumer Control

LG’s reversal on the Copilot shortcut is a rare but telling victory for consumer pushback: after days of viral complaints that a Microsoft Copilot icon had been pushed to many webOS TVs without a clear uninstall path, LG says it will add an option to let owners delete the Copilot shortcut — while insisting the tile is only a browser-based shortcut and that microphone use requires explicit consent.

A hand points at the Copilot app on a webOS TV home screen.

Background / Overview​

The controversy began when owners of LG webOS televisions noticed a new Copilot tile on their home screens following a recent over‑the‑air webOS update. Screenshots and high‑traffic forum posts made it clear the tile behaved differently from ordinary apps: users could hide it in the UI but not uninstall it through the standard app manager, and several reported the tile reappeared after a factory reset — behavior consistent with a system‑level or firmware‑baked package rather than a removable content store app.

Microsoft and TV OEMs had publicly signaled this direction earlier: Samsung and LG both promoted Copilot integration for 2025 TVs as part of a broader “AI TV” push, and Microsoft published guidance for Copilot on Samsung’s Tizen-based sets. On Samsung devices Copilot was rolled out as a built-in assistant in select 2025 models; LG’s webOS roadmap also flagged Copilot and other AI features for its 2025 lineup. What created the uproar in mid‑December was the delivery mechanism — a firmware push that placed the Copilot tile in ways many owners felt they didn’t consent to or control.

What LG announced (and why the message matters)​

LG’s public response — relayed to multiple outlets by spokesperson Chris De Maria — contained two key claims: the Copilot entry is a shortcut that opens the Microsoft Copilot web app in the TV’s browser, and features such as microphone input are only activated with the customer’s explicit consent. Crucially, LG also said it “respects consumer choice and will take steps to allow users to delete the shortcut icon if they wish,” but did not provide a firm timeline for when that deletion option would arrive. That clarification addresses two separate complaints: first, that Copilot was a deeply embedded native application (LG insists it is not), and second, that the company had no intention of returning user control (LG now promises to add a delete option). Both clarifications are material — the former affects technical risk modeling (what data flows where), while the latter affects trust and regulatory exposure.

How the implementation matters technically​

Web shortcut vs. native app​

  • A web shortcut is effectively a pinned link that opens a remote web interface inside the TV’s browser shell. Processing and model inference happen in Microsoft’s cloud; the TV is a thin client that streams UI and sends user input to servers.
  • A native app would run code on the TV, could include local binaries, and might integrate more deeply with system services and sensors.
The reported Copilot behavior — launching a Copilot web UI in the TV browser — aligns with LG’s claim that it is a web shortcut. That design lowers local compute requirements and streamlines deployment for many TV models, but it does not eliminate privacy concerns: a web shortcut still forwards voice queries, text inputs, and contextual metadata to remote servers, and the TV’s firmware or other system components can still generate telemetry that augments those cloud interactions.

Why some tiles look “undeletable”​

Embedded devices often use two mechanisms that make a tile effectively permanent:
  • Installing the component as a privileged system package outside the user sandbox, which can be disabled or hidden but not fully uninstalled via normal user flows.
  • Baking the component into the firmware image delivered by FOTA (firmware‑over‑the‑air), so a factory reset restores the firmware image (and the tile) automatically.
Community testing and multiple screenshots suggest LG’s Copilot tile was delivered in a way consistent with one of these mechanisms, which explains why ordinary app‑management tools didn’t offer a delete action. That’s a common manufacturing pattern for system services — useful for DRM or platform features — but it collides with consumer expectations for post‑purchase control.

The privacy angle: ACR, Live Plus, voice data and legal scrutiny​

The Copilot shortcut conversation coincides with a much larger privacy fight around smart TVs and their telemetry stacks. Central to that fight is Automated Content Recognition (ACR) — a class of features that can fingerprint or analyze what’s playing on screen to deliver personalized recommendations or advertising. LG markets its ACR tooling under names like Live Plus, and critics have long warned that ACR, combined with other telemetry, creates an expansive profile of users’ viewing behavior and household context.

In mid‑December, the Texas Attorney General filed lawsuits against five major TV manufacturers — Sony, Samsung, LG, Hisense and TCL — alleging those companies unlawfully collected personal data using ACR and other mechanisms, and that these practices were deployed without adequate disclosure or consent. The Texas filings describe ACR systems that can capture viewscreen imagery multiple times per second, transmit it in real time, and use it for advertising or profiling — allegations that dramatically raise the stakes for any added AI assistant or system feature that heightens context capture. Texas later secured a temporary restraining order against Hisense for specific ACR practices.

Taken together, these developments show why a seemingly small UI decision — pinning a Copilot shortcut to the home screen as a system tile — can cascade into broader regulatory and consumer‑rights scrutiny. Regulators are already investigating whether TV telemetry practices are transparent, opt‑in, and compliant with consumer protection laws; forced or opaque integrations risk adding claims of deceptive practices and “dark patterns.”

What this means for data flows and sensor use​

Even if Copilot is a web shortcut, the privacy calculus depends on at least three axes:
  • What telemetry the TV sends to LG (firmware logs, ACR fingerprints, usage signals).
  • What Copilot prompts and voice data are sent to Microsoft if users interact with the shortcut or sign into a Microsoft account.
  • Whether any microphone, camera, or on‑screen capture can be activated without clear, persistent opt‑in.
LG’s statement that microphone input is activated only with “explicit consent” reduces some immediate alarm, but the ecosystem still requires clarity about what counts as “explicit” and whether consent persists across updates, account linkages, or firmware provisioning. The regulatory suits show authorities will scrutinize not only what is captured, but how defaults, disclosures, and opt‑out paths are implemented.

Consumer practical steps right now​

For readers who own affected LG webOS TVs and want to reduce exposure while the vendor follows through on deletion options, the following mitigations are practical, ordered from low to high friction:
  • Hide the Copilot tile: Home → Edit/App mode → select Copilot → Hide. This removes visual prominence but does not uninstall the component.
  • Disable Live Plus / ACR and ad personalization: Settings → General → System / Privacy → find Live Plus or Content Recognition and toggle off. This reduces automatic screen‑fingerprinting telemetry.
  • Turn off voice recognition features unless needed: Settings → Privacy / Voice Recognition → Off. Avoid linking a Microsoft account on the TV to limit persistent account‑tied telemetry.
  • Use an external streaming stick (Apple TV, Roku, Fire TV, Nvidia Shield) as your primary interface and treat the LG set as a “dumb” display. This is the most reliable route to avoid OEM smart‑stack telemetry.
  • Place the TV on a segmented guest VLAN, or use router‑level DNS blocking or Pi‑hole rules for known telemetry domains — technically effective but may break updates and services. Advanced users only.
  • As a last resort, keep the TV offline — this disables streaming and updates but blocks outbound telemetry entirely.
These steps have been recommended across community threads and mainstream reporting as the immediate practical mitigations. They are imperfect workarounds until LG ships a formal delete option and publishes precise telemetry disclosures.
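For the router‑level blocking step above, a dnsmasq or Pi‑hole style rule set might look like the fragment below. The domain names are deliberate placeholders (using the reserved .invalid TLD), not a verified list of LG or Microsoft telemetry endpoints; identify real endpoints from your own router's DNS query logs before blocking anything, since blocking service domains can break apps and firmware updates.

```
# /etc/dnsmasq.d/tv-telemetry.conf -- illustrative only
# Replace the placeholder domains with endpoints observed in your router's
# DNS logs; blocking legitimate service domains can break apps and updates.
address=/example-acr-telemetry.invalid/0.0.0.0
address=/example-smartad.invalid/0.0.0.0
```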

Industry and regulatory implications​

  • Forced or hard‑to‑opt‑out placements of partner services on hardware consumers have already bought are attracting regulatory attention. The Texas lawsuits and the temporary restraining order against Hisense are clear signals that state authorities are willing to pursue enforcement if they believe consumers were misled or defaulted into surveillance.
  • Deploying Copilot as a web shortcut is commercially efficient for LG and Microsoft, but it does not absolve either company from responsibility for deployment defaults, consent flows, and prescriptive disclosures. Regulators will look at the whole chain: firmware defaults, privacy UX, telemetry retention, and whether customers can meaningfully opt out.
  • The U.S. International Trade Commission and other federal bodies are paying closer attention to smart‑TV supply chains and the software stacks that bring third‑party services to mass devices. That broader attention increases the legal and reputational risk for vendors that make intrusive defaults a business tactic.

Assessment: strengths, weaknesses and the risk profile​

Notable strengths of LG’s response​

  • The company immediately clarified the technical nature of the Copilot tile (web shortcut vs native app), which reduces speculative technical fear about unknown local code running on devices.
  • LG committed to adding a delete option, which acknowledges the core consumer grievance: loss of post‑purchase control.

Persistent weaknesses and red flags​

  • No firm timeline for remediation was provided, leaving affected users and regulators waiting.
  • The presence of a system‑pushed shortcut that reinstates after a factory reset suggests the company’s update and packaging practices treat partner services as privileged system components, a practice that is likely to face sustained consumer resistance.
  • LG’s claim about “explicit consent” for microphone activation needs a clearer operational definition and engineering documentation (for example: is consent per session, persistent, revocable, and transparently logged?). Absent that, skepticism will remain.

Risk profile​

  • Reputational risk: negative press and viral community backlash can depress brand sentiment for otherwise well‑regarded hardware.
  • Regulatory and legal risk: the Texas lawsuits and other investigations signal that vendor practices around ACR, defaults and post‑sale software changes are likely targets for enforcement and litigation. If regulators find misleading consent flows or dark patterns, penalties and injunctive relief are possible.
  • Operational risk: pushing system‑level changes via firmware without clear opt‑out or rollback paths invites user remediation (network isolation, external streamers) that undermines the very adoption metrics the company seeks to improve.

What LG, Microsoft and OEMs should do next (concise checklist)​

  • Ship a delete/uninstall option for the Copilot shortcut with a firm and public timeline.
  • Publish a clear technical and privacy notice describing what telemetry is generated when the shortcut is present, what is sent to Microsoft, retention windows, and third‑party sharing. Make it accessible and machine‑readable.
  • Default to privacy‑minimal settings for any new feature delivered post‑sale (ACR off, ad personalization off) and require an active opt‑in with a clear, one‑screen summary.
  • Provide a firmware rollback path and detailed update notes so independent auditors can verify what an OTA update changes.
  • Improve consent UX so microphone and other sensor permissions are explicit, revocable, and persistent across updates.
These steps would ease regulatory pressure, restore consumer trust, and create a clearer, safer runway for bringing helpful AI features to living‑room displays.

Closing analysis​

The Copilot‑on‑LG episode is not just about a single shortcut tile; it’s a case study in how modern connected devices erode a buyer’s expectation of post‑purchase software control when manufacturers monetize visibility and telemetry. LG’s about‑face — promising a delete option — is an important acknowledgement that consumers expect choice. Yet the underlying technical and business choices that allowed a non‑removable shortcut to ship in the first place remain a systemic problem across the smart‑TV industry.
Regulators are responding in real time: state attorneys general have filed lawsuits and sought temporary orders, and federal interest in smart‑TV software provision is evident. For consumers, the immediate path is simple: take pragmatic steps to reduce exposure (hide the tile, disable ACR/voice features, avoid account sign‑in, or run an external streamer) while demanding transparency about what data flows where. For vendors, the lesson is equally straightforward: ship convenience, but never at the cost of clear, persistent consumer consent and meaningful uninstallability.
This episode will be watched closely by privacy advocates, policymakers, and mainstream buyers alike because it sits at the intersection of AI adoption, software‑as‑service economics, and longstanding expectations about device ownership. How LG implements the promised delete option — and how candidly it discloses Copilot’s telemetry and consent mechanics — will determine whether this becomes a one‑off controversy or a structural inflection point for how AI companions are deployed in the home.
Source: channelnews.com.au LG Has Change Of Heart Over Copilot App On Their Spy Enabled WebOS TV’s – channelnews
 

Windows users are waking up to a familiar — and growing — performance headache: several of the most widely used Windows 11 applications are consuming far more RAM than they used to, in some cases ballooning from a few hundred megabytes to multiple gigabytes during normal use. The shift toward browser-based runtimes such as Chromium, Electron, and WebView2 is the primary technical cause, and the result is visible across messaging apps, chat/voice clients, and even some first‑party Microsoft clients. This trend not only degrades responsiveness on 8–16 GB machines but also has real economic consequences as memory prices and upgrade costs rise. The problem is well documented in hands‑on tests and community traces, and vendors are responding with short‑term mitigations while they work on deeper fixes.

Laptop screen with floating app windows and a RAM pressure gauge showing 8+ GB.

Background / Overview​

Modern desktop apps increasingly ship as web‑driven clients: developers embed a browser engine (Chromium) or host web content via WebView2, or build cross‑platform shells using Electron. Those choices dramatically cut development time and simplify cross‑platform feature parity, but they also inherit the browser’s multi‑process architecture and memory behavior. Each renderer, plugin, and helper process comes with its own JavaScript heap, native buffers, and caches — and those add up fast over long sessions. The result is a proliferation of resident processes that behave like tabs in a browser: useful, but hungry.
Windows 11 itself sets 4 GB as the minimum memory requirement, but practical everyday usage is higher: 8 GB is the realistic minimum for light multi‑tasking and 16 GB (or more) is advisable for heavier workflows or gaming. The official minimum is a floor for installation, not a target for smooth multitasking. Two related failure modes are commonly observed:
  • Legitimate large working sets — e.g., streaming video, decoded audio buffers, or large conversation histories — that should be large while in active use but are expected to shrink when idle.
  • Memory retention and leaks — allocations that are not released properly, causing monotonic growth over hours or days and requiring an application restart to recover. Community traces and vendor telemetry show both patterns in the wild.

What’s happening right now: the headline cases​

Discord: an Electron client under scrutiny​

Discord — the game‑ and community‑focused chat/voice client — has been a lightning rod for these complaints. Multiple independent tests and user reports show the Discord desktop client climbing from under 1 GB to as much as 3–4 GB during routine activities such as voice streaming or screen sharing, and in many cases failing to release that memory until the process is restarted. That behaviour led Discord to test an automatic restart experiment in constrained circumstances to mitigate runaway memory growth. Why Discord stands out:
  • Built on Electron, which embeds Chromium and Node.js, so it carries the full browser process model.
  • Long‑running voice and streaming workflows allocate large buffers and media contexts that can be held in memory.
  • Memory often does not shrink back to baseline without a restart, suggesting retained state or leaks in certain stacks.

WhatsApp for Windows: a return to WebView2 and a jump in RAM​

WhatsApp’s Windows client has reportedly reverted from a native WinUI app to a WebView2 (Chromium) wrapper in some builds. Tests show large increases in resident memory: the modern WebView2‑based build can use hundreds of megabytes on the login screen and push into the gigabyte range when loading many chats. That’s a dramatic increase compared with the former native client footprint and a clear example of the cost of shipping web‑backed builds for desktop.

Microsoft Teams and other first‑party clients​

Even Microsoft’s own Teams has non‑trivial memory demands: launching the client can take around a gigabyte of memory in real‑world sessions, especially when meetings, shared screens, or device telemetry are active. Teams’ architecture has evolved to isolate media paths into separate processes, a design choice intended to limit full‑client failures — but the total system working set is still significant for long meetings. Following vendor guidance for reducing Teams memory usage (turning off animations, disabling hardware acceleration, clearing caches) can produce meaningful improvements for end users.

Why this matters now: technical and economic context​

  • Development velocity vs. resource efficiency. Developers choose Electron/WebView2 for speed, cross‑platform parity, and a single codebase. That trade‑off accelerates feature delivery but delegates memory management to a stack not optimized for lightweight native clients. The engineering effort to re‑architect heavy clients into lean native apps is non‑trivial and expensive, so many vendors prefer short‑term mitigations.
  • Memory market pressures. DRAM pricing and supply have become more volatile as manufacturers prioritize high‑bandwidth memory and server DRAM for AI workloads. That raises the real cost of “just add more RAM” as a practical mitigation for affected users, and increases the value of software-side memory efficiency. Community research has pointed to rising DRAM costs and constrained consumer availability in recent quarters.
  • User impact on mainstream hardware. Devices with 8 GB and 16 GB of RAM — still common in laptops and budget desktops — are the worst affected. Excessive memory use by chat clients and background services can push systems to heavy paging, producing stutters, input lag, and poor gaming performance. For many users the immediate practical mitigations are to close offending apps, use browser or PWA versions, or upgrade hardware — none of which is a perfect solution.

Evidence and verification — what tests and sources show​

Multiple hands‑on tests and independent articles document the problem and corroborate vendor statements and community repros:
  • WindowsLatest measured WhatsApp moving from tens or hundreds of megabytes on the native client to 1 GB or more on a WebView2 build in routine use. Those tests are supported by PC Gamer and other tech publications reporting similar magnitudes.
  • TechRadar and other outlets reproduced Discord memory rising significantly during streaming and failing to release memory until a restart. Discord’s own engineering notes and experiments have also acknowledged these memory growth patterns.
  • Community telemetry, forum traces and internal reproductions highlight additional contributing factors, such as changes in service startup types (trigger → automatic), which expose previously dormant subsystems to long‑running residency and amplify observable memory footprints. This behaviour has been correlated with certain Windows cumulative updates that change service semantics, increasing the chance that a small leak becomes a large real‑world problem.
Caveat on numbers: reported peak and idle values vary by OS build, installed extensions, loaded chat history, GPU driver, codecs, and hardware. Some headlines quote a worst‑case 3–4 GB spike for Discord; other tests show more moderate increases. Treat specific gigabyte claims as indicative of real regressions rather than universal constants. Where possible, cross‑reference multiple hands‑on measurements before generalizing for every environment.

Technical anatomy: why Chromium/Electron/WebView2 drives higher RAM use​

  • Multi‑process design. Chromium isolates renderers, plugins, GPU and utility processes. Each renderer has its own JavaScript heap and native buffers. Electron apps instantiate multiple Chromium renderers plus Node.js contexts, increasing base memory even for a single app window.
  • Long‑lived JS heaps and retained DOM. Web apps commonly cache conversation history, decoded media frames, avatars, and attachments in memory to provide snappy UX. If the app fails to evict or release those caches correctly, the working set grows. Garbage collection in JS is non‑deterministic and can be delayed or prevented by lingering references.
  • Native add‑ons and codecs. Some desktop features (screen sharing, hardware encoders/decoders, voice pipelines) use native buffers and drivers outside of the JS runtime. These allocations can be large and are not subject to GC, so lifecycle bugs in teardown code cause persistent memory use.
  • Service and OS interactions. Windows services and brokered host processes (AppXSVC, Delivery Optimization, backgroundTaskHost) can become resident because of startup‑type changes or OS updates; when those subsystems hold caches or buffers, their long‑term residency amplifies memory pressure. Community traces have linked certain Windows updates that change startup semantics to increased steady‑state memory exposure.
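The cache‑retention failure mode described above can be shown in miniature: an unbounded cache keeps every entry reachable, so its working set grows monotonically over a long session, while a bounded LRU cache evicts old entries and reaches a steady state. A minimal Python sketch (illustrative only; real clients cache media buffers and DOM state, not strings):

```python
from collections import OrderedDict

class UnboundedCache:
    """Grows forever: nothing is evicted, so references pin memory."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def __len__(self):
        return len(self._store)

class LRUCache:
    """Bounded cache: the least-recently-used entry is evicted at capacity."""
    def __init__(self, capacity):
        self._capacity = capacity
        self._store = OrderedDict()

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)  # mark as recently used
        self._store[key] = value
        if len(self._store) > self._capacity:
            self._store.popitem(last=False)  # drop the oldest entry

    def __len__(self):
        return len(self._store)

# Simulate a long session loading 10,000 "messages" into each cache
leaky, bounded = UnboundedCache(), LRUCache(capacity=500)
for i in range(10_000):
    leaky.put(i, "x" * 1024)
    bounded.put(i, "x" * 1024)

print(len(leaky))    # 10000 -- working set keeps growing
print(len(bounded))  # 500   -- steady-state footprint
```

The same discipline applied to conversation history, avatars, or decoded frames is what keeps a web‑backed client's heap from growing without bound.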

Practical, safe steps for users and IT admins​

If an app is pushing your machine into swap and causing stutters, there are immediate, reversible steps you can take to restore responsiveness.
  • Identify the culprits
  • Use Task Manager, Resource Monitor, or Process Explorer to sort by memory and observe working set growth over time. Track which process increases steadily rather than peaks briefly.
  • Short‑term user fixes
  • Restart the offending app periodically (or use the web client in a browser which may expose better tab‑sleeping controls).
  • Disable Hardware Acceleration in the app (some apps expose this setting) to reduce GPU/driver interactions that can trigger large allocations.
  • Disable or trim background permissions for non‑essential Store/UWP apps: Settings → Apps → Installed apps → Advanced options → Background app permissions.
  • Manage startup/resident processes
  • Disable non‑essential startup entries: Settings → Apps → Startup or Task Manager → Startup.
  • For enterprise fleets, use Group Policy / MDM to control startup behaviour for known heavy clients.
  • Safe OS‑level mitigations
  • Monitor and, if necessary, limit peer update services or background services that appear to grow memory over time (e.g., Delivery Optimization). Document and test changes before rolling them into production images. Community reproductions have shown DoSvc/AppXSVC interactions can manifest as long‑running memory growth after certain updates.
  • Longer‑term options
  • If you regularly run multiple heavy Electron/Chromium clients and performance is critical, consider provisioning 16 GB or 32 GB machines for those users. This is a blunt instrument but effective while vendors patch leaks. Note that memory upgrades carry real costs given market dynamics.
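The "identify the culprits" step above can be scripted rather than eyeballed in Task Manager. The sketch below samples a total working set at intervals and flags sustained growth; the reader based on the third‑party psutil package, the process name, and the thresholds are all illustrative assumptions:

```python
import time

def sample_working_set(samples, interval, read_total):
    """Collect `samples` readings, `interval` seconds apart.
    `read_total` is any zero-argument callable returning bytes in use."""
    readings = []
    for _ in range(samples):
        readings.append(read_total())
        time.sleep(interval)
    return readings

def psutil_reader(process_name):
    """Build a reader that sums RSS across processes matching a name.
    Requires the third-party psutil package (pip install psutil)."""
    import psutil
    def read_total():
        total = 0
        for proc in psutil.process_iter(["name", "memory_info"]):
            name = proc.info["name"] or ""
            if process_name.lower() in name.lower():
                total += proc.info["memory_info"].rss
        return total
    return read_total

def looks_monotonic(readings, tolerance=0.02):
    """True when every sample grows (within tolerance) and the last exceeds
    the first -- the sustained-growth signature of a leak, not a brief spike."""
    if len(readings) < 2:
        return False
    pairwise_growth = all(b >= a * (1 - tolerance)
                          for a, b in zip(readings, readings[1:]))
    return pairwise_growth and readings[-1] > readings[0]
```

On a machine with psutil installed, `sample_working_set(5, 120.0, psutil_reader("Discord"))` would collect five readings two minutes apart; `looks_monotonic` on the result distinguishes steady growth from a legitimate transient spike.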

What vendors are doing (and what they should do)​

Vendors are taking a layered approach: immediate tactical mitigations, added telemetry and profiling, and longer‑term architectural work.
  • Tactical: restart experiments (Discord), memory caps and watchdogs, user guidance to use the web client or disable features. These reduce short‑term pain but are not permanent fixes.
  • Instrumentation and profiling: adding long‑running session telemetry and memory profiling to reproduce leaks that only appear after hours or days. This is essential — leaks seldom show in short unit tests.
  • Architectural rework: modularizing memory‑heavy components into separate, restartable processes or native modules so that a leak in a codec or renderer does not bloat the entire client. This requires engineering investment and is the most durable route, but it is slow and costly.
Recommendations for vendors:
  • Ship explicit memory‑lifecycle tests in CI that run long sessions (hours) and watch for monotonic growth.
  • Offer a “lite”/native shell for constrained environments or configurable cache limits.
  • Publish memory‑profiled binaries or usage guidance so IT admins can choose the optimal build for fleet deployment.

Risks and trade‑offs​

  • User experience vs. resource footprint. Removing features to save memory can damage user experience. The realistic path is better lifecycle management and modularization, which costs development time and money.
  • Automatic restarts: temporary but risky. Automated restarts can recover memory, but applied incorrectly they may interrupt active sessions, cause data loss, or degrade trust. Any restart policy must be conservative and user‑controllable. Discord’s experiment includes safeguards (run only if idle, limited frequency) — a pragmatic compromise.
  • Silent regressions via OS updates. Changes in Windows service startup types or unexpected interactions with device drivers can amplify leaks that previously went unnoticed. Enterprises that enforce strict update policies should monitor memory metrics closely after cumulative updates.
  • Misdiagnosis. Not every high memory reading is a leak. Windows aggressively caches file data and sets memory aside for predictable performance improvements. Diagnose carefully: look for sustained working‑set growth in a single process over time as a key sign of a leak.
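A conservative restart policy of the kind described above (act only when memory is high, the user is idle, and a minimum interval has elapsed since the last restart) can be sketched as a small decision object. The thresholds and idle signal are assumptions for illustration, not Discord's actual values:

```python
import time
from dataclasses import dataclass, field

@dataclass
class RestartPolicy:
    """Decide whether a memory-recovering restart is safe to run now."""
    memory_limit_bytes: int
    min_idle_seconds: float          # user must be inactive at least this long
    min_restart_interval: float      # rate limit between restarts, in seconds
    _last_restart: float = field(default=0.0, repr=False)

    def should_restart(self, rss_bytes, idle_seconds, now=None):
        now = time.monotonic() if now is None else now
        if rss_bytes < self.memory_limit_bytes:
            return False                      # memory is fine: do nothing
        if idle_seconds < self.min_idle_seconds:
            return False                      # user is active: never interrupt
        if now - self._last_restart < self.min_restart_interval:
            return False                      # recently restarted: rate-limited
        self._last_restart = now
        return True

# Illustrative configuration (hypothetical thresholds)
policy = RestartPolicy(
    memory_limit_bytes=2 * 2**30,   # 2 GB working set
    min_idle_seconds=600,           # 10 minutes of inactivity
    min_restart_interval=6 * 3600,  # at most one restart per 6 hours
)
```

The point of the guards is ordering: the user‑activity check vetoes everything else, so a restart can never interrupt an active session regardless of how much memory has accumulated.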

What to watch next (short‑ and medium‑term signals)​

  • Vendor changelogs and client updates that explicitly reference memory fixes, modularization, or replaced runtimes. When vendors commit to memory lifecycle fixes, that’s the durable signal users want.
  • Microsoft KB and service behaviour changes. Watch for Microsoft notes about service startup semantics and any advisories that document changes to AppXSVC/Delivery Optimization or similar services; those changes materially affect long‑running memory footprints.
  • Broader market signals on DRAM pricing and supply. If consumer DRAM remains constrained, the economic pressure to ship more efficient software increases.
  • Community reproduction reports and tools. Expect more wrapper scripts, performance tests and Sysinternals traces from the enthusiast community as they dig into which versions and workloads produce the worst behaviour.

Bottom line and clear takeaways for Windows users​

  • The problem is real and measurable: several popular Windows apps — notably Electron‑based clients and WebView2 wrappers — are using markedly more RAM than their native predecessors, and that behavior can push 8–16 GB PCs into painful swap activity. Independent tests and community traces document gigabyte‑class increases for clients such as Discord and WhatsApp in heavy scenarios.
  • Short‑term user actions are effective and safe: monitor Task Manager, restart problem apps, use the browser/web client when feasible, trim startup apps, and disable non‑essential background permissions. These steps are immediate and reversible.
  • Long‑term solutions require vendor investment: better memory lifecycle discipline, modularization of heavy subsystems, and targeted profiling for long-lived sessions are the engineering fixes that will deliver durable relief. Expect vendors to patch aggressively, but accept that deep architectural changes take time and money.
  • Treat “4 GB” as a minimum installation requirement, not a real‑world target for common multitasking. Budget at least 8 GB for light use and 16 GB for comfortable multitasking with modern web‑heavy clients and collaboration tools.

Final note on verification and uncertainties​

The measurements cited across community and media reports are reproducible in many environments, but absolute numbers vary by build, hardware, driver versions, and user data sets. Headlines that cite a single machine’s 4 GB spike may not reflect every user’s experience. Where possible, rely on multiple independent reproductions before concluding that a given number is universal. Vendors and communities continue to update their tooling and telemetry, and meaningful change is likely in the coming releases as engineers prioritize memory lifecycle work.
The memory crisis shaping parts of the Windows 11 ecosystem is a modern product of a long‑standing trade‑off: developer velocity versus native efficiency. Until vendors finish the hard engineering work to close leaks and modularize heavy stacks, practical maintenance and thoughtful device provisioning are the safest routes to restore predictable, responsive performance for Windows users.

Source: Inbox.lv Failure: Windows applications are running out of memory
 
