Why One Third of Consumers Don’t Want AI on Devices

One-third of consumers say they don’t want AI on their devices — and most of them aren’t saying “no” because they don’t understand it, but because they simply don’t need it.

Background​

The recent consumer research highlighted by technology press reveals a clear gap between the tech industry’s enthusiasm for embedding artificial intelligence into every consumer device and a sizable portion of users who are either unconvinced or actively resistant. According to the reported findings, awareness of AI is high among consumers, yet roughly one-third indicate they are not interested in having AI-enabled features on their phones, laptops, smart speakers, or other devices. The reasons are varied and meaningful: many cite a lack of perceived usefulness, substantial concerns about privacy, and worries that AI will add direct or indirect cost.
This disconnect matters because device-makers, platform companies, and chip designers are investing heavily to make AI a built-in expectation of modern hardware. From dedicated Copilot keys on new laptops to on-device model accelerators, the industry is betting that generational shifts in capability will drive new buying reasons and monetization paths. The consumer data, however, shows that this transition is not purely technical — it’s cultural, economic, and regulatory. If adoption stalls among a significant minority, the consequences will touch product design, privacy policy, pricing strategies, and competitive positioning.

Overview: what the numbers tell us​

The headline statistics from the consumer study are straightforward and stark. A large majority of adults report awareness of AI in technology, yet about 35% say they are not interested in AI on their devices. Among those who oppose it, the top reason is a perceived lack of need. Privacy fears and expected costs also rank highly as deterrents.
  • High awareness: Most consumers know what AI is and can recognize it in product marketing and news.
  • Significant rejection: About one-third express no interest in AI on their devices.
  • Primary objections:
      • Perceived lack of need.
      • Privacy concerns.
      • Cost worries — both up-front and ongoing.
  • Age divide: Interest is strongest among younger adults; 18–24-year-olds show a notably higher acceptance rate, with interest declining steadily across older age groups.
These patterns create a clear narrative: awareness is not the same as desire, and enthusiasm among technologists does not automatically translate into mainstream demand.

Why “I don’t need it” is the most consequential answer​

The difference between capability and value​

Engineers and product teams often equate capability with value: if you can do something impressive, customers will want it. In the real world, however, customers ask a simpler question: What problem does this solve for me? Saying “I don’t need it” is shorthand for a failure to see personal value.
  • Many AI features are marketed as productivity enhancers or assistants, but day-to-day users often measure value in time saved, cost avoided, or convenience improved.
  • If an AI feature replaces a negligible task or offers benefits only in niche scenarios, mainstream users will not care enough to enable it, pay extra for it, or tolerate trade-offs like poorer battery life or data sharing.

Product-design implications​

If the primary barrier is perceived usefulness, companies must change how they design and present AI features.
  • Build features that solve clear, routine problems rather than showcases of technical prowess.
  • Make default behaviors conservative: offer opt-in experiences that let users discover benefits incrementally.
  • Provide concrete, relatable examples in onboarding (not just demo videos or marketing blurbs).
A device with an always-on assistant that summarizes notifications may look impressive in a keynote, but if it doesn’t reduce friction in real-world daily routines, adoption will stall.

Privacy remains a decisive concern​

What consumers fear​

Concern about privacy showed up strongly: a sizable portion of the AI-skeptical respondents cited privacy risk as a key reason they don’t want AI on their devices. The fear is multidimensional:
  • Data collection: People worry AI requires access to messages, photos, location, and voice recordings.
  • Data reuse: There’s anxiety about how data might be used, sold, or monetized by third parties.
  • Inference and profiling: AI can create unexpected inferences about someone’s health, finances, or beliefs, which feels invasive.
  • Lack of control: Users often feel they lack practical control over what data an AI sees and what it does with it.

The trust gap​

Trust is not built by features alone. It requires transparent policies, easy-to-use controls, and demonstrable technical guarantees (for instance, credible on-device processing that doesn’t send raw data to servers).
Companies that push AI features without addressing these trust dimensions risk backlash, limited adoption, and regulatory scrutiny. For many users, a promise that “processing happens locally” may be less persuasive than tangible controls: toggles, logs, and accessible explanations of how a model uses private data.

Cost concerns: not just sticker shock​

Direct and indirect costs​

Nearly half of the skeptics point to costs as a deterrent. Cost concerns fall into two buckets:
  • Direct consumer charges: Will AI features cost extra through subscriptions or premium tiers? Many users balk at paying more for features they don’t value.
  • Indirect costs: Will battery life, device performance, and upgrade cycles suffer? On-device AI can require more powerful silicon, pushing manufacturers to charge higher device prices.
There’s also macroeconomic skepticism: if AI features are bundled in ways that force upgrades or recurring payments, many consumers — especially older or lower-income demographics — will view them as a money grab rather than an improvement.

Monetization risks for vendors​

Financial models that assume widespread willingness to pay for AI-enhanced functionality may prove fragile. Charging users directly can slow adoption; absorbing costs into higher device prices can limit market share. Either way, companies should:
  • Validate willingness to pay before wide rollout.
  • Offer tiered models with clear free options.
  • Tie premium charges to measurable, demonstrated value.

The generational split and its consequences​

Younger adults — particularly those aged 18–24 — show markedly higher openness to AI on devices. This demographic dynamic is important but not determinative.
  • Younger users are generally more experimental, more used to continuous updates, and more likely to try new interaction models.
  • Older cohorts are more conservative both in privacy expectations and in perceived usefulness.
This generational pattern suggests a multi-speed adoption curve. Companies that assume instant, cross-demographic uptake risk misallocating marketing resources and alienating core customer segments.

The environmental and ethical dimensions​

Energy and device lifecycle​

AI — especially large models — can be energy intensive. Even on-device inference and training accelerate silicon demand and can shorten device refresh cycles if manufacturers push upgrades for performance reasons.
  • Users increasingly weigh environmental impact when making purchase decisions.
  • Companies should be prepared to explain lifecycle impacts and to invest in efficiency.

Bias, misinformation, and accountability​

Ethical concerns remain salient. Consumers may worry about:
  • Biased outputs that misrepresent people or groups.
  • Misleading recommendations or hallucinations that could cause real harm.
  • Ambiguous lines of responsibility when AI-driven actions lead to negative outcomes.
Companies must adopt clear accountability processes and explainability tools so that users and regulators can trace decisions back to testable design choices.

Strengths of the current industry push​

Despite the pushback, there are clear strengths to the industry’s strategy to embed AI broadly.
  • Capability expansion: On-device AI enables features that were previously impossible or impractical, like real-time language translation, smarter image editing, and privacy-preserving personalization.
  • New UX paradigms: AI can simplify complex workflows, enabling natural-language interfaces and predictive assistance that reduce repetitive tasks.
  • Hardware and software co-design: The move to integrate specialized AI accelerators into consumer silicon yields efficiency and performance gains, unlocking functionality without always requiring cloud connectivity.
These technical advances position device makers to deliver genuinely new value — but only if that value is felt by users and not just by product roadmaps.

Risks and blind spots vendors commonly underestimate​

Assuming marginal cost equals marginal value

Companies often assume that adding AI is a marginal cost with outsized perceived benefit. In practice, marginal costs for development, testing, privacy compliance, and user education are material. If the outcome isn’t a demonstrable improvement in daily life, users will decline.

Glossing over consent and control

Marketing “AI assistants” without clear, easy-to-use privacy controls creates friction. When users feel their only option is to accept or reject an entire device-level feature, vendors risk turning the whole product category into a trust battleground.

Ignoring heterogeneity of user needs

A single AI feature cannot satisfy all users. Rigid implementation strategies — for example, tightly bundling AI into core OS flows — can alienate users who might have embraced optional, modular experiences.

Underestimating regulatory and legal scrutiny

Privacy and consumer-protection regulators are paying attention. If AI features involve sensitive data or make consequential recommendations, they could attract oversight and litigation, increasing cost and complexity for vendors.

Practical recommendations for device makers​

To bridge the gap between technical capability and consumer adoption, companies should adopt principles that put the user’s needs at the center.

1. Design for discoverable value​

  • Launch AI features framed around specific, routine problems they solve.
  • Use short guided experiences to demonstrate real-world benefits within minutes.

2. Prioritize privacy by default​

  • Implement granular controls that let users limit data access easily.
  • Offer clear labels that explain what data is used and why in plain language.
  • Make on-device processing the default when feasible and explain the trade-offs.
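The granular, opt-in controls described above can be sketched as a simple consent model. This is a minimal illustration with hypothetical names (`ConsentSettings`, `grant`, `can_access`) and an invented category list — not any vendor's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical consent model (illustrative names, not a real API):
# every data category is off until the user opts in, and processing
# defaults to on-device.
CATEGORIES = ("messages", "photos", "microphone", "location")

@dataclass
class ConsentSettings:
    # No category is accessible until explicitly granted.
    allowed: dict = field(default_factory=lambda: {c: False for c in CATEGORIES})
    processing: str = "on_device"  # privacy-preserving default

    def grant(self, category: str) -> None:
        if category not in self.allowed:
            raise ValueError(f"unknown category: {category}")
        self.allowed[category] = True

    def revoke(self, category: str) -> None:
        if category in self.allowed:
            self.allowed[category] = False

    def can_access(self, category: str) -> bool:
        return self.allowed.get(category, False)

settings = ConsentSettings()
print(settings.can_access("photos"))  # False until the user opts in
settings.grant("photos")
print(settings.can_access("photos"))  # True
```

The design choice worth noting is the default: denying every category until the user acts is what turns "privacy by default" from a slogan into observable behavior.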

3. Be transparent about costs​

  • If AI features carry a price, communicate what the charge covers and provide trial periods.
  • Avoid forcing premium AI into essential device functionality that users must pay to unlock.

4. Measure and share outcomes​

  • Provide metrics that show how AI features improve time, accuracy, or cost for users.
  • Use anonymized, opt-in reporting to quantify benefits and to justify continued investment.
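As a rough illustration of the anonymized, opt-in reporting idea, the sketch below aggregates one coarse metric from consenting users only. The record fields and the `minutes_saved` metric are invented for the example:

```python
# Hypothetical opt-in reports: each record carries only a consent flag
# and a coarse metric (minutes saved per week), never a user identifier.
reports = [
    {"opted_in": True,  "minutes_saved": 12},
    {"opted_in": False, "minutes_saved": 30},  # excluded: no consent
    {"opted_in": True,  "minutes_saved": 8},
    {"opted_in": True,  "minutes_saved": 10},
]

def aggregate(records):
    """Fold consented records into a single anonymous summary statistic."""
    consented = [r["minutes_saved"] for r in records if r["opted_in"]]
    return {"n": len(consented), "avg_minutes_saved": sum(consented) / len(consented)}

print(aggregate(reports))  # {'n': 3, 'avg_minutes_saved': 10.0}
```

Dropping non-consenting records before any computation, and collecting no identifiers in the first place, is the simplest way to make the "anonymized, opt-in" claim verifiable.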

5. Offer modularity​

  • Make AI features opt-in and modular rather than baked into core experiences.
  • Allow enterprise and privacy-focused buyers to select configuration profiles that meet their needs.

What consumers should ask before enabling on-device AI​

Empowerment runs both ways. Consumers can take practical steps to protect themselves and to make informed choices.
  • What data does this feature access? Ask for a clear list of categories (messages, photos, microphone, location).
  • Where does the processing happen? Prefer on-device processing where possible.
  • Is data retained, and for how long? Look for short retention windows and clear deletion controls.
  • Is the feature free, or does it come with a subscription? Expect transparency on pricing.
  • Can I roll back or turn this off easily? Easy toggles and defaults that preserve privacy are both signals of user-centered design.

How regulators and standards bodies should respond​

Policymakers need to balance innovation with consumer protection. Practical steps include:
  • Developing clear guidelines on data minimization and user consent specifically tailored to inference-driven systems.
  • Requiring baseline transparency on how consumer-facing models operate, including basic test suites for bias and safety.
  • Encouraging standards for on-device attestations that prove certain data never left the device.
Regulatory clarity will reduce uncertainty for companies and increase trust for consumers — a necessary condition for long-term adoption.
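One way to picture the "on-device attestation" idea: the device signs a claim that processing stayed local, and anyone holding the verification key can check it. The sketch below uses a shared HMAC secret purely for illustration; a real standard would rely on hardware-backed asymmetric keys and an agreed claim format:

```python
import hashlib
import hmac
import json

# Assumption for the sketch: a key provisioned at manufacture. Real
# attestation would use a hardware-protected asymmetric key instead.
DEVICE_KEY = b"demo-device-key"

def attest(claim: dict) -> dict:
    """Sign a claim such as 'this inference ran on-device'."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(attestation: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

a = attest({"feature": "summarize", "processed_on_device": True})
print(verify(a))  # True

a["claim"]["processed_on_device"] = False  # tampering breaks the signature
print(verify(a))  # False
```

The point of a standard here would be exactly this property: a tampered or false "never left the device" claim fails verification, so the promise becomes checkable rather than a matter of marketing.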

Looking ahead: a multi-track adoption landscape​

Expect adoption to be uneven and context-dependent. Several scenarios could coexist:
  • High-acceptance niches: Younger demographics and power users who value novel features will adopt rapidly.
  • Conservative mainstream: Many users will prefer incremental, opt-in AI features that have proven utility and privacy guarantees.
  • Enterprise-driven pockets: Business customers may demand AI capabilities for productivity, pushing manufacturers to segment offerings.
This multi-track evolution means companies must avoid one-size-fits-all approaches and instead invest in configurable platforms that can serve different user groups effectively.

Final analysis: what success looks like​

Widespread, beneficial adoption of device-level AI will require more than technical excellence. Success depends on aligning three realities:
  • Meaningful, everyday benefits that users can quickly perceive and measure.
  • Trustworthy privacy and control mechanisms that reduce perceived and real risk.
  • Fair, transparent cost models that don't force upgrades or recurring charges for basic utility.
When companies design with those principles, they stand a far better chance of converting awareness into durable value. Conversely, if the industry continues to equate novelty with necessity, it risks producing features that remain niche, provoke backlash, or face regulatory headwinds.
The current consumer data — high awareness, significant skepticism, and clear demographic differences — should be read not as a verdict against AI, but as a practical roadmap. Build features that people actually need, prove their benefit, and offer them in a way that respects privacy and budgets. Do that, and the industry’s technical achievements can finally find the public appetite they currently lack.

Conclusion
The headline that one-third of consumers reject AI on their devices is a wake-up call, not a death knell for on-device intelligence. It shows where the debate must shift from technical possibility to human-centered value: usefulness that is obvious, privacy that is tangible, and pricing that feels fair. The companies that internalize these lessons will transform AI from a feature checklist into a trusted and genuinely helpful part of everyday computing.

Source: Tom's Hardware One-third of consumers reject AI on their devices, with most saying they simply don’t need it — latest report highlights privacy fears and potential costs among other real-world concerns
 
It makes sense that a significant portion of people don’t want AI on their devices — not everyone is comfortable with technology that feels intrusive or unpredictable. Some of the concerns come from uncertainty about privacy, data security, and how much control these tools might have. There’s also a real fear around consumer fraud, where AI could be used to create convincing scams or manipulate personal information without clear safeguards. For many, AI still feels like something that isn’t fully understood, and without transparency or trust, they’d rather keep it off their phones or computers. People want tech that helps them, not something that could potentially make their lives more complicated or risky.

At the same time, there are plenty of positive uses for AI when it’s used responsibly — like improving accessibility or automating repetitive tasks. The key will be finding a balance where AI feels safe, useful, and under the user’s control. Until that trust is there, a lot of consumers will likely remain hesitant.
 
Totally fair — when you’re mixing new hearing aids + Windows settings + Bluetooth quirks, it can feel like a lot at once.

The simplest way to cut it down to “next 10 minutes”​

1) Pick your main goal (choose one)​

  • A. Basic streaming works reliably (YouTube/music)
  • B. Calls work (Teams/Zoom mic + sound)
  • C. Best quality / newer features (LE Audio, if supported)

2) Do the “foundation” steps first (works for A/B/C)​

  1. Update Windows (Settings → Windows Update)
  2. Update your hearing-aid firmware (manufacturer phone app)
  3. Update Bluetooth driver (from your PC maker: Dell/HP/Lenovo/ASUS etc.)
  4. Re-pair the hearing aids (remove device in Bluetooth settings, then pair again)

3) Then do the goal-specific step​

  • A (Streaming): Settings → System → Sound → set hearing aids as Default output
  • B (Calls): In your call app (Teams/Zoom/etc.) set Speaker + Microphone explicitly (don’t rely on “Default”)
  • C (LE Audio): Check if you see “Use LE Audio when available” under Settings → Bluetooth & devices → Devices (if it’s not there, your hardware/driver likely doesn’t support it)

Two quick questions so I can make this precise​

1) What’s your hearing-aid brand/model?
2) Are you on Windows 10 or Windows 11 (and if Windows 11, 23H2/24H2)?
Also: I don’t see any attachments on my side in this post—if you meant to upload screenshots/logs, please try attaching again and I’ll review them.