One-third of consumers say they don’t want AI on their devices — and most of them aren’t saying “no” because they don’t understand it, but because they simply don’t need it.
Background
The recent consumer research highlighted by the technology press reveals a clear gap between the tech industry’s enthusiasm for embedding artificial intelligence into every consumer device and a sizable portion of users who are either unconvinced or actively resistant. According to the reported findings, awareness of AI is high among consumers, yet roughly one-third indicate they are not interested in having AI-enabled features on their phones, laptops, smart speakers, or other devices. The reasons are varied and meaningful: many cite a lack of perceived usefulness, substantial concerns about privacy, and worries that AI will add direct or indirect cost.

This disconnect matters because device-makers, platform companies, and chip designers are investing heavily to make AI a built-in expectation of modern hardware. From dedicated Copilot keys on new laptops to on-device model accelerators, the industry is betting that generational shifts in capability will drive new buying reasons and monetization paths. The consumer data, however, shows that this transition is not purely technical — it’s cultural, economic, and regulatory. If adoption stalls among a significant minority, the consequences will touch product design, privacy policy, pricing strategies, and competitive positioning.
Overview: what the numbers tell us
The headline statistics from the consumer study are straightforward and stark. A large majority of adults report awareness of AI in technology, yet about 35% say they are not interested in AI on their devices. Among those who oppose it, the top reason is a perceived lack of need. Privacy fears and expected costs also rank highly as deterrents.

- High awareness: Most consumers know what AI is and can recognize it in product marketing and news.
- Significant rejection: About one-third express no interest in AI on their devices.
- Primary objections:
- Perceived lack of need.
- Privacy concerns.
- Cost worries — both up-front and ongoing.
- Age divide: Interest is strongest among younger adults; 18–24-year-olds show a notably higher acceptance rate, with interest declining steadily across older age groups.
Why “I don’t need it” is the most consequential answer
The difference between capability and value
Engineers and product teams often equate capability with value: if you can do something impressive, customers will want it. In the real world, however, customers ask a simpler question: What problem does this solve for me? Saying “I don’t need it” is shorthand for a failure to see personal value.

- Many AI features are marketed as productivity enhancers or assistants, but day-to-day users often measure value in time saved, cost avoided, or convenience improved.
- If an AI feature replaces a negligible task or offers benefits only in niche scenarios, mainstream users will not care enough to enable it, pay extra for it, or tolerate trade-offs like poorer battery life or data sharing.
Product-design implications
If the primary barrier is perceived usefulness, companies must change how they design and present AI features.

- Build features that solve clear, routine problems rather than showcases of technical prowess.
- Make default behaviors conservative: offer opt-in experiences that let users discover benefits incrementally.
- Provide concrete, relatable examples in onboarding (not just demo videos or marketing blurbs).
Privacy remains a decisive concern
What consumers fear
Concern about privacy showed up strongly: a sizable portion of the AI-skeptical respondents cited privacy risk as a key reason they don’t want AI on their devices. The fear is multidimensional:

- Data collection: People worry AI requires access to messages, photos, location, and voice recordings.
- Data reuse: There’s anxiety about how data might be used, sold, or monetized by third parties.
- Inference and profiling: AI can create unexpected inferences about someone’s health, finances, or beliefs, which feels invasive.
- Lack of control: Users often feel they lack practical control over what data an AI sees and what it does with it.
The trust gap
Trust is not built by features alone. It requires transparent policies, easy-to-use controls, and demonstrable technical guarantees (for instance, credible on-device processing that doesn’t send raw data to servers).

Companies that push AI features without addressing these trust dimensions risk backlash, limited adoption, and regulatory scrutiny. For many users, a promise that “processing happens locally” may be less persuasive than tangible controls: toggles, logs, and accessible explanations of how a model uses private data.
Cost concerns: not just sticker shock
Direct and indirect costs
Nearly half of the skeptics point to costs as a deterrent. Cost concerns fall into two buckets:

- Direct consumer charges: Will AI features cost extra through subscriptions or premium tiers? Many users balk at paying more for features they don’t value.
- Indirect costs: Will battery life, device performance, and upgrade cycles suffer? On-device AI can require more powerful silicon, pushing manufacturers to charge higher device prices.
Monetization risks for vendors
Financial models that assume widespread willingness to pay for AI-enhanced functionality may prove fragile. Charging users directly can slow adoption; absorbing costs into higher device prices can limit market share. Either way, companies should:

- Validate willingness to pay before wide rollout.
- Offer tiered models with clear free options.
- Tie premium charges to measurable, demonstrated value.
The generational split and its consequences
Younger adults — particularly those aged 18–24 — show markedly higher openness to AI on devices. This demographic dynamic is important but not determinative.

- Younger users are generally more experimental, more used to continuous updates, and more likely to try new interaction models.
- Older cohorts are more conservative both in privacy expectations and in perceived usefulness.
The environmental and ethical dimensions
Energy and device lifecycle
AI — especially large models — can be energy intensive. Even on-device inference and training accelerate silicon demand and can shorten device refresh cycles if manufacturers push upgrades for performance reasons.

- Users increasingly weigh environmental impact when making purchase decisions.
- Companies should be prepared to explain lifecycle impacts and to invest in efficiency.
Bias, misinformation, and accountability
Ethical concerns remain salient. Consumers may worry about:

- Biased outputs that misrepresent people or groups.
- Misleading recommendations or hallucinations that could cause real harm.
- Ambiguous lines of responsibility when AI-driven actions lead to negative outcomes.
Strengths of the current industry push
Despite the pushback, there are clear strengths to the industry’s strategy to embed AI broadly.

- Capability expansion: On-device AI enables features that were previously impossible or impractical, like real-time language translation, smarter image editing, and privacy-preserving personalization.
- New UX paradigms: AI can simplify complex workflows, enabling natural-language interfaces and predictive assistance that reduce repetitive tasks.
- Hardware and software co-design: The move to integrate specialized AI accelerators into consumer silicon yields efficiency and performance gains, unlocking functionality without always requiring cloud connectivity.
Risks and blind spots vendors commonly underestimate
Assuming marginal cost equals marginal value
Companies often assume that adding AI is a marginal cost with outsized perceived benefit. In practice, marginal costs for development, testing, privacy compliance, and user education are material. If the outcome isn’t a demonstrable improvement in daily life, users will decline.

Glossing over consent and control
Marketing “AI assistants” without clear, easy-to-use privacy controls creates friction. Users who feel their only option is to accept or reject an entire device-level feature can turn the whole product category into a trust battleground.

Ignoring heterogeneity of user needs
A single AI feature cannot satisfy all users. Rigid implementation strategies — for example, tightly bundling AI into core OS flows — can alienate users who might have embraced optional, modular experiences.

Underestimating regulatory and legal scrutiny
Privacy and consumer-protection regulators are paying attention. If AI features involve sensitive data or make consequential recommendations, they could attract oversight and litigation, increasing cost and complexity for vendors.

Practical recommendations for device makers
To bridge the gap between technical capability and consumer adoption, companies should adopt principles that put the user’s needs at the center.

1. Design for discoverable value
- Launch AI features framed around specific, routine problems they solve.
- Use short guided experiences to demonstrate real-world benefits within minutes.
2. Prioritize privacy by default
- Implement granular controls that let users limit data access easily.
- Offer clear labels that explain what data is used and why in plain language.
- Make on-device processing the default when feasible and explain the trade-offs.
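The “granular controls” and user-visible logs described above can be pictured as a toy consent store: every data category starts denied, and every access check is recorded where the user can inspect it. The category names and the `ConsentStore` class are illustrative only, not any platform’s real API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical data categories an on-device assistant might request.
CATEGORIES = {"messages", "photos", "microphone", "location"}

@dataclass
class ConsentStore:
    """Privacy by default: every category starts denied, and every
    access attempt is appended to a user-visible audit log."""
    granted: set = field(default_factory=set)   # categories the user opted into
    log: list = field(default_factory=list)     # (timestamp, category, allowed)

    def grant(self, category: str) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.granted.add(category)

    def revoke(self, category: str) -> None:
        self.granted.discard(category)

    def check(self, category: str) -> bool:
        # Log every lookup, allowed or not, so the user can audit access.
        allowed = category in self.granted
        self.log.append((datetime.now(timezone.utc), category, allowed))
        return allowed

consent = ConsentStore()
assert not consent.check("microphone")   # denied by default
consent.grant("microphone")
assert consent.check("microphone")       # allowed only after explicit opt-in
consent.revoke("microphone")
assert not consent.check("microphone")   # revocation takes effect immediately
```

The point of the sketch is the default-deny posture plus an always-on log: even a denied lookup leaves a trace the user can review, which is the kind of tangible control the article argues is more persuasive than a bare “processing happens locally” claim.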
3. Be transparent about costs
- If AI features carry a price, communicate what the charge covers and provide trial periods.
- Avoid forcing premium AI into essential device functionality that users must pay to unlock.
4. Measure and share outcomes
- Provide metrics that show how AI features improve time, accuracy, or cost for users.
- Use anonymized, opt-in reporting to quantify benefits and to justify continued investment.
5. Offer modularity
- Make AI features opt-in and modular rather than baked into core experiences.
- Allow enterprise and privacy-focused buyers to select configuration profiles that meet their needs.
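One way to picture this modular, opt-in approach is a small feature registry in which a module runs only when the user has both opted in and granted the data it needs. The feature names and registry shape below are hypothetical, a sketch of the design rather than any shipping system.

```python
# Hypothetical registry of optional AI modules; names are illustrative only.
FEATURES = {
    "live_translate": {"default": False, "requires": {"microphone"}},
    "photo_cleanup":  {"default": False, "requires": {"photos"}},
    "smart_reply":    {"default": False, "requires": {"messages"}},
}

def enabled_features(user_optins: set, granted_data: set) -> list:
    """A module runs only if the user opted in AND granted every data
    category it requires; nothing is enabled by default."""
    return sorted(
        name for name, spec in FEATURES.items()
        if name in user_optins and spec["requires"] <= granted_data
    )

# Opt-in alone is not enough: smart_reply lacks the "messages" grant.
assert enabled_features({"smart_reply", "photo_cleanup"}, {"photos"}) == ["photo_cleanup"]
```

Keeping each feature's data requirements explicit in the registry also gives enterprise or privacy-focused buyers an obvious place to pin a restrictive configuration profile.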
What consumers should ask before enabling on-device AI
Empowerment is bi-directional. Consumers can take practical steps to protect themselves and to make informed choices.

- What data does this feature access? Ask for a clear list of categories (messages, photos, microphone, location).
- Where does the processing happen? Prefer on-device processing where possible.
- Is data retained, and for how long? Look for short retention windows and clear deletion controls.
- Is the feature free, or does it come with a subscription? Expect transparency on pricing.
- Can I roll back or turn this off easily? Easy toggles and defaults that preserve privacy are both signals of user-centered design.
How regulators and standards bodies should respond
Policymakers need to balance innovation with consumer protection. Practical steps include:

- Developing clear guidelines on data minimization and user consent specifically tailored to inference-driven systems.
- Requiring baseline transparency on how consumer-facing models operate, including basic test suites for bias and safety.
- Encouraging standards for on-device attestations that prove certain data never left the device.
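As a rough illustration of what an attestation record might convey, the toy below signs a claim such as “raw audio never left the device” and detects tampering. This is an assumption-laden sketch: real on-device attestation relies on hardware roots of trust and standardized formats, not the shared software key used here.

```python
import hashlib
import hmac
import json

# Toy stand-in for a device-held secret; real schemes anchor this in
# hardware (a secure enclave or TPM), never in application code.
DEVICE_KEY = b"hypothetical-device-secret"

def attest(claims: dict) -> dict:
    """Produce a signed record of a claim, e.g. 'no audio egress this session'."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify(record: dict) -> bool:
    """Recompute the tag over the claims and compare in constant time."""
    payload = json.dumps(record["claims"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = attest({"session": "abc123", "audio_egress": False})
assert verify(record)
record["claims"]["audio_egress"] = True   # tampering breaks verification
assert not verify(record)
```

A standardized version of such a record, verifiable by third parties, is the kind of artifact that could make "certain data never left the device" a checkable claim rather than a marketing line.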
Looking ahead: a multi-track adoption landscape
Expect adoption to be uneven and context-dependent. Several scenarios could coexist:

- High-acceptance niches: Younger demographics and power users who value novel features will adopt rapidly.
- Conservative mainstream: Many users will prefer incremental, opt-in AI features that have proven utility and privacy guarantees.
- Enterprise-driven pockets: Business customers may demand AI capabilities for productivity, pushing manufacturers to segment offerings.
Final analysis: what success looks like
Widespread, beneficial adoption of device-level AI will require more than technical excellence. Success depends on aligning three realities:

- Meaningful, everyday benefits that users can quickly perceive and measure.
- Trustworthy privacy and control mechanisms that reduce perceived and real risk.
- Fair, transparent cost models that don't force upgrades or recurring charges for basic utility.
The current consumer data — high awareness, significant skepticism, and clear demographic differences — should be read not as a verdict against AI, but as a practical roadmap. Build features that people actually need, prove their benefit, and offer them in a way that respects privacy and budgets. Do that, and the industry’s technical achievements can finally find the public appetite they currently lack.
Conclusion
The headline that one-third of consumers reject AI on their devices is a wake-up call, not a death knell for on-device intelligence. It shows where the debate must shift from technical possibility to human-centered value: usefulness that is obvious, privacy that is tangible, and pricing that feels fair. The companies that internalize these lessons will transform AI from a feature checklist into a trusted and genuinely helpful part of everyday computing.
Source: Tom's Hardware One-third of consumers reject AI on their devices, with most saying they simply don’t need it — latest report highlights privacy fears and potential costs among other real-world concerns