Microsoft’s AI team has published the Copilot Usage Report 2025 — a large-scale, first-of-its-kind look at how people actually use Copilot in the wild — and the headline is unmistakable: health queries top the list, and people are increasingly treating chatbots as companions for personal advice, not just search tools.
Background
Microsoft’s Copilot project has evolved from a productivity add-on into a cross-platform conversational assistant embedded across Bing, Microsoft 365, Edge, and standalone consumer apps. The company has steadily migrated Copilot from an enterprise-oriented tool into a mainstream consumer product, adding features such as richer memory, voice and visual modes, and deeper integrations with Office apps. That shifting posture makes a usage study like this unusually valuable: it’s an early, large-scale window into how millions of ordinary users deploy conversational AI in daily life.
The Copilot Usage Report 2025 claims to analyze a sample of 37.5 million anonymized conversations collected from January through September 2025. The analysis breaks sessions down by topic and intent, then maps how those pairings shift by device, hour of day, and time of year. The report is short and focused: it highlights patterns rather than publishing raw data, and it describes the team’s attempt to surface human-centered insights while protecting user privacy.
What the study says — headline findings
- Health is the single most common topic across mobile sessions. According to Microsoft’s report, users ask Copilot about health more often than any other category — outpacing technology/general search, society and culture, language learning, money, news, food and drink, art, entertainment, and science.
- Personal advice is rising sharply. People increasingly ask Copilot for relationship guidance, life decisions, and personalized help — signals that users treat chatbots as confidants or decision aids for everyday dilemmas.
- Time of day and calendar matter. The report maps predictable but revealing rhythms: religion and philosophy spike in the early morning hours, travel surges during afternoon commuting windows, and February shows a strong uptick in relationship and personal growth questions around Valentine’s Day.
- Device context changes intent. Desktop usage skews toward productivity and information density; mobile usage skews toward immediate, personal, and emotional interactions. Microsoft frames this as evidence that interface design should be context-aware: desktop agents should optimize workflows; mobile agents should prioritize empathy and brevity.
- Topic crossovers emerge. The study notes interesting overlaps — for example, programming-related chats peak on weekdays while gaming topics rise on weekends, and in August those two clusters begin to overlap in ways that suggest creative hybrid usage (coding mods, game scripting, and hobbyist projects).
- Privacy claims and methodology. Microsoft states that it did not analyze raw user messages; instead, the pipeline reportedly extracts and analyzes summaries of conversations to determine topic and intent, which the company positions as a stronger privacy-preserving approach than storing full text.
Methodology and what was (and was not) confirmed
Microsoft’s short report describes the dataset and the analysis approach in broad strokes. Key methodological points reported by Microsoft and corroborated in press coverage include:
- Sample size and period: 37.5 million conversations analyzed, sampled from January–September 2025.
- De-identification: Microsoft says it extracts conversation summaries for topic-intent labeling rather than retaining raw message text, a step it presents as deliberate privacy protection.
- Topic-intent labeling: Conversations were categorized into broad topical buckets (health, programming, travel, etc.) and paired with intent labels (search, advice, task execution, etc.); a rough sketch of this summarize-then-label flow follows this list.
- Focus on consumer usage: Microsoft positioned the work as looking at consumer interactions; the company framed it as excluding enterprise training data and private reproduction of enterprise chats.
Several media summaries reported that no commercial or educational Copilot conversations were included in the dataset, and other outlets relayed numerical splits (for example, claims about how many topic-intent combinations entered the "top ten" on desktop vs. mobile). Those specific exclusion and numeric-count claims were not present in explicit form in the short Microsoft write-up available to the public at the time of publication. Readers should treat such second-hand numeric details as plausible interpretations rather than fully verifiable facts unless Microsoft publishes the technical appendix or dataset metadata.
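Microsoft has not published its pipeline, but the approach it describes — summarize each conversation first, then classify only the summary into a topic and an intent — can be sketched roughly like this. Everything below (function names, category lists, the toy summarizer and classifier) is an illustrative assumption, not Microsoft’s implementation.

```python
# Hypothetical sketch of a summarize-then-label pipeline; the helpers and
# category lists are illustrative stand-ins, not Microsoft's actual code.

TOPICS = ["health", "programming", "travel", "entertainment", "money"]
INTENTS = ["search", "advice", "task_execution"]

def summarize(conversation: list[str]) -> str:
    """Reduce a conversation to a short abstract so raw messages never
    reach the analysis store (placeholder for a learned summarizer)."""
    return " ".join(conversation)[:280]

def classify(summary: str, labels: list[str]) -> str:
    """Placeholder for a zero-shot or fine-tuned classifier over summaries."""
    return labels[hash(summary) % len(labels)]

def label_conversation(conversation: list[str]) -> dict:
    summary = summarize(conversation)  # only the summary is used downstream
    return {"topic": classify(summary, TOPICS),
            "intent": classify(summary, INTENTS)}

# Aggregation then works over labels, never over raw message text.
session = ["I have a headache after a long run", "Should I see a doctor?"]
print(label_conversation(session))
```

The design point the report emphasizes is that only the summary and its labels persist; whether that is enough to protect users depends on retention and access controls the report does not detail.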
Deep dive: the most notable findings and why they matter
Health queries dominate — and why that’s important
Microsoft’s report places health at the top of the topic list — especially on mobile. That includes wellness tips, symptom questions, routines, and practical health management tasks.
Why this matters:
- AI assistants are uniquely positioned to be always-on, low-friction sources of basic health information and triage. Users can ask quick questions outside of clinic hours and get immediate guidance.
- The volume of health queries raises both opportunity and risk: Copilot can help users find reliable resources and medication info, but it can also generate inaccurate or overly general advice when clinical nuance is required.
- Designers and product teams must treat health as a high-stakes vertical: clarity about what Copilot can and cannot provide, clear signposting to vetted sources, and guardrails that escalate to qualified professionals are critical.
Advice and intimacy: chatbots as companions, not just tools
A rising share of sessions asked for personal advice—relationship counseling, career guidance, and life decisions. That tracks broader academic research showing people form attachments to conversational AIs and rely on them for emotional support.
Why this matters:
- People may disclose sensitive personal information to bots expecting guidance or empathy but will not receive the legal protections of a clinical or legal consultation.
- Product design must balance empathetic conversational style with explicit disclaimers, contextual warnings, and options to contact human professionals when appropriate.
- There’s a design tradeoff between engagement (a “friendly” persona increases use) and safety (overly anthropomorphic behavior can create dependency or encourage poor decisions).
Temporal patterns: the clock and calendar of Copilot use
The study shows predictable rhythms:
- Early-morning spikes in philosophical and religious queries.
- Afternoon commuting windows where travel and logistics questions rise.
- Seasonal spikes — notably, Valentine’s Day in February — for relationship and self-improvement questions.
- These rhythms suggest context-aware assistants can adapt tone and response style to the time of day (less dense, more supportive late at night; more concise and actionable during commutes).
- Marketers and product managers can use temporal signals to prioritize features and content curation (e.g., mental health resources during late-night hours).
Desktop vs mobile: two different user mental models
Microsoft’s analysis emphasizes that desktop Copilot sessions skew toward productivity and dense, multi-step tasks, while mobile sessions skew toward quick, personal, conversational uses. That suggests a strategic product implication: user experience should adapt to device.
Concrete product implications (a minimal sketch of this kind of adaptation follows the list):
- Desktop Copilot: optimize for multi-step workflows, document generation, code editing, and direct integrations with Office and developer tools.
- Mobile Copilot: optimize for brevity, empathy, conversational follow-up, and features like quick health triage or reminders.
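As a thought experiment, that adaptation can be expressed as a small policy that picks a response style from device type and local hour. The style fields, hour thresholds, and device labels below are assumptions for illustration; Microsoft has not described any such mechanism.

```python
# Hypothetical policy: choose a response style from device type and local hour.
# Style fields, hour thresholds, and device labels are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ResponseStyle:
    max_sentences: int    # rough length budget for the reply
    tone: str             # e.g. "precise", "supportive", "concise"
    show_citations: bool  # whether to surface sources inline

def pick_style(device: str, now: datetime) -> ResponseStyle:
    late_night = now.hour >= 22 or now.hour < 6
    commute = 16 <= now.hour <= 19
    if device == "desktop":
        # Dense, workflow-oriented answers with provenance surfaced.
        return ResponseStyle(max_sentences=12, tone="precise", show_citations=True)
    if late_night:
        # Shorter, more supportive replies for personal late-night queries.
        return ResponseStyle(max_sentences=4, tone="supportive", show_citations=True)
    if commute:
        # Quick, actionable answers for travel and logistics on the go.
        return ResponseStyle(max_sentences=3, tone="concise", show_citations=False)
    return ResponseStyle(max_sentences=6, tone="conversational", show_citations=True)

print(pick_style("mobile", datetime(2025, 2, 14, 23, 30)))
```

The point is not these specific rules but the pattern: device and time signals the report highlights can feed a lightweight policy layer rather than a one-size-fits-all response style.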
Programming and gaming overlap: a hobbyist frontier
The report’s observation that programming peaks on weekdays and gaming on weekends — with growing crossovers — signals more hybrid, hobbyist-oriented use cases: modding, game-development tutorials, and community-driven coding projects integrated into leisure time.
Why this matters:
- New product scenarios could target weekend hobbyists: templates, code snippets, and short tutorials oriented to game modding and content creation.
Privacy, ethics, and practical limits
Privacy approaches are promising but not exhaustive
Microsoft’s claim that it extracts summaries rather than storing full messages is a stronger privacy posture than analyzing raw text, but it is not a panacea. Summaries still encode user intent, content categories, and — in some cases — sensitive details, especially for health and relationship queries.
Key considerations:
- Summaries can still leak sensitive signals if labeling or storage is mishandled.
- The report does not publish a technical appendix showing retention policies, geographic sampling, or how they validated summary extraction accuracy, which are the details needed to independently assess privacy risk.
- Without full methodological transparency, independent researchers cannot audit potential biases (who’s represented in the 37.5 million conversations, which regions, which languages, or what demographic skews exist).
Safety and misuse risks are real
History shows conversational models can hallucinate, echo biases, or generate harmful content. When chatbots move into health and emotional support roles, those hazards multiply.
- Misleading medical advice, confirmation bias, and overconfidence in AI recommendations could cause harm.
- The social and psychological dynamics of relying on a machine for emotional support are incompletely understood; emerging research shows attachments and changes in help-seeking behaviors can occur.
Accountability and guardrails
Microsoft will need explicit guardrails (a rough sketch follows this list):
- Source attribution and grounding for health and medical claims.
- Obvious, user-facing disclaimers for high-stakes domains.
- Easy escalation paths to professional help, plus content moderation and safety review pipelines for sensitive categories.
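In practice, guardrails like these often reduce to a routing step that inspects the predicted topic (and the classifier’s confidence) before a reply is generated, attaching grounding requirements, disclaimers, or a referral prompt. The categories, threshold, and wording below are assumptions for illustration, not Microsoft’s published policy.

```python
# Hypothetical guardrail routing for high-stakes topics; the categories,
# confidence threshold, and referral wording are illustrative assumptions.
HIGH_STAKES = {"health", "mental_health", "legal", "finance"}

def apply_guardrails(topic: str, confidence: float) -> dict:
    policy = {
        "require_grounded_sources": topic in HIGH_STAKES,  # cite vetted sources
        "disclaimer": None,
        "escalation_prompt": None,
    }
    if topic in HIGH_STAKES:
        policy["disclaimer"] = "This is general information, not professional advice."
    if topic == "mental_health" or (topic in HIGH_STAKES and confidence < 0.5):
        # Sensitive or low-confidence cases surface a path to human help.
        policy["escalation_prompt"] = "Would you like resources for contacting a professional?"
    return policy

print(apply_guardrails("health", confidence=0.42))
```

A routing layer like this is cheap to audit and test independently of the underlying model, which is one reason safety reviews tend to favor it for sensitive categories.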
Business and product implications for Microsoft and competitors
- The report strengthens Microsoft’s argument that Copilot is moving beyond productivity into everyday life — a strategic positioning that supports bundling Copilot features into Microsoft 365 consumer plans and cross-selling into Windows and Edge.
- For competitors, these findings highlight the importance of contextual UX: companies that optimize tone and tools for device and time-of-day will likely gain engagement advantages.
- From a marketing standpoint, the study is useful collateral, but regulators and watchdogs may push back on overstated claims about productivity or ROI. Past industry reviews have already flagged ambiguous advertising claims related to Copilot’s benefits; objective, audit-ready data will reduce friction in these debates.
Limitations and caveats — what readers should be careful about
- Sampling bias: The report is based on a sampled set of conversations rather than a full census; it’s unclear how representative the sample is across geographies, languages, and user cohorts.
- Labeling subjectivity: Topic-intent pairs depend on taxonomy and classifier accuracy. The report doesn’t disclose inter-annotator agreement rates or classifier performance metrics, which complicates interpretation.
- Second-hand numeric claims: Some press stories and summaries add specific counts or exclusions (for example, exact numbers of topic-intent combinations that reached top lists on desktop vs mobile, or explicit statements that commercial and educational chats were excluded). Those details were not present in a verifiable way in Microsoft’s public write-up; treat them cautiously until Microsoft publishes full appendices or a technical paper.
- Privacy assurance vs. absolute privacy: Extracting summaries reduces exposure but does not eliminate all risks. The company’s privacy model still depends on secure handling and deletion policies that are not fully disclosed in the summary materials.
Practical advice for users, IT admins, and product teams
- For individual users:
- Treat Copilot as a helpful first stop for information and triage, not a substitute for professional medical, legal, or psychiatric advice.
- Be wary of sharing extremely sensitive personal information in free-form chats; use official channels for high-stakes consultations.
- Use built-in privacy controls and review stored memory or profile settings if you want to limit what Copilot remembers.
- For IT administrators:
- Expect different usage signals on desktop vs mobile; tailor Copilot rollouts, training, and governance controls accordingly.
- Monitor high-risk query categories (health, legal, HR) and consider policies to disable certain features in regulated contexts (classrooms, exams, patient care systems).
- For product teams and designers:
- Invest in contextual UX that adapts to device, time of day, and inferred user intent.
- Prioritize source grounding for medical and factual claims; surface citations and provenance for confident recommendations.
- Build escalation and referral flows for queries beyond the assistant’s safe scope.
Why this matters beyond Microsoft
This study — large and imperfect — is a milestone: it’s among the first major, public corporate efforts to quantify how people use general-purpose conversational AI at scale across everyday life. The findings underscore two broader trends in AI adoption:
- Conversational AI is moving quickly from a productivity overlay to a personal utility that participates in everyday decision-making and emotional life.
- Product choices that increase convenience and emotional resonance increase engagement — but they also raise responsibility for safety, privacy, and transparency.
Conclusion
The Copilot Usage Report 2025 delivers a rare look at how millions of people are integrating a single conversational assistant into the rhythms of daily life. Health and personal advice have emerged as leading use cases, while device and time-of-day patterns point to the need for context-aware design. Microsoft’s privacy-forward approach — analyzing summaries rather than full messages — is an important step, but the report’s limited methodological transparency leaves unanswered questions about sampling, labeling, and potential biases.
The core takeaway is straightforward: as chatbots graduate from novelty to trusted companions for many users, the responsibilities of companies building and deploying those systems increase sharply. Design decisions that amplify empathy and engagement must be balanced with rigorous safety, provenance, and privacy practices — otherwise the very benefits that make Copilot useful could create new and predictable harms.
Source: Thurrott.com, "Microsoft AI Releases Its First-Ever Copilot Usage Study"