Microsoft's OneDrive is quietly testing a People grouping feature that uses AI to detect and group faces in users' photos — and an unexpected detail in preview builds has thrust it into the privacy spotlight: the People toggle appears to be opt‑out and, in at least some preview experiences, can only be turned off three times per year.
Background
OneDrive's new People grouping is part of a broader push to make OneDrive a media‑first surface with deeper AI assistance: a unified gallery experience, semantic search, and person‑centric collections that mirror the behavior of modern photo services. Microsoft’s support documentation describes the feature as facial grouping technology that “uses AI to recognize faces in your photos and group them together,” and the company explicitly states that facial scans and biometric information collected for grouping will be stored for the account and removed when the feature is disabled.

Preview users first noticed the worrying UI text during mobile uploads: a privacy notice stated that OneDrive “uses AI to recognize faces in your photos to help you find photos of friends and family,” followed by a line: “You can only turn off this setting 3 times a year.” Independent reporting and community threads picked up the detail quickly, and multiple outlets subsequently ran stories documenting the claim and Microsoft’s limited public response.
What Microsoft publicly says (and what it doesn't)
What Microsoft documents
Microsoft’s OneDrive support page for Group photos by people spells out several key points that are material to users and IT administrators:
- Facial grouping is a planned OneDrive feature: it groups similar faces into a People section, lets users name people, and enables search by person.
- Data handling guarantees: Microsoft says it collects and stores facial scans and biometric information for the purpose of grouping, but that this data “is not used to train or improve the AI model overall” — it is used only to improve results for the specific account. The support page also states that facial grouping data will be permanently removed within 30 days after disabling the feature.
What remains unclear or unverified
- The “three times a year” toggle limit does not appear anywhere on Microsoft’s formal support page. It surfaced in preview UI text, was reproduced by multiple tech outlets and community threads, and Microsoft has offered neither a technical rationale for the cap nor any documentation of how the limit is counted or enforced. That discrepancy is the core of the controversy.
- Microsoft’s phrasing that facial scans “are not used to train or improve the AI model overall” is a policy statement; outsiders cannot see how Microsoft handles derivative data or telemetry signals internally. Until Microsoft publishes precise technical documentation, or an independent audit becomes available, the non‑use‑for‑training claim should be treated as a company declaration rather than an independently verified technical fact.
Why the three‑times limit matters
At first glance, a cap on how often a user may disable a feature sounds like a benign product decision. In practice, when the feature in question is facial grouping — data that many regulators treat as biometric information with special protections — the mechanics of opt‑out, and the limits on it, have outsized implications.
- Regulatory friction: European privacy law (GDPR) and several national rules treat biometric data used to identify individuals as sensitive, and typically demand clear, freely given, and informed consent when biometric identifiers are processed. A toggle that is opt‑out and then gated by an annual limit invites questions about whether consent is truly free, especially where the user cannot change their mind at will without administrative friction. The three‑times rule raises the specter of lock‑in for sensitive processing without the level of user control normally expected for biometric data; multiple outlets flagged regulatory concerns when the preview detail emerged.
- Implementation ambiguity: The preview text gives no start date or reference period for the “three times a year” measurement. Does the clock start when Microsoft first processes photos? When the account is enrolled? Is the period a calendar year, a rolling 12 months, or region‑dependent? The ambiguity matters: without exact definitions, policy‑minded users and IT admins cannot properly plan retention, compliance, or policy controls, and community feedback showed users unsure how the counter was being tracked in practice (a sketch after this list shows how much the choice of window changes the outcome).
- Usability and trust: Privacy features that are hard to change erode user trust. Many privacy advocates argue that opt‑in should be the default for any biometric processing; limiting opt‑out changes the calculus for users who wish to experiment or who later change their comfort level. Preview reports documented users encountering errors when toggling the setting, which deepened concerns that the three‑times rule might be enforced inconsistently or be part of server‑side gating.
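To make that ambiguity concrete, here is a minimal sketch contrasting two plausible readings of a three‑per‑year counter. Every element (the cap, the toggle history, and both counting rules) is an assumption for illustration; Microsoft has not documented any of this.

```python
from datetime import date, timedelta

# Hypothetical toggle-off history for one account; dates are invented.
toggle_offs = [date(2024, 11, 2), date(2025, 1, 15), date(2025, 3, 9)]
attempt = date(2025, 6, 1)
CAP = 3  # assumed cap, per the preview UI text

def allowed_calendar_year(history, when, cap=CAP):
    """Reading 1: the counter resets every January 1st."""
    used = sum(1 for d in history if d.year == when.year)
    return used < cap

def allowed_rolling_year(history, when, cap=CAP):
    """Reading 2: the counter looks back over the previous 365 days."""
    window_start = when - timedelta(days=365)
    used = sum(1 for d in history if window_start < d <= when)
    return used < cap

print(allowed_calendar_year(toggle_offs, attempt))  # True: only 2 toggles in 2025
print(allowed_rolling_year(toggle_offs, attempt))   # False: all 3 fall within 365 days
```

The same three toggles are permitted under a calendar‑year reading but blocked under a rolling‑window reading, which is exactly the kind of difference users and admins currently cannot plan around.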
How OneDrive says it handles the data (stated commitments, checked against reporting)
Microsoft’s support page contains several explicit commitments worth highlighting and checking against the reporting:
- Data use: Microsoft says facial scans and biometric information collected via OneDrive are used to create groupings and are visible only to the account owner; such groupings are not included when a user shares an album or photo. This claim was restated by multiple outlets as Microsoft’s official position.
- No‑training assurance: Microsoft’s support text declares facial scans aren’t used to train its central AI models. This is an important policy statement intended to reassure users concerned about their images being fed into large‑scale model training. Reporters have relayed the statement as Microsoft’s promise, but independent technical verification is limited without audit access to Microsoft’s internal data pipelines. Treat the claim as Microsoft’s policy commitment rather than independently validated technical fact.
- Deletion window: When a user disables the People grouping feature, Microsoft says all facial grouping data will be “permanently removed within 30 days.” That figure is explicit in Microsoft’s docs and has been repeated in tech coverage; it is therefore a measurable promise users can hold the company to (the sketch below turns it into a concrete check date).
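Since the 30‑day figure is the one concrete, testable number in Microsoft's documentation, it can be turned into a scheduled follow‑up check (a support ticket or data subject access request, for instance). A trivial sketch; the disable date is an example value:

```python
from datetime import date, timedelta

# Microsoft's support page commits to permanent removal of facial grouping
# data within 30 days of disabling the feature. Given the date the toggle
# was flipped (example value below), compute the latest date the promise
# allows, so a verification request can be scheduled after it.
disabled_on = date(2025, 7, 1)
deletion_deadline = disabled_on + timedelta(days=30)
print(f"Grouping data should be gone by {deletion_deadline}")  # 2025-07-31
```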
Privacy and legal landscape — why this is not trivial
Facial recognition and biometric data are regulatory hot spots. A short primer on the landscape:
- GDPR and sensitive data: In the EU, biometric data used for identification is a special category requiring a lawful basis, often meaning explicit consent. Any limits on revoking processing (or traps that make revocation harder) can be interpreted as weakening the “freely given” quality of consent.
- US state laws: Several US states (notably Illinois under BIPA) have strict rules on biometric data collection and retention. While applicability varies depending on whether OneDrive’s processing meets statutory definitions, companies have faced class action litigation over opaque facial recognition features in consumer products.
- Corporate and enterprise concerns: Managed accounts — i.e., Microsoft 365/Entra identities used by businesses — add complexity. Enterprises must be able to assert control over data processing for compliance. Microsoft’s public messaging that OneDrive “inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable” hints at differentiated behavior for business tenants, but preview users in managed environments reported inconsistent experiences. Enterprise admins will demand explicit MDM/GPO controls, audit logs, and policy gating before enabling such features at scale.
What the preview experience actually looked like (community and reporting)
Hands‑on previews and community posts show two common patterns:
- Some users encountered the People toggle and the three‑times message directly in the OneDrive mobile app Privacy & Permissions screen; in some cases the user could not switch the toggle off (the UI reverted with an error), suggesting server‑side gating or bugs in the flight.
- Multiple outlets reproduced the language and contacted Microsoft; Microsoft confirmed limited preview rollouts and referred to inheriting Microsoft 365 and SharePoint settings where applicable, but did not explain the rationale for the three‑times rule. Reporting outlets cross‑checked Microsoft’s support pages, which confirm the 30‑day deletion window and the statement that scans are not used for broad model training.
Practical implications and recommended actions
For consumer users
- Check your OneDrive settings: If the People/face grouping is visible, review Photos > People in your OneDrive app. If you are uncomfortable, disable the feature immediately. According to Microsoft, disabling results in deletion of grouping data within 30 days.
- Verify the toggle behavior: If you see the “three times” language, record screenshots and timing. The absence of a clear counting window (calendar vs. rolling year) means you should avoid toggling unnecessarily until Microsoft clarifies the policy. Several preview users reported toggle errors, so treat such encounters as potential flight bugs (a minimal logging sketch follows this list).
- Consider local alternatives: If privacy is essential, consider disabling OneDrive photo backup entirely for sensitive albums and use local device storage or other services with clearer controls until the policy is documented. Community discussion and previews show this feature is still in flux.
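For anyone following the second recommendation above, a dated record beats memory, since nobody outside Microsoft knows how the counter is measured. Here is a minimal sketch for keeping such a log next to your screenshots; the filename and field names are arbitrary choices, not anything OneDrive produces:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("onedrive_people_toggle_log.csv")  # arbitrary, user-chosen filename

def record_attempt(action: str, outcome: str, note: str = "") -> None:
    """Append a timestamped record of a People-toggle attempt.

    action:  e.g. "disable" or "enable"
    outcome: e.g. "succeeded", "reverted with error", "blocked by limit"
    """
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["utc_timestamp", "action", "outcome", "note"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         action, outcome, note])

record_attempt("disable", "reverted with error", "screenshot IMG_0042")
```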
For enterprise and IT administrators
- Audit account entitlements: Determine whether the People feature will be available to Entra/Microsoft 365 accounts in your tenant and whether admin‑level controls exist to block or allow facial grouping.
- Pilot before wide deployment: Run pilots with non‑critical accounts and evaluate telemetry, consent prompts, and the effect on managed devices.
- Demand transparency from Microsoft: Require clear documentation about:
- how the three‑times limit is measured and enforced (if it exists for managed accounts),
- whether biometric grouping is enabled for managed identities,
- audit/logging surfaces for consent changes and data deletion,
- data residency and on‑device vs cloud processing details.
For privacy teams and regulators
- Request precise technical documentation: Ask Microsoft to publish the counting window, the mechanism for deletion, and whether any telemetry or derived features are retained beyond the 30‑day deletion promise.
- Validate the “no‑training” claim: Seek assurances and technical evidence that facial scans used for grouping are isolated and not fed into any model training pipelines — ideally through independent audit or technical attestations.
Technical tradeoffs and likely engineering rationale (informed speculation)
Some industry commentary has suggested benign engineering reasons Microsoft might limit toggle frequency: repeated toggling could force repeated, expensive deletion and reprocessing of biometric indexes, which could be computationally costly at scale, particularly when complying with jurisdictional data deletion obligations. That said, operational cost is not a valid reason to restrict a user's ability to control sensitive processing, especially where the processing concerns biometric identifiers. Because Microsoft has not publicly explained the rationale, any suggested technical reasons should be treated as plausible hypotheses rather than confirmed facts.
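To show the scale argument these commentators are making, here is a rough, entirely assumed cost model: if disabling deletes the face index and re‑enabling rebuilds it, each toggle cycle re‑runs face detection and embedding over the whole photo library. None of the figures below come from Microsoft.

```python
# Back-of-envelope estimate of why repeated toggling could be operationally
# expensive. Every number here is an assumption for illustration only.
photos_per_account = 20_000      # a heavy photo-backup user
faces_per_photo = 1.5            # average detected faces per photo
embed_ms_per_face = 10           # GPU milliseconds per face embedding
accounts_toggling = 1_000_000    # accounts re-enabling after a deletion

face_embeddings = photos_per_account * faces_per_photo
gpu_seconds_per_account = face_embeddings * embed_ms_per_face / 1000
total_gpu_hours = gpu_seconds_per_account * accounts_toggling / 3600

print(f"{gpu_seconds_per_account:,.0f} GPU-seconds per account re-enable")
print(f"{total_gpu_hours:,.0f} GPU-hours if {accounts_toggling:,} accounts re-toggle")
```

Even if numbers like these motivated the cap, rate‑limiting consent is a product choice among alternatives (debouncing, delayed hard deletion, queued reprocessing), which is exactly why Microsoft's silence on the rationale matters.

Assessment: strengths, risks, and the path forward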
Notable strengths
- Feature parity: OneDrive adding People grouping brings parity with competing photo services. For users who want smart search and person‑centric albums, this is a functional win.
- Account isolation promise: Microsoft's explicit promise that grouping data is limited to the account holder and that it won't be used to train central models is a meaningful privacy commitment — if it is implemented as stated and audited.
Major risks
- Consent mechanics: Making a biometric processing feature opt‑out — and coupling that with a mysterious toggle limit — risks violating expectations of freely given consent in sensitive jurisdictions.
- Ambiguous controls: Lack of clarity about the three‑times rule, its measurement window, and the handling of managed accounts undermines governance for enterprises.
- Trust erosion: Users encountering an opt‑out biometric feature by surprise may lose confidence, especially if toggles fail or error out in previews. Community reproductions of these bugs raise red flags.
The path forward Microsoft should take
- Clarify the three‑times rule: Publish exact mechanics (start date, rolling vs calendar year, whether managed accounts are treated differently).
- Make it opt‑in in sensitive regions: Consider EU/EEA and jurisdictions with strong biometric rules as opt‑in by default, not just opt‑out.
- Publish an independent audit or technical whitepaper: Provide verifiable details about how facial scans are stored, isolated, and excluded from model training pipelines.
- Expose admin controls: For enterprise accounts, provide clear MDM/GPO settings and audit logs for consent and deletions.
Conclusion
OneDrive’s People grouping is a credible enhancement to photo discovery, and Microsoft’s documentation addresses several valid privacy controls, notably a 30‑day deletion promise and an explicit claim about not using face scans to train models.

However, the surprise preview detail that the People toggle can only be disabled “three times a year” — a message reproduced by multiple outlets and community threads — raises legitimate and unresolved questions about consent mechanics, enforcement windows, and transparency. Until Microsoft publishes a clear technical rationale and precise policy text, the three‑times limitation should be treated as an unresolved, policy‑critical ambiguity.
For privacy teams, IT administrators, and users, the prudent posture is to treat the feature as a preview experiment: evaluate it in controlled pilots, insist on clear documentation from Microsoft, and, where regulatory or trust risks are material, opt out of biometric processing or prefer local alternatives.
Source: theregister.com, "Microsoft previews People grouping in OneDrive photos"