Outsourced Age Checks and the UK Online Safety Act: Key Lessons

Discord’s recent data-safety catastrophe — in which government IDs and age-verification images supplied by users were exposed after attackers compromised a third‑party support system — has ripped the thin veil off a problem every platform adopting the UK’s Online Safety Act must now confront: outsourcing age checks creates concentrated, high‑value targets that are intrinsically hard to secure. Microsoft’s Xbox, which has chosen to satisfy UK age‑assurance requirements by integrating a third‑party provider, faces the same architectural trade‑offs Discord discovered the hard way. This feature drills into what happened, explains how the relevant technologies work, evaluates Microsoft and Yoti’s public assurances, and lays out practical engineering, legal, and policy measures companies should adopt to reduce the odds of a repeat.

Background: what broke, why it matters

In late September 2025 a threat actor gained unauthorized access to a third‑party customer support environment used by a major platform. That access persisted for a period measured in days and resulted in the exfiltration of support tickets, internal materials, and — critically — attachments submitted by users as part of age‑verification and appeals processes. Among those attachments were photos of government‑issued identity documents and selfies that users had uploaded to prove their age during appeals. The platform confirmed that a limited but significant number of users’ ID images were affected and that the incident was tied to the external support service rather than to the platform’s main production systems.
Why this is a systemic problem rather than an isolated failure:
  • Age‑verification by design concentrates sensitive personal data — names, dates of birth, ID scans, biometric selfies — into a small number of workflows.
  • The compliance imperative created by the UK’s Online Safety Act has pushed many services to adopt specialist identity vendors or outsourced support firms rather than build their own systems, increasing the attack surface through third‑party relationships.
  • Support workflows and ticketing systems are often overlooked when hardening production infrastructure — yet they routinely hold forensic copies of user‑submitted evidence for appeals, audits, or fraud investigations.
The result is a classic supply‑chain concentration: an attacker who compromises one vendor can harvest data for millions of users across multiple customer platforms. The fallout is both privacy harm to individuals (IDs are sensitive, long‑lived identifiers) and reputational, regulatory, and legal risk for the companies that depend on those vendors.

Overview: the legal trigger — the Online Safety Act and technical responses

The UK’s Online Safety Act created an explicit obligation for services that publish or host certain adult or harmful content to deploy “highly effective age assurance” methods. Ofcom’s guidance operationalizes that requirement and lists a range of techniques considered capable of meeting the standard: photo‑ID matching, facial age estimation, carrier checks, open banking, and other identity‑assurance mechanisms.
Key points of the regulatory landscape that shaped how platforms responded:
  • Different categories of service face different deadlines and duties; platforms that publish pornographic content needed to implement highly effective age assurance processes by mid‑2025.
  • Ofcom expects providers to balance effectiveness against privacy and inclusivity, but the default enforcement posture pressures firms toward methods that produce strong attestation signals — and those methods often rely on document images or biometric data.
  • Non‑compliance carries steep penalties, which creates commercial incentives to outsource the heavy lifting to specialist identity providers rather than to attempt lengthy in‑house engineering projects.
Because platforms wanted reliable, auditable, and defensible implementations on a tight timetable, many selected vendors with public sector or high‑assurance contracts. That brings us to the vendor at the center of the Xbox conversation: Yoti.

How Yoti and similar solutions work — the technical model

Third‑party identity services generally offer two families of capability for age assurance:
  • Photo‑ID matching: the user uploads a government‑issued document (passport, driver's license) and a selfie. The vendor matches the document image to the selfie, extracts the date of birth, and returns an attestation that the user is above or below the required threshold.
  • Facial age estimation: the vendor analyses a live selfie with a model trained to estimate age from facial features and returns either a binary over/under decision or a predicted age with an uncertainty range. This avoids requiring a document upload in many cases.
Typical implementation flow:
  • User begins verification flow on the platform (Xbox, social app, porn site).
  • The platform redirects the user to the vendor’s secure endpoint (or opens an embedded widget) where the user provides the requested input: selfie, document photo, or carrier/payment check.
  • The vendor processes the input, applies anti‑spoofing / liveness checks, and makes a decision against the configured age threshold.
  • The vendor returns a simple attestation (usually a yes/no token or a short JSON assertion) to the platform; the platform stores that token and proceeds to grant or restrict features accordingly, as sketched below.
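To make that hand‑off concrete, here is a minimal sketch of the platform side of the final step, assuming a hypothetical vendor that signs a short JSON assertion with a shared HMAC secret. The field names and signing scheme are illustrative only; real providers typically expose this through a JWT, an SDK callback, or a mutually authenticated webhook.

```python
# Minimal sketch: consuming a vendor's age-assurance callback. The shared-secret
# HMAC scheme and field names are illustrative assumptions, not any vendor's API.
import hashlib
import hmac
import json
import time

VENDOR_SHARED_SECRET = b"example-secret"  # placeholder, not a real key


def verify_attestation(payload: bytes, signature_hex: str) -> dict | None:
    """Return the parsed assertion if the signature checks out, else None."""
    expected = hmac.new(VENDOR_SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return None
    return json.loads(payload)


def record_result(assertion: dict) -> dict:
    """Persist only the decision and minimal troubleshooting metadata --
    never the underlying document or selfie."""
    return {
        "transaction_id": assertion["transaction_id"],  # non-identifying reference
        "method": assertion["method"],                   # e.g. "facial_age_estimation"
        "over_threshold": bool(assertion["over_threshold"]),
        "checked_at": int(time.time()),
    }


if __name__ == "__main__":
    body = json.dumps({"transaction_id": "txn-123",
                       "method": "facial_age_estimation",
                       "over_threshold": True}).encode()
    sig = hmac.new(VENDOR_SHARED_SECRET, body, hashlib.sha256).hexdigest()
    assertion = verify_attestation(body, sig)
    print(record_result(assertion) if assertion else "rejected")
```

The important property is in record_result: nothing derived from the document or selfie is persisted on the platform side, only the decision and a transaction reference that lets support staff correlate a later complaint with a specific check.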
Vendors market several privacy and safety claims intended to reduce risk:
  • Data minimization: only a yes/no attestation is retained by the platform.
  • Short‑lived processing: images are processed and deleted immediately after decision.
  • Independent testing and accreditation (some vendors publicize evaluations from neutral test labs to support claims of accuracy and fairness).
  • Anti‑spoofing to mitigate replay/deepfake attacks.
These features address a subset of risks, but they are not a panacea.

Microsoft, Xbox, and Yoti — the stated approach

Xbox’s implementation for UK accounts uses a mix of verification options and relies on third‑party attestation for high‑assurance checks. Microsoft has stated publicly that:
  • Xbox will offer multiple verification methods (live photo/selfie age estimation, photo‑ID matching, carrier check, and credit‑card checks).
  • The platform is partnering with a third‑party identity provider that returns a binary attestation to Microsoft: the vendor tells Xbox whether the user meets the age threshold and does not provide the underlying ID images.
  • The vendor — widely used by governments and other platforms — states that images used for age estimation are deleted once processing completes, and that only the attestation token is returned.
From a compliance perspective, this is the expected blueprint: delegate the sensitive processing, store only an attestation, and limit persistent data.

Discord’s failure: the attack vector and the lessons learned

The recent incident that exposed ID photos and selfies did not originate in the identity vendor’s verification engine. Instead it was a downstream failure of the support and appeals process: users who disputed an automated age decision submitted documents to the platform’s support channel for manual review; those support attachments were stored in a ticketing system managed by an outsourced vendor and — crucially — remained accessible to attackers who breached that support environment.
This distinction is important because it refutes a common assumption: even if the verification system itself deletes images immediately, other parts of the workflow can retain them. The lifecycle of an ID image rarely stops at the vendor’s API:
  • Users appeal or request manual review, which often sends images into support ticket systems that persist attachments for human review.
  • Support teams and BPO (Business Process Outsourcing) partners may download, copy, or locally store attachments for training or triage.
  • Ticketing systems are popular targets because they aggregate sensitive artifacts from many customers in one place.
The practical takeaway: locking down the verification engine is necessary but not sufficient. Any interaction where a user’s ID image can be copied, transferred, or referenced — especially human support — is a high‑risk link in the chain.

Evaluating the vendor assurances: strengths and limits

Vendors like Yoti have a series of demonstrable strengths that make them attractive partners for platforms:
  • Operational scale and governance: they often have long experience processing identity documents and operate within strong compliance frameworks demanded by government contracts.
  • Technical controls: modern identity APIs use TLS 1.3, HSTS, and well‑engineered key management. Anti‑spoofing and liveness detection are standard.
  • Independent benchmarking: many vendors publish white papers and participate in independent test programs to validate accuracy and bias profiles of their models.
  • Data‑minimization features: returning a simple attestation avoids persisting raw identifiers on the platform side and narrows the attack surface.
However, these strengths do not erase real limitations and risks:
  • “Deletion” is a process, not an atomic property. Deleted files can be retained in backups, logs, or in copies cached by human operators.
  • The attacker model changes: a vendor’s API might be secure while adjacent systems — the platform’s support stack, the BPO contractor’s endpoints, or cloud backups — are not.
  • Accuracy thresholds are probabilistic. Age‑estimation models have measured margins of error that increase with age ranges, lighting, cosmetics, and other factors. They can be very accurate on group statistics but will still misclassify individuals.
  • Biometric data is immutable. A leaked selfie is more harmful than a leaked password because it is difficult or impossible to “change” later.
  • Accessibility and equity problems: not everyone has a passport or driver’s license, and some populations (including marginalized or gender‑diverse users) face real harms when asked to produce government ID or biometric images.
In short, vendor assurances are a good starting point — but they must be embedded inside an end‑to‑end process that anticipates human workflows, contractor relationships, logging practices, and legal obligations.
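Verifying deletion is part of that end‑to‑end discipline. One practical control a platform can run against its own estate, and contractually require from vendors, is a recurring retention audit that scans storage or backup manifests for verification artifacts older than the stated deletion window. Below is a minimal sketch, assuming a simple manifest of object keys and upload timestamps; the manifest format, filename markers, and 24‑hour window are illustrative assumptions.

```python
# Minimal sketch: flag verification-related objects that outlive the stated
# deletion window. Assumes a manifest of {"key", "uploaded_at"} records, e.g.
# exported from object storage or a backup index (illustrative format).
from datetime import datetime, timedelta, timezone

DELETION_WINDOW = timedelta(hours=24)          # assumed policy, not a real SLA
SENSITIVE_MARKERS = ("id-scan", "selfie", "age-appeal")


def overdue_objects(manifest: list[dict], now: datetime | None = None) -> list[str]:
    """Return keys of sensitive-looking objects older than the deletion window."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for obj in manifest:
        key = obj["key"].lower()
        if not any(marker in key for marker in SENSITIVE_MARKERS):
            continue
        age = now - datetime.fromisoformat(obj["uploaded_at"])
        if age > DELETION_WINDOW:
            flagged.append(obj["key"])
    return flagged


if __name__ == "__main__":
    sample = [
        {"key": "tickets/1042/id-scan-front.jpg", "uploaded_at": "2025-09-20T10:00:00+00:00"},
        {"key": "tickets/1042/notes.txt",         "uploaded_at": "2025-09-20T10:00:00+00:00"},
    ]
    print(overdue_objects(sample))  # -> ['tickets/1042/id-scan-front.jpg']
```

Anything this turns up is either a policy violation or an undocumented copy, and both are worth knowing about before an attacker finds them.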

The legal and regulatory angle: accountability doesn’t vanish when you outsource

Under data‑protection law such as the UK GDPR and Data Protection Act 2018 (and the EU GDPR for European users), the organization that decides why and how personal data is processed (the controller) retains primary responsibility for compliance, even when it uses processors.
That has three practical implications for companies like Microsoft:
  • Contracts matter. Controller/processor contracts must contain clear, enforceable data‑handling obligations: limits on sub‑processing, mandatory incident notification timelines, audit rights, data‑locality requirements, and secure deletion clauses.
  • Due diligence is mandatory. Controllers must perform and document rigorous vendor risk assessments, security audits, and Data Protection Impact Assessments (DPIAs) for high‑risk processing such as age verification.
  • Supervisory bodies will hold controllers accountable. When a third‑party breach leads to user harm, regulators can and do investigate the controller’s choice of suppliers, contractual safeguards, and oversight.
In other words, outsourcing reduces engineering overhead but not regulatory exposure.

Practical engineering and operational mitigations Xbox (and others) should demand

Some mitigations are immediately actionable and should be non‑negotiable when integrating age‑assurance vendors into large consumer platforms.
  • Enforce data minimization by design
  • Store only the attestation token and the minimal metadata required for troubleshooting (timestamp, method used, and non‑identifying transaction id).
  • Redact or block attachments to support tickets: when a user appeals a decision, create a secure ephemeral workflow that re‑verifies without relying on stored images.
  • Harden the support lifecycle
  • Prevent ID images and selfies from being forwarded into generic ticketing systems at all. Instead, use time‑limited, access‑restricted consoles with just‑in‑time viewing and no persistent storage (a minimal sketch appears after this list).
  • Segregate access using Privileged Access Management (PAM) controls and require hardware‑backed MFA for human reviewers.
  • Require supplier security hygiene
  • Insist on strong contractual obligations: no sub‑processing without consent, mandatory 24–48 hour breach notification, right‑to‑audit, and escrowed source code or forensic logs where appropriate.
  • Demand independent SOC2 / ISO27001 / penetration testing evidence and periodic re‑assessment.
  • Deploy cryptographic protections
  • Use end‑to‑end encrypted uploads to the identity vendor’s API with keys controlled by the vendor, and ensure no plaintext images transit or persist in the platform’s support stack.
  • Where possible, use cryptographic attestation or zero‑knowledge proofs that prove age bounds without transferring raw age evidence (emerging tech but worth piloting).
  • Prefer on‑device or local verification for age estimation
  • Where facial‑age estimation is acceptable, prefer models that run locally on the device and return a local pass/fail attestation — this removes a server copy from the data path.
  • If server processing is unavoidable, ensure strict ephemeral processing and confirm that vendor backups do not retain raw images.
  • Monitor for supply‑chain signals
  • Treat BPO contractors as high‑risk islands and adapt SIEM, anomaly detection, and privileged session recording to monitor support agents’ activity for signs of exfiltration (a minimal example follows below).
  • Prepare a transparent breach playbook
  • Have notification, remediation, and identity‑protection steps pre‑planned and rehearsed. Offer affected users concrete support (fraud monitoring, identity restoration guidance), and provide regulatory authorities with full cooperation.
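The just‑in‑time viewing control above can be prototyped with nothing more exotic than an expiring, HMAC‑signed link that an isolated review console validates before streaming an image, so the raw bytes never land in the generic ticketing system. The sketch below is illustrative only; the console endpoint, secret handling, and five‑minute window are assumptions rather than any platform’s actual design.

```python
# Minimal sketch: mint and validate short-lived, HMAC-signed viewing links for
# an isolated review console. REVIEW_CONSOLE_URL, the secret, and the TTL are
# illustrative assumptions.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SIGNING_KEY = b"rotate-me-regularly"            # placeholder secret
REVIEW_CONSOLE_URL = "https://review.internal.example/view"
LINK_TTL_SECONDS = 300                          # five-minute viewing window


def _sign(evidence_id: str, reviewer: str, expires: int) -> str:
    msg = f"{evidence_id}|{reviewer}|{expires}".encode()
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()


def mint_viewing_link(evidence_id: str, reviewer: str) -> str:
    """Create a link tied to a named reviewer, valid for LINK_TTL_SECONDS."""
    expires = int(time.time()) + LINK_TTL_SECONDS
    query = urlencode({"evidence": evidence_id, "reviewer": reviewer,
                       "expires": expires,
                       "sig": _sign(evidence_id, reviewer, expires)})
    return f"{REVIEW_CONSOLE_URL}?{query}"


def validate_request(evidence_id: str, reviewer: str, expires: int, sig: str) -> bool:
    """Console-side check: the link must be unexpired and correctly signed."""
    if time.time() > expires:
        return False
    return hmac.compare_digest(sig, _sign(evidence_id, reviewer, expires))


if __name__ == "__main__":
    print(mint_viewing_link("evid-8891", "agent.042"))
```

In a real deployment the console would fetch the image from the vendor or a quarantined store, record the session for audit, and refuse to write the bytes anywhere persistent.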
These are not theoretical controls; they are the minimum measures large platforms should document and enforce before rolling out any process that touches biometric or government ID data.
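The supply‑chain monitoring item is similarly tractable as a first pass: even a crude rule that counts attachment downloads per support agent in a rolling hour will surface the bulk‑exfiltration pattern seen in ticketing‑system breaches. A sketch follows, assuming access logs that record an agent identifier and a timestamp per download; the log shape and threshold are illustrative.

```python
# Minimal sketch: flag support agents whose attachment downloads in any rolling
# hour exceed a threshold. Log format and threshold are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

DOWNLOAD_THRESHOLD = 25                 # assumed baseline for one agent per hour
WINDOW = timedelta(hours=1)


def flag_bulk_downloaders(events: list[dict]) -> set[str]:
    """events: [{"agent": str, "at": ISO timestamp}] for attachment downloads."""
    per_agent = defaultdict(list)
    for e in events:
        per_agent[e["agent"]].append(datetime.fromisoformat(e["at"]))
    flagged = set()
    for agent, times in per_agent.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            while t - times[start] > WINDOW:     # slide the window forward
                start += 1
            if end - start + 1 > DOWNLOAD_THRESHOLD:
                flagged.add(agent)
                break
    return flagged


if __name__ == "__main__":
    demo = [{"agent": "bpo.007", "at": f"2025-09-20T10:{m:02d}:00"} for m in range(30)]
    print(flag_bulk_downloaders(demo))  # -> {'bpo.007'}
```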

Usability and equity trade‑offs the industry has to face

Security and privacy aren’t the only costs of these verification regimes. There are real, measurable harms tied to how the technology is deployed:
  • Exclusion: many people — immigrants, young adults, people in precarious circumstances — don’t have readily available government IDs or credit cards. Strict ID‑only approaches will lock legitimate users out.
  • Privacy erosion: requiring biometrics or government IDs for content access normalizes surveillance techniques that privacy advocates argue will outlast the immediate compliance need.
  • Harm to vulnerable populations: asking people to upload IDs or selfies can present safety and dignity risks for trans or closeted users in environments where revealing documents would lead to real‑world harm.
Platforms must offer diverse, low‑friction verification options (carrier checks, open‑banking, trusted payment instruments, or community‑based attestation) and retain robust parental controls so that children are protected without forcing adults through intrusive identity pipelines.

Hard truth: there is no perfect technical fix — but there are better architectures

The Discord episode demonstrates an unavoidable reality: any system that aggregates highly sensitive personal evidence — whether identity docs, biometric selfies, or payment traces — will always be an attractive target. The question isn’t whether a breach can happen; it’s how to make it less catastrophic when it does.
Better architectures combine these elements:
  • Reduce centralization: keep attestations lightweight and avoid holding raw evidence.
  • Reduce human handling: prefer automated, auditable decisions with minimal manual review; when review is necessary, confine it to isolated, ephemeral consoles.
  • Strengthen supplier governance: control the contractual and operational link between platform and vendor, and treat support vendors as core security partners rather than as expendable contractors.
  • Increase transparency and auditability: publish external audits, allow independent verification of deletion claims, and provide users with clear, machine‑readable statements about what data is retained and for how long.
Adopting these changes costs time and money; the alternative is concentrated regulatory and reputational risk that escalates the moment media and users discover exposed IDs.

What platforms should tell users — realistic transparency

Platforms must move beyond bland “we don’t store images” assurances and give people concrete, auditable answers that can be examined by regulators and civil‑society organizations:
  • Clearly publish the full data lifecycle for verification flows: what is uploaded, who processes it, what is returned, and what is retained (and for how long).
  • State which systems have access to images (verification engine, support ticketing) and demonstrate how support attachments are blocked or redacted by default.
  • Offer user controls and alternatives: a list of verification options, cost (if any), and an offline or in‑person redress path for users who cannot provide standard documents.
  • Provide independent attestations and allow audits by recognized third parties attesting to deletion claims.
Plain language plus independently verifiable evidence is the only way to rebuild trust after a concentrated supply‑chain breach.
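As one illustration of what a machine‑readable statement could look like, the structure below sketches a published manifest that auditors, regulators, and users’ tools could parse. It is not an existing standard, and every value is a placeholder.

```python
# Illustrative sketch (not an existing standard): a machine-readable
# data-lifecycle statement a platform could publish alongside its
# human-readable verification policy. All values are placeholders.
VERIFICATION_DATA_LIFECYCLE = {
    "methods": [
        {
            "name": "facial_age_estimation",
            "processor": "third-party identity vendor",
            "inputs": ["live selfie"],
            "returned_to_platform": ["over_threshold", "transaction_id"],
            "raw_image_retention": "deleted by vendor after decision (attested)",
            "platform_retention": {"attestation_token": "example: 24 months"},
        },
        {
            "name": "photo_id_matching",
            "processor": "third-party identity vendor",
            "inputs": ["government ID photo", "live selfie"],
            "returned_to_platform": ["over_threshold", "transaction_id"],
            "raw_image_retention": "deleted by vendor after decision (attested)",
            "platform_retention": {"attestation_token": "example: 24 months"},
        },
    ],
    "support_appeals": {
        "attachments_allowed_in_generic_ticketing": False,
        "manual_review": "time-limited isolated console, no persistent copies",
    },
    "independent_audit": {"last_completed": "example: 2025-06-30", "report_url": None},
}
```

Published at a stable URL and kept in sync with the contracts behind it, a document like this gives auditors something concrete to test deletion and retention claims against.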

Conclusion — the trade‑offs moving forward

The UK’s Online Safety Act has created a compliance imperative that pushes platforms toward powerful, specialist identity tools. Those tools can — and in many cases do — reduce underage access effectively. But they also concentrate risk, and the weakest link is frequently not the forensic strength of the verification model itself but the human processes and external systems that touch user evidence.
Microsoft’s approach with Xbox — integrating an identity partner and storing only an attestation — is aligned with widely recommended practice. Yet the Discord incident shows why platforms must treat all participants in the verification chain, including support vendors and contractors, as integral parts of the attack surface. That requires stronger contracts, stricter technical segregation, better logging and monitoring, and alternative verification options to preserve access for users who legitimately cannot or should not submit government IDs.
The broader lesson for product and security teams is straightforward: assume that you cannot make information disappear once it has been uploaded to a human‑accessible system. Design verification flows that avoid creating persistent copies in the first place, invest in hardened support processes, and demand continuous, verifiable evidence of deletion and security from every partner. Doing so won’t eliminate risk, but it will make the next breach far less damaging — for users and for the platforms that increasingly shoulder responsibility for their data.

Source: Windows Central, "Will Xbox's age verification system avoid Discord's security pitfalls? We investigate"