EU Probes Snapchat, YouTube, Apple, and Google Play Under DSA for Child Safety

The European Commission has opened fresh lines of inquiry under the Digital Services Act, formally asking major platform operators — Snapchat, YouTube (Google/Alphabet), Apple (App Store) and Google Play — to explain how they prevent children from being exposed to illegal or harmful content, how age limits and ratings are enforced, and what technical safeguards are in place to stop the sale of drugs, vaping products and tools for non-consensual sexual imagery to minors. This move comes as a growing number of EU governments push Brussels to consider a single “digital majority” age or stronger age-verification measures for social media access, and as national proposals — most notably Denmark’s plan to restrict access for under‑15s — thrust child safety back to the top of the regulatory agenda.

Background / Overview

Why the EU acted now

The Digital Services Act (DSA) created a new enforcement architecture that gives the European Commission direct supervisory powers over the very largest online platforms and marketplaces. Under the DSA, the Commission can issue information requests, order interim measures, and eventually impose fines or other remedies where it finds non-compliance. The maximum administrative sanction for DSA breaches can reach up to 6% of a company’s worldwide annual turnover, and the Commission can apply additional periodic penalties for delay or refusal to cooperate. That enforcement backdrop is the reason Brussels now has a powerful toolkit to press platforms for specific technical and policy answers on child safety.
What Brussels asked for in this recent round of investigative actions covers three broad areas:
  • How platforms prevent and detect under‑age access (e.g., accounts created by children under 13).
  • How app stores and marketplaces prevent minors from downloading harmful or illegal apps (gambling, sexually explicit tools, “nudify” apps that generate sexualized images without consent).
  • How recommender systems and algorithmic feeds avoid amplifying harmful content to minors, and what parental controls or age‑appropriate defaults are in place.
The Commission describes these requests as investigative actions: a step that does not, on its own, establish that laws were broken, but that can lead to formal proceedings and remedies if the responses are inadequate. The distance between an information request and a formal sanction is short in the DSA framework: failure to reply, misleading answers, or evidence of systemic non‑compliance can escalate quickly.

What the Commission asked the platforms to explain

Snapchat: under‑13 access and in‑app commerce

Brussels has asked Snapchat to detail the technical and organisational measures it uses to prevent children under 13 from creating accounts, and to explain how the service curbs in‑app commerce that could expose minors to illegal goods — specifically drugs and vaping products that have appeared in reporting and political briefings. Snapchat told press outlets it would cooperate and pointed to privacy and safety features it already maintains; Brussels’ request seeks the operational documentation and metrics that back those claims.
Why this matters: ephemeral messaging apps and closed‑group commerce features can be hard to moderate. If age gates are weak (easy to circumvent via fake birthdates) and marketplace features are poorly monitored, regulators worry about both illegal sales and exposure to addictive substances for young users.

Apple App Store & Google Play: store vetting, age ratings, “nudify” tools

The Commission asked Apple and Google to explain how their mobile app ecosystems prevent children from downloading apps that are illegal or harmful — that includes gambling, pornography, and apps that can be used to create non‑consensual sexualized content (the press has used the shorthand “nudify apps”). The questions probe everything from how age ratings are applied and enforced to marketplace review, keyword policing, and takedown processes. Regulators want to see policies, enforcement metrics, and escalation timelines.
Why this matters: app marketplaces are the primary distribution channels for smartphone software. If harmful tools make it through store review — or remain live because enforcement relies on user complaints — the exposure vector to minors is real and measurable.

YouTube / Google: recommender systems and minors

YouTube has been asked to provide information about its recommender systems “following reporting of harmful content being disseminated to minors.” The Commission wants transparency on how algorithmic ranking treats content that is age‑sensitive and what defaults are applied for young users and parental controls. Google’s public responses have emphasised existing family safety features and continued investment in protections; Brussels now seeks the technical evidence and policy playbooks behind those claims.
Why this matters: recommender systems can rapidly amplify fringe or harmful content. Regulators are focusing on the systemic risk that algorithms present to children — not only individual pieces of content, but the way feeds learn and escalate engagement patterns.

Where member states stand: the push for a “digital majority”

A distinct but overlapping debate concerns whether the EU should set a common “digital majority” age — a single threshold below which social media access would be restricted across the bloc. Several member states (France, Spain, Greece and others) have been vocal in favour of stricter rules; Brussels has proposed convening an expert panel to evaluate options and solutions. Some reporting indicates a wide consensus among member states: press coverage cites that 25 of the EU’s 27 countries, plus Norway and Iceland, signed a declaration supporting von der Leyen’s plan to assess a potential bloc‑wide digital majority age, with Belgium and Estonia reportedly not signing. Those numbers come from aggregated news reports and official statements circulating in Brussels. The exact practical implications — whether an EU‑wide legal minimum would follow, or whether the Commission would stop short of imposing a single age and instead provide common tools — remain to be defined.
National policy is moving faster in some capitals. Denmark’s government has announced plans to restrict social media access for under‑15s, with parental opt‑ins proposed as a partial carve‑out; France has pursued parental consent rules and other member states have signalled support for robust age verification. Those national initiatives are shaping the political momentum behind pan‑EU proposals.
Caveat: the tally of “25 of 27” signatories in press reporting is consistent across several outlets, but readers should treat such counts as political snapshots that can change rapidly as delegations negotiate text and carve‑outs. Some member states publicly emphasise education and digital literacy rather than hard access bans.

Technical realities and enforcement challenges

Age verification is technically hard and privacy‑sensitive

Effective age verification at internet scale requires reliable attestation of a user’s age without creating privacy harms. There are several technical approaches:
  • Lightweight checks (self‑declared DOB) — inexpensive but trivially easy to falsify.
  • Document checks or identity‑provider attestation — stronger, but raises privacy and data‑protection issues and requires secure storage/processing.
  • Device‑ or SIM‑based signals and parental attestations — intermediate, with trade‑offs in inclusivity.
The EU itself has been piloting prototype solutions and exploring ties with the emerging EU digital identity framework, but any large‑scale age‑verification roll‑out must balance child protection against the risk of centralised identity collection or data leakage. Independent reporting and official EU prototypes show the Commission exploring privacy‑preserving verification as an interim route, but a full operational system is not yet universally deployed.
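To make the trade‑off concrete, here is a minimal Python sketch of the attestation pattern described above: a trusted verifier signs an age‑band claim, and the platform checks only that claim, never a birth date or identity document. The names (AttestationClaim, verify_attestation, ISSUER_SECRET) are illustrative, and the HMAC shared secret stands in for real public‑key infrastructure; this is not the Commission's prototype or any platform's implementation.

```python
# Hypothetical sketch of privacy-preserving age attestation: the platform
# verifies a signed age-band claim instead of collecting a date of birth.
import hmac, hashlib, json
from dataclasses import dataclass

ISSUER_SECRET = b"shared-secret-with-trusted-age-verifier"  # stand-in for real PKI keys

@dataclass
class AttestationClaim:
    user_pseudonym: str   # opaque identifier, not a real-world identity
    over_13: bool
    over_15: bool

def sign_claim(claim: AttestationClaim) -> str:
    """What a trusted age verifier would return to the user's device."""
    payload = json.dumps(claim.__dict__, sort_keys=True).encode()
    return hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()

def verify_attestation(claim: AttestationClaim, signature: str) -> bool:
    """Platform-side check: trust the age band only if the signature verifies."""
    return hmac.compare_digest(sign_claim(claim), signature)

# The platform learns only the age band, never a birth date or document.
claim = AttestationClaim(user_pseudonym="anon-7f3a", over_13=True, over_15=False)
token = sign_claim(claim)          # in practice produced by the external verifier
if verify_attestation(claim, token) and not claim.over_15:
    print("Apply under-15 defaults: restricted discovery, no in-app commerce")
```

The key design point is that the verifier and the platform exchange only a yes/no age band bound to a pseudonym, which is why this family of approaches is described as privacy‑preserving.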

Algorithms, recommender systems and the “addictive” risk

Recommender systems optimise for engagement; for minors, that can mean faster exposure to sensational or harmful material. Regulators are therefore asking platforms not only about policy settings, but about how algorithmic objectives are weighted, what safeguards (e.g., content‑type filters, age‑aware ranking) exist, and how platforms measure and mitigate systemic harms.
These are difficult questions because algorithmic models are proprietary and continuously trained. Enforcement therefore requires access to internal metrics, reproducible audits and — potentially — independent technical verification. Under the DSA, the Commission can compel access to documentation and, in some cases, inspect systems or apply interim measures.
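As an illustration of what such a safeguard can look like in practice, the following Python sketch applies an age‑aware “safety layer” after base ranking: items restricted for minors are filtered out, and age‑sensitive items are down‑ranked before the feed is assembled. The data model, labels and demotion factor are hypothetical and do not describe YouTube's or any other platform's actual pipeline.

```python
# Hypothetical age-aware safety layer applied after the base recommender.
from dataclasses import dataclass

@dataclass
class RankedItem:
    item_id: str
    score: float                # engagement score from the base recommender
    age_sensitive: bool         # set upstream by classifiers or policy labels
    restricted_for_minors: bool

def apply_safety_layer(items: list[RankedItem], viewer_is_minor: bool,
                       downrank_factor: float = 0.2) -> list[RankedItem]:
    """Filter or demote age-sensitive items before the feed is assembled."""
    if not viewer_is_minor:
        return sorted(items, key=lambda i: i.score, reverse=True)
    kept = []
    for item in items:
        if item.restricted_for_minors:
            continue                        # hard filter: never shown to minors
        if item.age_sensitive:
            item.score *= downrank_factor   # soft demotion instead of removal
        kept.append(item)
    return sorted(kept, key=lambda i: i.score, reverse=True)

feed = apply_safety_layer(
    [RankedItem("a", 0.9, age_sensitive=True, restricted_for_minors=False),
     RankedItem("b", 0.5, age_sensitive=False, restricted_for_minors=False)],
    viewer_is_minor=True)
print([i.item_id for i in feed])   # ['b', 'a']: the sensitive item is demoted
```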

Marketplace moderation and “nudify” apps

App stores operate at enormous scale; automated screening and human review are both used, but both have limits. Apps that transform images or remove clothing (“nudify” tools) occupy a legal and policy gray zone: some implementations claim editorial or artistic intent, while others can be abused to produce non‑consensual images. The Commission’s questions to Apple and Google squarely target how classification, metadata policing, keyword detection and age‑rating enforcement prevent minors from acquiring such tools.
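The sketch below illustrates one narrow slice of that problem in Python: a keyword pass over listing metadata that routes suspicious apps to human review and withholds any child‑accessible age rating in the meantime. The patterns, field names, and the screen_listing function are invented for illustration; real store review combines machine‑learning classifiers, human reviewers, and policy context.

```python
# Hypothetical metadata screening pass for store listings.
import re

# Illustrative patterns only; real systems use classifiers plus human review.
FLAGGED_PATTERNS = [r"\bnudify\b", r"\bundress\b", r"remove\s+cloth"]

def screen_listing(title: str, description: str, declared_age_rating: str) -> dict:
    """Route suspicious listings to human review and strip child-friendly ratings."""
    text = f"{title} {description}".lower()
    hits = [p for p in FLAGGED_PATTERNS if re.search(p, text)]
    return {
        "flagged_terms": hits,
        "needs_human_review": bool(hits),
        # A flagged listing should not keep a child-accessible rating while review is pending.
        "effective_age_rating": "18+" if hits else declared_age_rating,
    }

print(screen_listing("PhotoFun", "AI tool to undress photos from any picture", "4+"))
# -> flagged, routed to human review, forced to an 18+ rating pending review
```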

Legal, business and operational implications for platforms

  • Regulatory risk is real and immediate. Under the DSA’s enforcement regime, inadequate or misleading responses to requests for information can trigger fines and further actions; the stakes are not hypothetical.
  • Operational costs will rise. Platforms may need to invest in improved age‑verification options, stronger app‑store vetting, dedicated review teams for youth‑facing risks, and more transparent audit trails.
  • Privacy trade‑offs will be argued in public. Any substantial age‑verification system raises civil liberties questions; platforms and privacy authorities will have to reconcile parental protection goals with data‑minimisation obligations.
  • Product changes may follow. Expect shifts such as stricter default privacy settings for new accounts, locked‑down discovery for under‑age profiles, and expanded parental‑control toolsets.

Strengths of the EU approach — what regulators get right

  • Legal teeth and targeted supervision. The DSA provides a ready‑made legal process to demand technical detail and to escalate where necessary, forcing firms to produce documentation they would otherwise keep internal. That concentrates accountability.
  • Systemic focus, not just takedowns. By questioning algorithmic design, age verification and marketplace moderation, regulators are addressing root causes rather than only removing individual content items — a more durable approach to child safety.
  • Political momentum and cross‑national alignment. National proposals and an EU expert panel create an ecosystem for shared standards and best practices, which can reduce cross‑border fragmentation. The recent political push — including high‑profile national plans — has created a policy environment in which platforms must respond.

Key risks and unintended consequences

  • Over‑reliance on intrusive verification. Heavy-handed age verification could centralise sensitive identity data and create new privacy risks, especially if services store or mishandle identity attestations. The EU’s own pilot work recognises this trade‑off.
  • Workarounds and displacement. If major Western platforms tighten access, determined users (or bad actors) can migrate to less‑regulated services, encrypted apps, or offshore marketplaces — shifting harm rather than eliminating it.
  • Collateral censorship and design fragility. Rigid rules or algorithmic suppression designed to protect minors could inadvertently limit legitimate speech or educational/bodily‑autonomy content, creating a chilling effect if enforcement is blunt.
  • Enforcement friction across jurisdictions. Member states may adopt divergent approaches (education vs. bans), and multinational platforms will face complex compliance costs; single‑vendor enforcement may not address the broader ecosystem.

Practical steps platforms should take — an operational checklist

  • Inventory and mapping
      ◦ List all features, in‑app marketplaces, and discovery surfaces that expose minors to risk.
      ◦ Tag features by exposure risk (e.g., direct messages vs. public feeds vs. app purchases).
  • Improve age‑aware defaults (a minimal sketch follows this checklist)
      ◦ New accounts default to the strictest privacy and content settings when age is unknown or under‑verified.
      ◦ Preemptively restrict in‑app commerce for unverified accounts.
  • Strengthen app‑store controls
      ◦ Tighten metadata scanning for keywords and image‑processing libraries that enable non‑consensual imagery.
      ◦ Expand quick‑action takedown thresholds for apps reported for abuse.
  • Algorithmic accountability
      ◦ Maintain reproducible logs of recommender experiments that affect minors and make high‑level summaries available to regulators.
      ◦ Implement “safety layers” in recommendation pipelines that filter or down‑rank content flagged as age‑sensitive.
  • Parental and educational tooling
      ◦ Build accessible parental dashboards and offer age‑appropriate learning modules for families and schools.
  • Transparent engagement with regulators
      ◦ Respond promptly to information requests; set up dedicated liaison teams for DSA interactions to avoid penalties for delay.
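As a minimal sketch of the “strictest‑by‑default” idea referenced in the checklist, the Python snippet below returns locked‑down settings whenever age is unverified or below an assumed threshold of 16; the setting names and the threshold are illustrative, not any platform's documented configuration.

```python
# Hypothetical age-aware defaults for new or unverified accounts.
def default_settings(age_verified: bool, verified_age: int | None) -> dict:
    """Return the strictest defaults whenever age is unknown or below 16 (assumed threshold)."""
    minor_or_unknown = (not age_verified) or (verified_age is not None and verified_age < 16)
    return {
        "profile_visibility": "private" if minor_or_unknown else "public",
        "direct_messages": "contacts_only" if minor_or_unknown else "anyone",
        "personalised_recommendations": not minor_or_unknown,
        "in_app_commerce_enabled": not minor_or_unknown,
    }

# An unverified sign-up gets locked-down settings until age is attested.
print(default_settings(age_verified=False, verified_age=None))
```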

Practical steps regulators and policymakers should take

  • Invest in privacy‑preserving age attestation standards and pilot interoperable solutions.
  • Fund independent algorithmic audits and create a legal pathway for third‑party verification under strict privacy guarantees.
  • Prioritise cross‑border enforcement coordination to avoid jurisdiction shopping by bad actors.
  • Pair rules with funding for digital literacy in schools — bans without education risk driving children underground rather than protecting them.

Critical analysis: does this approach strike the right balance?

The EU’s current strategy is legally well‑designed to force transparency and accountability. The DSA’s investigatory powers are intentionally broad: the Commission can compel operational detail that civil‑society groups and journalists cannot access on their own. For that reason, the Commission’s targeted information requests are an effective first step to surface internal controls, enforcement metrics, and engineering trade‑offs.
However, several structural weaknesses remain:
  • Evidence gaps. Regulators rarely have access to live system logs or internal model weights without cooperation or court orders. That limits their ability to independently verify some of the most consequential claims — for example, whether a recommender system systematically amplifies harmful content to defined age groups. Critics have pointed out that some high‑profile enforcement cases under the DSA have relied on control‑plane signals rather than content‑level audits; those limitations persist unless independent forensic mechanisms are agreed.
  • Technical trade‑offs. Age verification reliable enough to prevent widespread under‑age registration typically requires strong identity signals or document checks, which cuts against data‑minimisation goals and can be exclusionary (for example, for children of migrants or low‑income households without robust ID). The EU’s own pilot work shows the technical challenge: privacy‑preserving approaches exist, but scaling them across billions of accounts is non‑trivial.
  • Fragmentation risk. Even if Brussels arrives at a single set of recommendations or a prototype identity tool, implementation will depend on member states’ choices and local statutory complements. That can mean unequal protection across the bloc — a patchwork of national rules rather than a single seamless standard.
Taken together, the EU’s enforcement push is necessary and powerful, but it is not a silver bullet. Effective protection of minors requires a mix of technical controls, robust auditing and cross‑sector social policy (education, mental‑health resources, research funding), combined with careful privacy safeguards to prevent new harms.

What to watch next (roadmap and likely outcomes)

  • Platform responses and transparency packs. Expect Apple, Google, Snapchat and YouTube to submit detailed documentation over coming weeks. The Commission may then open formal investigations or request additional data if answers are incomplete. Failure to adequately respond can lead to fines for providing incorrect or incomplete information under the DSA.
  • The expert panel and the “digital majority” debate. Brussels is setting up expert groups to weigh options; their recommendations will shape whether the Commission proposes an EU‑level tool or simply issues harmonised guidelines. Political pressure from national capitals, including proposals from Denmark and France, will influence the panel’s remit.
  • Technical pilots for age verification. The Commission’s prototype work and discussions around the EU digital identity wallet will be central. Any pilot that shows privacy‑preserving attestation can be scaled will likely accelerate policy moves; conversely, failure to find acceptable technical solutions will push governments toward national measures and parental‑consent laws.
  • Litigation and policy pushback. Expect legal challenges and advocacy campaigns on both sides: tech firms and civil‑liberties groups will litigate over privacy and platform obligations, while child‑safety advocates will press for faster, stricter measures.

Conclusion

The European Commission’s information requests to Snapchat, YouTube, the Apple App Store and Google Play are the first concrete regulatory salvo in a broader campaign to make the online ecosystem measurably safer for children. The move leverages the DSA’s investigatory powers to demand operational transparency on age gates, marketplace vetting, and recommender‑system safeguards, all of which are legitimate, urgent concerns given the documented risks of drug and vape exposure, non‑consensual image tools, and algorithmic amplification of harmful content.
Yet this is a deeply complex policy problem. Strong enforcement must be paired with privacy‑preserving age‑verification technology, independent algorithmic audit capacity, and investment in education and mental‑health services. Without those complementary strands, the Commission risks producing rules that are either too blunt (triggering overblocking and privacy harms) or too weak (leaving loopholes for bad actors to exploit). The DSA gives Brussels the legal leverage it needs; whether that power will be used to craft practical, scalable, and rights‑respecting protections for children now depends on technical choices, member‑state politics, and the platforms’ willingness to publish the operational details behind their safety claims.

Source: The Express Tribune, “EU grills Apple, Snapchat, YouTube over risks to children”