
Somewhere between “Welcome back!” and “Accept all cookies,” the modern internet has found a new favorite prompt: “Prove you’re old enough to be here.”
Age verification has been drifting around the edges of the web for years—mostly in the adult-content corners where pop-ups go to reproduce. But over the last couple of years, it’s started marching into mainstream platforms and services. And in 2026, it’s no longer a niche compliance checkbox. It’s becoming an architectural feature of how the internet works.
This piece is inspired by and builds on “Let me see some ID: age verification is spreading across the internet” by Stevie Bonifield at The Verge (updated February 24, 2026). The Verge has been tracking the issue with the kind of persistence usually reserved for firmware updates and why your printer is angry again.
What follows is the bigger picture: the laws pushing age checks into more services, the technologies being deployed (and how they fail), why app stores are being dragged into the fight, and the uncomfortable reality that “protect kids online” often collides with “don’t build an ID checkpoint for the entire internet.”
Age verification is moving from “adult sites” to “adult internet”
The shift isn’t subtle. When you start seeing age verification show up in contexts like social platforms, gaming communities, and general-purpose services, you’re no longer talking about a single industry compliance problem. You’re talking about a new identity layer being welded onto the internet.
Platforms are increasingly using age assurance to gate access to:
- Adult content (obviously), including NSFW communities and explicit material
- Messaging features (DMs, voice/video chat), which can be abused for grooming
- User-generated content that may be “harmful to minors” depending on jurisdiction
- Account creation itself (in some legislative proposals)
- Commerce and app downloads, via app-store-level age checks
The new normal looks less like “a couple of sketchy sites ask for your birth year” and more like “your entire online experience depends on whether an algorithm thinks you’re 16.” Which is… not a sentence most of us expected to say out loud in the 2020s.
What’s driving the age-gating wave?
1) Laws are getting more specific—and more enforceable
In the UK, the Online Safety Act has moved age assurance from “best effort” into “do it properly or pay up.” Ofcom has been explicit that services in scope need highly effective age assurance for pornography access, and it has published guidance on what methods can qualify—while also noting what doesn’t (like self-declaration). The deadlines and enforcement programs are now real, not theoretical.
In the EU, the Digital Services Act (DSA) requires platforms to take proportionate measures to protect minors. The European Commission has also been working on a more standardized, privacy-preserving approach to age verification, including an age-verification blueprint released in July 2025 and an enhanced second version published October 10, 2025.
In the US, the legal terrain changed dramatically when the Supreme Court upheld Texas’s porn-site age verification law in Free Speech Coalition, Inc. v. Paxton on June 27, 2025. The Court held the law was subject to intermediate scrutiny and allowed Texas to require age verification for access to content obscene to minors. That decision has been interpreted as opening the door for many more age-gating statutes nationwide.
2) Political pressure: child safety is a rare bipartisan magnet
Few issues unite lawmakers like protecting kids. Age verification is politically attractive because it’s easy to explain, hard to oppose without sounding suspicious, and pairs nicely with “big tech won’t act unless we force them.” Even when the technical implementation is messy, the messaging is simple: “If a bar checks IDs, why shouldn’t the internet?”
The snag is that the internet isn’t a bar. It’s a planet-sized set of services, many of which are free, anonymous, and global. And unlike a bartender, websites can’t glance at your face and decide you’re probably 23 unless the lighting is terrible.
3) Platforms are pre-empting regulation (or panicking)
Once a few major jurisdictions force robust age assurance, platforms face a choice:
- Build localized solutions for each region
- Adopt a global standard to simplify operations
- Geoblock entire regions (rare for large platforms, common for small ones)
The Verge’s rolling coverage shows this playing out in real time, with services experimenting, rolling back, delaying, and clarifying what they’re actually doing—especially when users interpret “age verification” as “mandatory face scans for everyone.”
Age verification vs. age assurance: the terms matter
Regulators and platforms increasingly use age assurance as an umbrella term for methods that can either verify or estimate age. Broadly:
- Age verification: “Prove you are over/under X” using evidence like government ID, a digital identity credential, or a payment instrument.
- Age estimation: “We think you’re probably over/under X” using signals like facial analysis or behavioral patterns.
Ofcom’s UK guidance emphasizes “highly effective” approaches and lists methods it considers capable of meeting that bar, including open banking, photo ID matching, facial age estimation, mobile operator checks, and digital identity services. It also explicitly says self-declared age is not highly effective.
That’s a critical point: the era of “type your birthday and promise you’re 18” is being legislated out of existence in multiple places.
The methods: how age checks actually work (and why each one is controversial)
If your mental model of age verification is “upload a driver’s license,” you’re not wrong—but that’s only one option. Many systems now combine multiple methods, triggered only when a service is uncertain about a user’s age.
1) Government ID + selfie match
This is the most straightforward approach and the one that makes privacy advocates reach for the fire extinguisher. The flow typically looks like:
- User uploads a government ID image
- User takes a selfie (or short video)
- A vendor checks the ID’s authenticity and matches selfie to ID photo
- The service receives an “over 18 / under 18” signal (ideally) rather than the raw documents
Pros: strong assurance. Cons: centralized collection of highly sensitive identity data creates a juicy target for breaches and misuse, especially if vendors store anything longer than necessary.
2) Facial age estimation (no ID)
Facial age estimation tries to infer age from a selfie without requiring the user to submit an ID. Proponents like it because it can be less intrusive than collecting documents. Critics dislike it because it’s still biometric processing, can be biased or inaccurate, and is hard to audit.
Discord’s recent back-and-forth shows how quickly public trust collapses when users believe they’re being forced into face scans. Discord has said it will expand verification options, publish more detail about vendors, and explain how its age estimation works—after backlash and confusion.
3) Payment instruments (credit card checks)
Credit card checks are sometimes used as a proxy for adulthood. They’re imperfect (minors can use family cards; adults may not have one), but they avoid uploading identity documents.
Regulators differ on whether these checks count as “highly effective” depending on implementation. Ofcom’s guidance discusses credit card checks as a potential method, but the big caveat is always: effectiveness and robustness matter.
4) Mobile network operator (MNO) age checks
Some mobile operators maintain age flags associated with accounts (often for content filtering). A service can query an MNO (directly or via a broker) to confirm whether an account is adult. This can be privacy-friendly if designed well, but it can also be error-prone if accounts are shared or registered under a parent.
5) “Open banking” / financial checks
This approach uses regulated financial identity rails to confirm adulthood without revealing too much detail. It’s more common in jurisdictions where open banking is mature and widely used. It can be strong, but it may exclude people without bank access and raises questions about whether financial identity should become a universal internet pass.
6) Behavioral / account-signal inference (“age prediction”)
This is where things get extra 2026. Some platforms are leaning on AI-driven age inference using account metadata: tenure, device signals, activity patterns, and community-level aggregates.
Discord has described an “age inference” approach that uses account and device/activity data and explicitly says it does not use message content for the process—while still requiring stronger proof (like selfie/ID) when its model can’t be confident.
The Verge has also reported on other services using AI models to identify suspected underage users and restrict accounts until verification occurs.
That raises a new class of questions: What happens when a model is wrong? What’s the appeal process? And how do you audit a system designed to infer age from patterns most users don’t understand?
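The “infer, then escalate” pattern the previous paragraphs describe can be sketched in a few lines. The tier names and confidence thresholds below are illustrative assumptions, not Discord’s (or anyone’s) actual system:

```python
# Sketch: a model produces an adult/minor prediction with a confidence
# score, and only low-confidence cases are routed to stronger verification.
def route_user(predicted_adult: bool, confidence: float) -> str:
    if confidence >= 0.95:
        # High confidence either way: apply the predicted experience directly.
        return "adult_experience" if predicted_adult else "teen_experience"
    # Uncertain: fall back to explicit verification (e.g. selfie or ID),
    # which is where the privacy trade-offs discussed above kick in.
    return "request_verification"

route_user(True, 0.99)   # confident adult: no check triggered
route_user(True, 0.60)   # uncertain: escalate to verification
```

Note what the sketch makes visible: the appeal-process question is really a question about what happens in that last branch, and to users the model confidently misclassifies.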
Case study: the UK’s Online Safety Act and “highly effective age assurance”
The UK is one of the clearest examples of a government pushing “serious” age assurance into production at scale. Ofcom began an enforcement program focused on preventing children from encountering pornographic content through highly effective age assurance, with duties coming into force in January 2025 for certain services and broader expectations tied to July 25, 2025 for others.
Ofcom’s guidance (as summarized by the UK Parliament’s House of Commons Library) lays out several practical expectations that matter beyond the UK:
- Age checks must be technically accurate, robust, reliable, and fair.
- Self-declaration isn’t good enough.
- Pornographic content should not be visible before or during the age-check process.
- Privacy rights must be respected while still protecting children.
In other words: do age gating properly, and don’t turn your verification screen into a teaser trailer for the content you’re trying to restrict.
Enforcement isn’t just theoretical money penalties
In the UK, penalties can be severe: the commonly cited maximum is the greater of £18 million or 10% of global revenue for certain failures, and regulators can also seek orders to restrict access.
And UK regulators have shown a willingness to take action. On February 24, 2026, the UK Information Commissioner’s Office (ICO) announced a £14.5 million fine against Reddit over child data handling issues tied to inadequate age assurance practices in the period before Reddit introduced stronger checks in July 2025, according to reporting by the Associated Press and The Guardian. Reddit has said it plans to appeal.
This is an important reminder that age assurance isn’t just a content policy question. It’s a data protection issue too.
Case study: the US Supreme Court and the new permission structure for age-gating
The Supreme Court’s June 27, 2025 decision in Free Speech Coalition, Inc. v. Paxton matters because it didn’t just bless one Texas law—it provided a constitutional framework other states can cite.
The Court upheld Texas H.B. 1181, which requires certain commercial porn sites to verify users are 18+ (using government ID or commercially reasonable methods relying on transactional data).
Supporters frame this as long-overdue modernization: physical-world age gates exist; online should not be an exception. Critics argue it chills adult speech, pushes users toward riskier corners of the web, and creates new privacy and security hazards.
Either way, the decision has accelerated the “copy/paste lawmaking” effect: once a legal model survives the Supreme Court, state legislatures move fast.
Why app stores are being pulled into age verification
If you want to understand why age verification is spreading, look at the shifting target: it’s moving from individual websites to centralized chokepoints.
For years, lawmakers tried to regulate content at the publisher level (the websites/apps themselves). But enforcing age verification site-by-site is difficult and creates a game of whack-a-mole. So some lawmakers have turned to app stores: Apple’s App Store and Google Play are among the largest distribution gates in consumer tech.
In the US, proposals like the App Store Accountability Act would push age verification responsibilities onto app store operators. The Verge’s February 24, 2026 live story explicitly calls out this trend, noting lawmakers pushing bills that would have app stores verify user ages.
Texas tried it—and a federal judge blocked it (for now)
Texas passed an app-store age verification law (SB 2420), but a federal judge blocked it from taking effect on January 1, 2026. Judge Robert Pitman granted a preliminary injunction and compared the law to requiring a bookstore to card every customer at the door.
That’s a vivid analogy—and it gets to the heart of the app-store approach: it potentially forces everyone to provide age information (and possibly parental consent flows) just to download any app, including completely benign ones.
Even if you support stronger child safety standards, broad app-store mandates raise hard questions:
- Do we want a system that requires identity checks to download a weather app?
- Who holds the data, and for how long?
- What happens to users who can’t or won’t verify (e.g., no ID, privacy concerns)?
- Do app stores become de facto identity providers for the entire mobile internet?
The legal battle around Texas’s law also shows that age verification policies can collide with constitutional speech and access concerns, especially when implemented at broad distribution layers.
The security problem nobody can ignore: identity data is breach bait
Whenever age verification involves collecting IDs, selfies, or other sensitive identifiers, it creates a concentration of high-value data. And in cybersecurity, we have a technical term for that: “a very bad day waiting to happen.”
The Verge’s February 24, 2026 story mentions a breach of a former vendor that leaked some scanned IDs, which is exactly the nightmare scenario critics point to: the harm from a breach isn’t “someone saw your email.” It’s “someone has a copy of your passport and a face scan.”
Even when vendors promise not to retain data, the practical reality is messy. Systems need logs, fraud detection, dispute resolution, and compliance evidence. Every “just delete it” promise tends to grow exceptions over time unless regulators and auditors enforce strict minimization.
Mitigations that actually help (if implemented)
Some design and governance choices can reduce risk:
- Data minimization by design: return only an age token (over/under threshold), not raw documents.
- On-device processing: where feasible, keep biometric processing local.
- Short retention windows: and a clear, public deletion policy.
- Vendor transparency: publish who processes what, where, and why.
- Independent audits: including security audits and fairness/bias assessments for estimation models.
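The first item on that list, returning only an age token, can be sketched with a signed claim. This is a simplified illustration (symmetric HMAC with a shared key, made-up field names); real deployments would use asymmetric signatures or standardized digital credentials:

```python
# Sketch of "data minimization by design": the verifier issues a signed
# token asserting only "over threshold" plus an expiry -- no name, no
# birth date, no document images ever reach the platform.
import hashlib
import hmac
import json
import time

SECRET = b"verifier-signing-key"  # in production: a protected, rotated key

def issue_age_token(over_18: bool, ttl_seconds: int = 3600) -> str:
    # Short-lived claim: the only attributes are the boolean and an expiry.
    claim = json.dumps({"over_18": over_18, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def check_age_token(token: str) -> bool:
    claim, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    data = json.loads(claim)
    return data["over_18"] and data["exp"] > time.time()
```

The short TTL matters as much as the minimal claim: the less a token reveals and the sooner it expires, the less there is to breach, subpoena, or correlate.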
Notably, Discord has said it intends to increase transparency about third-party vendors and publish more technical detail about its age estimation systems as part of its delayed rollout.
The privacy problem: anonymous speech and “ID to browse” don’t mix well
There’s a reason digital rights advocates keep sounding the alarm: age verification, if done broadly, can create a world where anonymous browsing becomes an exception rather than the default.
And it’s not just about embarrassment. In many contexts, anonymity protects legitimate speech:
- People seeking health information
- Victims of abuse researching resources
- Whistleblowers and activists
- LGBTQ+ users in hostile environments
- Anyone who simply doesn’t want their identity tied to their reading habits
Age verification systems can be built to avoid storing identity data, but the act of verifying still creates a potentially linkable event. If the system leaks, is subpoenaed, or is misused internally, the privacy impact can be enormous.
The Supreme Court’s decision upholding Texas’s porn-site age verification law acknowledged that adults have First Amendment rights to access legal content, but still allowed age gates as an “incidental” burden under intermediate scrutiny. That doesn’t make the privacy concerns disappear—it just means they’re less likely to block laws in court.
The accuracy problem: kids are clever, adults are collateral damage
Age verification systems have two simultaneous failure modes:
- False negatives: minors get through (the system fails at its primary goal)
- False positives: adults get blocked or restricted (the system harms legitimate users)
Self-declaration fails on the first point. Aggressive ID checks and biometric estimation often fail on the second—especially for users who don’t have standard documents, have changed names, have poor camera hardware, or simply don’t want to submit sensitive data.
Facial age estimation introduces additional complexity: age isn’t a stable visual attribute. It varies with genetics, lighting, makeup, camera quality, and demographic differences. Even a “high accuracy” model produces real-world edge cases at scale, and those edge cases turn into customer support tickets, angry posts, and occasionally lawsuits.
Europe’s “privacy-preserving” blueprint: can the internet have age checks without mass ID collection?
The EU is attempting something many policymakers elsewhere have struggled to articulate: age verification without turning platforms into ID repositories.
The European Commission has published an age-verification blueprint intended to help platforms implement “robust, user-friendly and privacy-preserving” age verification methods, with the aim of harmonizing approaches across Member States. The Commission released a second version of the blueprint on October 10, 2025, adding features like using passports and ID cards for onboarding, in addition to eIDs, to generate proof of age.
In parallel, EU regulators have shown willingness to enforce child protection expectations: the Commission opened formal investigations into multiple large porn sites in 2025 for allegedly failing to adequately protect minors under the DSA.
Blueprints are not the same thing as a universally adopted standard, but this is the direction many privacy engineers favor: cryptographic proofs and minimal disclosure, where the platform learns only what it needs (“over 18”) and nothing more.
Discord’s delay: a glimpse of how hard global age verification really is
Discord is a useful bellwether because it sits at the intersection of community chat, gaming culture, and youth usage. It also hosts a mix of content types—some innocuous, some not—and supports servers that can swing from homework help to, well, not that.
According to The Verge’s February 24, 2026 live story and a Verge report published the same day, Discord is delaying a global age verification rollout to the second half of 2026 after backlash and confusion, while still planning to expand methods (including credit card checks), publish vendor documentation, and share technical details about how its age estimation works.
This is what happens when “compliance” meets “user trust”:
- Users fear a forced biometric dragnet.
- Platforms struggle to communicate nuance (“only some users” sounds like “for now”).
- Vendors become part of the brand risk.
- Every breach story becomes a cautionary tale.
From a product perspective, the hard part isn’t just verifying age. It’s building a system that doesn’t feel like a surveillance feature—and doesn’t become one by accident.
So what happens next? The likely future of age verification online
Unless there’s a major policy reversal, age assurance is headed in a few predictable directions.
1) “Teen-by-default” experiences will spread
Instead of blocking access outright, platforms may default uncertain users into restricted experiences—limiting DMs, reducing content recommendations, disabling certain features, and restricting access to age-gated communities. This reduces risk while avoiding making ID upload the default first step.
But it also shifts the burden: adults who value full access must prove it, and people incorrectly categorized as minors lose functionality until they appeal.
2) More “age tokens,” fewer raw documents (hopefully)
We’ll likely see growth in systems that issue a reusable proof (“over 18 token”) so users don’t re-verify repeatedly across sites. This is attractive for usability, but it risks creating a cross-site identifier if implemented poorly. The EU’s blueprint direction suggests a push toward more privacy-preserving implementations.
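One hedged sketch of how a reusable token can avoid becoming a cross-site identifier: derive a different identifier per relying site from a per-user secret, so two sites cannot join their logs on a shared ID. Real privacy-preserving designs go further (blinded credentials, zero-knowledge proofs); this shows only the unlinkability idea, with illustrative names throughout:

```python
# Pairwise identifiers: same user, different site -> unlinkable IDs.
import hashlib

def pairwise_token_id(user_seed: bytes, site: str) -> str:
    # A one-way hash of (per-user random seed, site name) means neither
    # site can recover the seed or recognize the user from the other's logs.
    return hashlib.sha256(user_seed + site.encode()).hexdigest()

seed = b"per-user-random-seed"  # generated once, kept by the user's wallet/app
id_for_site_a = pairwise_token_id(seed, "site-a.example")
id_for_site_b = pairwise_token_id(seed, "site-b.example")
# The two identifiers share no visible relationship.
```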
3) Vendors become critical infrastructure—and regulators will treat them that way
As age verification vendors become embedded across platforms, they start resembling payment processors: third-party services that, if compromised or mismanaged, have ecosystem-wide impact. Expect increasing regulatory scrutiny on vendor security, retention practices, and auditability.
4) The web fragments further: regional experiences, geoblocks, and compliance gating
For smaller sites, compliance costs can be existential. Some will geoblock high-regulation jurisdictions. Others will rely on off-the-shelf vendors. Either way, the “one web” ideal takes another hit.
We’ve already seen the pattern with GDPR: compliance pressure can consolidate power in larger firms that can afford legal and engineering overhead. Age assurance could repeat that dynamic—only with IDs and biometrics in the mix.
Practical guidance: what platforms should do if they’re forced into age checks
If you run a platform or build software that may become subject to age assurance requirements, here are some principles that can reduce harm without pretending there’s a perfect solution:
- Minimize data: collect the smallest set of attributes needed for the decision.
- Make verification proportional: don’t require ID for low-risk features.
- Offer multiple methods: ID, estimation, payment checks, and trusted digital IDs—so users have options.
- Be transparent: publish vendors, flows, retention, and appeal processes.
- Plan for mistakes: build a real remediation path when adults are misclassified.
- Secure the pipeline: treat verification infrastructure like a high-value security system, because it is.
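“Make verification proportional” from the list above could look like the following sketch: map features to risk tiers and demand a strong check only at the top tier. The tiers, feature names, and check names are illustrative, not any platform’s actual policy:

```python
# Proportional verification: low-risk features never trigger an age check;
# only high-risk ones require a verified-adult signal.
RISK_TIERS = {
    "browse_public_content": "low",
    "direct_messages": "medium",
    "adult_community": "high",
}

def required_check(feature: str, inferred_adult: bool) -> str:
    tier = RISK_TIERS.get(feature, "medium")  # default unknown features to medium
    if tier == "low":
        return "none"
    if tier == "medium":
        # Inference is enough unless the model flagged the account as a likely minor.
        return "none" if inferred_adult else "age_estimation"
    return "verified_adult_signal"  # high risk: strong check required
```

The design choice worth noticing: ID upload is never the first step, only the last resort, which is exactly the opposite of what broad app-store mandates would impose.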
And if you’re a policymaker reading this (hello, and please stop using “the TikToks” in hearings), the technical reality is: you can mandate age verification, but you can’t mandate that it’s harmless. The best outcomes come from privacy-preserving architectures, narrow scope, strong enforcement against data abuse, and realistic expectations about what technology can and can’t do.
Conclusion: the internet is building an ID layer—whether it wants one or not
Age verification is spreading because it solves a real problem policymakers care about and because the legal system is increasingly tolerating it. The UK is operationalizing “highly effective age assurance” with concrete guidance and enforcement mechanisms. The EU is pushing toward standardized, privacy-preserving models. And in the US, a Supreme Court decision has lowered the legal barrier for age-gated adult content online.
The unresolved question isn’t whether age checks will exist—it’s whether we can deploy them without creating a permanent surveillance infrastructure, an irresistible data honeypot, and a paywall for anonymous speech.
If we get this wrong, the “papers, please” internet won’t just be annoying. It will be less free, less safe, and far more centralized. And ironically, kids will still find workarounds—because the only force more powerful than regulation is a teenager with Wi‑Fi and a group chat.
Sources
- The Verge — “Let me see some ID: age verification is spreading across the internet” (Stevie Bonifield, updated Feb 24, 2026)
- The Verge — “Discord is delaying its global age verification rollout” (Feb 24, 2026)
- Ofcom — Enforcement programme on age assurance for pornographic content
- UK House of Commons Library — “Implementation of the Online Safety Act”
- Ofcom — Roadmap to regulation (Online Safety Act)
- European Commission — “Commission releases enhanced second version of the age-verification blueprint” (Oct 10, 2025)
- Associated Press — Supreme Court upholds Texas porn age verification law (June 27, 2025)
- Justia — Free Speech Coalition, Inc. v. Paxton (decision details and syllabus)
- The Verge — Analysis on the Supreme Court ruling and implications for age-gating (June 27, 2025)
- The Verge — Texas app store age verification law blocked (Dec 23, 2025)
- MacRumors — Texas App Store age verification law blocked (Dec 23, 2025)
- Associated Press — UK ICO fine against Reddit tied to children’s data and age assurance (Feb 24, 2026)
- The Guardian — UK ICO fine against Reddit over children under 13 data (Feb 24, 2026)
- Le Monde — EU investigation into major porn sites under the DSA (May 28, 2025)
Bas Dorland, Technology Journalist & Founder of dorland.org