Countries Moving to Ban Social Media for Children: Australia’s Under‑16 Crackdown Spurs Europe and Southeast Asia

On March 6, 2026, TechCrunch consumer reporter Aisha Malik published a tidy list of governments “moving to ban social media for children.” It’s the kind of roundup that starts as a public-policy story and quickly mutates into a product requirements document for every platform on Earth: age verification, parental consent flows, enforcement dashboards, appeals processes, privacy guarantees, and—because the internet never misses an opportunity—political arguments about censorship.

This article uses Malik’s TechCrunch piece as a starting point and then expands the picture with additional reporting and primary-source context. If you want the original list, read it at TechCrunch. (As always, don’t blame the messenger; blame the scroll.)

What’s new in 2025–2026 is not the idea that kids should be protected online—every politician, parent, and pediatrician has said that since the first AOL chatroom—but the shift from “design guardrails” to “hard age gates.” Australia operationalized a national under‑16 account prohibition in December 2025. Spain, Denmark, Malaysia, and Indonesia have moved toward similarly blunt instruments, while France, Germany, Greece, Slovenia, and the UK are exploring variants that range from outright bans to stricter platform duties and compulsory age assurance.

Below, I’ll walk through the countries highlighted in the TechCrunch item, explain what each one is actually doing (or proposing), and then zoom out: how age verification works, why executives are suddenly getting mentioned in legal drafts, what the EU is building to standardize age checks, and what this all means for platforms, parents, schools, and—most importantly—the teenagers who can bypass half the internet with a group chat and a VPN.

Why 2026 suddenly feels like the year of the “social media minimum age”

There are three overlapping forces pushing lawmakers from “please be nicer to children” to “no account for you”:

  • Evidence and anxiety about harms. Governments are citing mental health concerns, compulsive design patterns, exposure to self-harm content, cyberbullying, sexual exploitation, and the algorithmic amplification of harmful material. The Dutch government, for example, has publicly warned about psychological and physical problems among children using social media (including panic attacks, depression and sleep issues), even while stopping short of a legal ban.
  • A new regulatory toolbox. In Europe, the Digital Services Act (DSA) gives regulators a framework to demand stronger protections for minors. The European Commission has also published guidelines under the DSA and worked on an age-verification approach intended to be privacy-aware and interoperable.
  • Australia broke the glass. Once one major democracy implements a national age ban, every other government has a live example to point to—either as a model or as a cautionary tale. Multiple countries explicitly reference Australia when explaining their own moves.

Also, there’s a political reality: proposing an “under‑16 social media ban” is a rare piece of modern governance that can sound tough on Big Tech and supportive of families in a single headline. It’s policy catnip.

The countries moving toward bans (and what they’re actually proposing)

One important caveat: the phrase “ban social media for children” often masks a wide spectrum of approaches. Some countries target account creation (no profiles under a certain age). Others target access (blocking use). Some allow parental consent carve-outs. And some mostly rely on platform duties to change feeds, limit addictive design, and prevent exposure to harmful content.

With that in mind, here is the current landscape based on TechCrunch’s list plus supporting reporting and primary sources.

Australia: the world-first national under‑16 account prohibition (now a reference point)

Australia is the policy domino everyone else keeps pointing at. TechCrunch notes that Australia became the first country to ban social media for children under 16 in December 2025, covering major platforms including Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Twitch, and Kick, while excluding WhatsApp and YouTube Kids.

Australian consumer advocacy group ACCAN summarizes the practical intent: the law targets platforms (not kids), and it is the platforms that face penalties if they don’t take reasonable steps to prevent under‑16s from holding accounts. ACCAN also lists the major platforms expected to be covered and emphasizes the policy start date of December 10, 2025.

What “ban” means in practice

Australia’s approach is especially instructive because it highlights the hardest part of any age gate: you can write “under 16 may not have accounts” in a statute in one afternoon, but you can’t enforce it without reliable (and lawful) age assurance.

That creates a cascade of technical and policy questions:

  • What counts as a “social media platform” versus messaging, gaming, or video hosting?
  • Is the requirement “no accounts” or “no access,” and what about view-only browsing?
  • How do you verify age without building a national database of faces and IDs?
  • How do you handle edge cases: kids using a parent’s device, teens misrepresenting age, or shared family tablets?

Australia’s law created the global template for those debates, including the reality that enforcement pressure often lands on platforms’ identity and trust systems, not on family decisions in the living room.

France: from parental consent under 15 to a proposed under‑15 ban

France has been tightening the screws for years. Since 2023, France has required parental consent for children under 15 to register on social media, and platforms are supposed to implement verification systems to ensure that consent exists.

In late January 2026, French lawmakers in the National Assembly approved a bill to ban social media for minors under 15, though it must still clear the Senate and further legislative steps before becoming law.

Why France matters beyond France

France sits at the intersection of two stories:

  • National-level child protection policy that is increasingly “age-limit first.”
  • EU-level regulatory infrastructure where the DSA, guidelines, and age-verification tooling can influence how enforcement works across the bloc.

If France ends up with an under‑15 ban and the EU simultaneously standardizes privacy-preserving age checks, platforms could face a much more coherent enforcement environment in Europe than in the U.S., where rules often vary state-by-state.

Spain: under‑16 access ban proposal plus personal accountability for executives

Spain has been unusually explicit about both the age gate and the broader governance agenda. Prime Minister Pedro Sánchez publicly announced a package of measures on February 3, 2026, including “banning access to social networks in Spain for minors under the age of sixteen” by forcing platforms to implement effective age verification systems.

TechCrunch’s summary adds that Spain’s government is also seeking a law that would make social media executives personally accountable for hate speech on their platforms.

The executive-liability angle (and why platforms should care)

Most platform regulation focuses on corporate fines, compliance audits, or content moderation duties. The moment lawmakers talk about personal liability for executives, the risk profile changes—boardrooms pay attention, insurance policies get rewritten, and compliance becomes a “top 3” KPI instead of a quarterly slide deck nobody reads.

Even if such provisions are watered down during legislative review, the trend signals that governments are not only concerned about what children see; they’re increasingly focused on who is responsible when systems amplify harm.

Denmark: a proposed under‑15 ban that could become law mid‑2026

Denmark has openly positioned itself as following Australia’s lead. The Associated Press reported that Denmark’s government secured an agreement with coalition and opposition parties to ban access to social media for anyone under 15, with the plans potentially becoming law as soon as mid‑2026.

The Danish plan also illustrates a classic European compromise approach: stricter limits for younger teens, with some discussion of whether parents might have limited ability to allow use for older children (details still evolving in public reporting).

Germany: conservatives float under‑16 restriction; coalition hesitancy remains

Germany’s debate shows that these bans aren’t politically inevitable, even when anxiety is widespread. TechCrunch notes that Chancellor Friedrich Merz’s conservatives discussed a proposal to bar children under 16 from social media, while coalition partners appeared hesitant.

Reuters coverage syndicated via Yahoo News also describes Merz as open to restricting children’s social media use, with a proposal from a regional CDU branch in Schleswig-Holstein to set the minimum age at 16 alongside mandatory age verification.

Why Germany’s position matters

Germany is often influential in EU regulatory debates. If Germany pushes for a strict age limit across major platforms, it could accelerate EU-wide standards—especially around age verification and the privacy implications of turning every sign-up flow into a mini KYC process.

Greece: reported to be close to an under‑15 ban announcement

TechCrunch, citing Reuters, reports that Greece is close to announcing a social media ban for children under 15.

At the time of writing (March 6, 2026), Greece’s exact legislative vehicle and enforcement model are less clearly documented in primary sources than Spain’s or Australia’s. That doesn’t mean it isn’t happening—only that details can remain fluid until a bill is tabled.

Slovenia: drafting legislation to prohibit under‑15 access

Slovenia is drafting legislation to prohibit children under 15 from accessing social media, according to TechCrunch’s summary of Reuters reporting, which names platforms like TikTok, Snapchat, and Instagram.

This is another example of a government aiming at the “big three” youth-facing apps rather than trying to define the entire universe of online interaction—though, inevitably, policy definitions tend to broaden once lawyers start drafting.

Malaysia: a planned under‑16 account ban starting in 2026

Malaysia has been explicit about its timeline: the Associated Press reported that Malaysia plans to ban social media accounts for people under 16 starting in 2026.

Malaysia’s move fits into its broader regulatory direction. For instance, the Malaysian Communications and Multimedia Commission (MCMC) has described a regulatory framework aimed at a safer internet for children and families, including licensing requirements for large social media and messaging services operating in Malaysia.

Southeast Asia’s distinct enforcement environment

Compared with the EU, Southeast Asian enforcement can look less standardized but sometimes more directly operational, in the sense that governments may rely on licensing, platform permissions, and local compliance obligations. That can give regulators leverage—though it also raises concerns about overreach and the blurring line between child safety and broader speech control.

Indonesia: a new under‑16 restriction targeting “high-risk digital platforms”

On March 6, 2026, the Associated Press reported that Indonesia’s communications minister said the government signed a regulation meaning children under 16 can no longer have accounts on “high-risk digital platforms,” naming YouTube, TikTok, Facebook, Instagram, Threads, X, Bigo Live, and Roblox.

TechCrunch similarly notes Indonesia’s plan to ban children under 16 from using social media and other popular platforms, starting with the same set of services.

The Roblox and YouTube question

Indonesia’s list is notable because it blends classic “social media” (Instagram, TikTok, X) with services that are sometimes treated as adjacent categories:

  • YouTube (video hosting and recommendation algorithms, often used passively)
  • Roblox (a game platform with social features and messaging)

That’s a preview of how these bans may evolve: lawmakers increasingly focus on social functionality + algorithmic feeds + messaging, regardless of whether the company’s marketing says “we’re entertainment” or “we’re gaming.” If the product behaves like a social network, it may get regulated like one.

United Kingdom: considering a ban, while already enforcing tougher child-safety rules

TechCrunch notes the UK is “weighing a ban” on social media for under‑16s, including a plan to consult parents, young people, and civil society and to consider limiting compulsive features like endless scrolling.

Separately, the UK has already taken enforceable steps under its online safety regime: from 25 July 2025, services must apply robust age checks for certain harmful content categories, with Ofcom holding enforcement powers and the ability to levy large fines for noncompliance.

The UK approach, at least so far, looks less like “no accounts under X” and more like “if you run a service with risk, you must prove you’re protecting children.” But the consultation and political pledges suggest the UK could still converge with the stricter age-ban model.

A quick “status board” (March 6, 2026)

Because this story moves fast, here’s a grounded snapshot as of Friday, March 6, 2026:

  • Implemented / in force: Australia (under‑16 restrictions effective December 10, 2025).
  • Announced / moving to implement: Indonesia (under‑16 accounts restricted on named platforms via regulation; announced March 6, 2026).
  • Planned for 2026: Malaysia (AP reports under‑16 account ban starting in 2026).
  • Proposed / in legislative process: France (under‑15 bill passed National Assembly; Senate steps pending).
  • Proposed / announced intent: Spain (PM announcement of under‑16 access ban via age verification requirement).
  • Proposed / political discussion: Denmark (under‑15 plan could become law mid‑2026).
  • Under discussion: Germany (proposal discussed by conservatives; coalition hesitancy).
  • Drafting / reported close: Slovenia (drafting under‑15 legislation); Greece (reported close to under‑15 ban announcement).
  • Considering / consulting: UK (weighing ban; also tightening duties under online safety rules).

How age verification actually works (and why everyone hates it)

Age verification sounds straightforward until you try to implement it at internet scale without turning every login into a passport checkpoint. In reality, most systems fall into a few buckets:

1) Self-declared age (a.k.a. “please don’t lie”)

This is what most social networks historically used: the user enters a birthdate. It’s cheap, accessible, and almost entirely ineffective against motivated teenagers. It’s also why lawmakers are no longer impressed.

2) Document-based verification (ID scans, eKYC, passport checks)

This is closer to financial KYC. It can be accurate, but it raises obvious privacy and data-retention concerns. It also increases friction and can exclude users who lack documentation. Some proposals in different jurisdictions have referenced eKYC-style approaches, though implementation details vary widely.

3) Biometric age estimation (face scans / “how old do you look?”)

Facial age estimation can be fast and avoids storing an ID document—if implemented with privacy safeguards and minimal retention. But it still involves biometric processing, which is sensitive in many legal regimes. The UK government itself has described “facial age estimation” among examples of highly effective age assurance methods for blocking minors from certain harmful content categories.

4) Device-based methods and privacy-preserving tokens

This is the direction many policymakers want: verify once, then present a minimal “over/under” proof. The European Commission has described blueprints and an age-verification app prototype under the DSA, and positioned EU Digital Identity Wallets as part of the long-term solution (with interim methods before wallets become available).
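To make the “over/under proof” idea concrete, here’s a minimal sketch of how a platform might verify a signed age token without ever seeing a birthdate or document. The token format, claim names, and choice of Ed25519 are assumptions for illustration, not the EU’s actual wallet protocol.

```typescript
import { verify } from "node:crypto";

// Claims inside the signed payload: one bit plus metadata, no identity.
// (Hypothetical format — real schemes would add issuer IDs, nonces, etc.)
interface AgeClaims {
  overThreshold: boolean; // the single fact the platform learns
  threshold: number;      // which age gate the proof was issued for (15, 16, 18)
  expiresAt: number;      // epoch ms; short-lived so proofs can't be hoarded or shared
}

function checkAgeProof(
  payloadB64: string,         // base64(JSON.stringify(AgeClaims))
  signatureB64: string,       // issuer's Ed25519 signature over the payload
  issuerPublicKeyPem: string, // trusted age-assurance issuer's public key
  requiredThreshold: number,  // this jurisdiction's minimum age
): boolean {
  const payload = Buffer.from(payloadB64, "base64");
  // 1. Only trust claims whose signature verifies against the issuer's key.
  const sigOk = verify(
    null, // Ed25519 takes no digest algorithm
    payload,
    issuerPublicKeyPem,
    Buffer.from(signatureB64, "base64"),
  );
  if (!sigOk) return false;
  const claims: AgeClaims = JSON.parse(payload.toString("utf8"));
  // 2. Reject stale proofs and proofs issued for a lower age gate.
  if (Date.now() > claims.expiresAt) return false;
  if (claims.threshold < requiredThreshold) return false;
  // 3. The platform learns exactly one bit: over the threshold or not.
  return claims.overThreshold;
}
```

The design choice worth noticing: the platform never receives an ID, a face, or even a birth year—only a signed yes/no bound to a specific threshold and a short expiry window.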

5) Parental verification and delegated consent

France’s parental-consent model for under‑15 account creation reflects this approach, though the recurring question is enforcement: how do you verify that consent is real and not forged by a highly resourceful 13-year-old with access to a parent’s email account?
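If a platform does build delegated consent, the record-keeping itself becomes a compliance surface: regulators will want to know when consent was granted, how the parent was verified, and whether it was later revoked. A sketch of what such a record might carry (field names are illustrative, not mandated by French or any other law):

```typescript
// Illustrative shape for an auditable parental-consent record.
// Every field name here is an assumption, not a statutory requirement.
interface ParentalConsentRecord {
  childAccountId: string;
  // How the consenting adult proved they are an adult (and plausibly a parent).
  parentVerificationMethod: "id_document" | "payment_card" | "eid_wallet";
  parentVerifiedAt: Date;        // when that verification actually happened
  consentGrantedAt: Date;
  consentRevokedAt: Date | null; // revocation should be as easy as granting
  jurisdiction: string;          // consent rules differ by country, e.g. "FR"
  auditTrail: { event: string; at: Date }[]; // who did what, when
}
```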

The trade-off nobody can avoid: child protection vs. privacy vs. inclusion

Whenever governments demand stronger age gates, three concerns show up immediately:

  • Privacy: Age verification can become a backdoor identity system if not designed carefully. Centralized databases of IDs or biometrics are tempting targets for attackers and can be misused.
  • Inclusion: Strict verification can lock out legitimate users (including adults) who can’t or won’t provide certain credentials. It can also disproportionately affect migrants, low-income families, and anyone without stable documentation.
  • Effectiveness: If verification is too easy to bypass, the system becomes theater. If it’s too strict, it becomes exclusionary and triggers backlash.

That’s why the EU’s work on guidelines and blueprints matters. It signals a push toward solutions that are both enforceable and compatible with privacy and security norms—at least in theory.

What platforms will have to build (whether they like it or not)

If your product team works on sign-up flows, trust & safety, or identity systems, “age bans” aren’t a political headline—they’re a multi-quarter engineering program. In practical terms, platforms facing these regulations may need to implement:

  • Age assurance at account creation with jurisdiction-aware rules (a user in Sydney isn’t the same as a user in Stockholm; see the sketch at the end of this section).
  • Ongoing age checks when usage patterns suggest a minor is using an adult-claimed account.
  • Parental consent tooling (where legally required), including revocation, auditing, and safe default settings.
  • Minor-safe defaults: restrictions on DMs, search discoverability, contact suggestions, and recommendation systems.
  • Appeals and remediation processes when adults are mistakenly flagged as underage, or when teens are locked out incorrectly.
  • Data minimization and retention policies to avoid storing more identity information than necessary.

And because the internet is global, the compliance team will inevitably have the same conversation in 17 time zones: “Is this feature ‘social media’?”
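The conversation right after that one is “what’s the threshold here?” To make the jurisdiction-aware piece concrete, here’s a minimal rules lookup. The thresholds mirror the proposals described in this article as of March 2026, but several are still bills rather than law, so treat the table as illustrative, not as compliance guidance.

```typescript
// A minimal jurisdiction-aware age-gate lookup. Values reflect this article's
// snapshot (March 2026); several entries are proposals, not enforceable law.
type GateKind = "account_ban" | "parental_consent" | "platform_duties";

interface AgeRule {
  minAge: number;
  kind: GateKind;
  status: "in_force" | "announced" | "proposed";
}

const AGE_RULES: Record<string, AgeRule> = {
  AU: { minAge: 16, kind: "account_ban",      status: "in_force" },  // since Dec 10, 2025
  ID: { minAge: 16, kind: "account_ban",      status: "announced" }, // "high-risk platforms"
  MY: { minAge: 16, kind: "account_ban",      status: "proposed" },  // planned for 2026
  FR: { minAge: 15, kind: "parental_consent", status: "in_force" },  // under-15 ban bill pending
  DK: { minAge: 15, kind: "account_ban",      status: "proposed" },
  ES: { minAge: 16, kind: "account_ban",      status: "proposed" },
  GB: { minAge: 18, kind: "platform_duties",  status: "in_force" },  // age checks for harmful content
};

// At sign-up: does this user, in this jurisdiction, need age assurance at all?
function requiresAgeAssurance(countryCode: string, declaredAge: number): boolean {
  const rule = AGE_RULES[countryCode];
  if (!rule || rule.status !== "in_force") return false; // no enforceable gate (yet)
  // Verify near-threshold claims too: a self-declared "17" against a 16 gate
  // is exactly the claim lawmakers no longer trust.
  return declaredAge < rule.minAge + 2;
}
```

Even this toy version shows why a hard-coded single-country check won’t survive: the threshold, the gate type, and the enforcement status all vary per jurisdiction and change on political timelines, not release cycles.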

Will bans work? The realistic outcomes (and the likely unintended consequences)

Whether these bans reduce harm depends on how you define success and how you measure it. Some realistic possibilities:

Outcome A: fewer underage accounts on mainstream platforms

This is the most straightforward success metric. If platforms enforce age gates, many kids will be pushed off the biggest services. For policymakers, that may be enough.

Outcome B: displacement to smaller, less regulated platforms

If mainstream apps become hard to access, teenagers don’t stop being social; they migrate. That can mean niche apps with weaker moderation, poor security practices, and higher exploitation risk. This is one of the core worries critics raise whenever bans are proposed.

Outcome C: increased account-sharing and “borrowed identities”

Expect a growth in:

  • Parents creating accounts “for” children
  • Older siblings serving as identity proxies
  • Secondary devices with weaker controls

This complicates enforcement and can reduce visibility into how minors are actually using the platforms.

Outcome D: healthier defaults for everyone

One optimistic scenario is that even if bans are imperfect, they force platforms to redesign harmful engagement patterns—especially for minors. TechCrunch notes the UK is considering limiting features that drive compulsive use like endless scrolling.

If platforms must prove they’re not algorithmically feeding harmful content to children—an idea embedded in multiple regulatory approaches—then the impact could extend beyond age gates.

Industry context: why “social media” is now “high-risk digital platforms”

Indonesia’s wording is revealing: “high-risk digital platforms.”

Regulators are increasingly focusing on risk profiles rather than labels. A platform can be called:

  • a video site,
  • a game,
  • a messaging app,
  • a livestream service,
  • or a community forum,

…and still expose minors to the same underlying hazards: strangers, DMs, grooming, algorithmic rabbit holes, toxic content, and pressure loops built around engagement metrics. Once lawmakers accept that, the definition of “social media” expands to anything with social graphs, UGC, and recommendation engines.

Expert perspectives (what researchers and regulators keep emphasizing)

Across jurisdictions, the consistent themes from regulators and policymakers include:

  • Algorithms and engagement design matter—not just content moderation after the fact.
  • Age assurance needs standards, otherwise every platform will build its own brittle, invasive system.
  • Enforcement must be credible: large fines, clear duties, and regulator capacity.

The EU’s DSA-linked guidance process underscores this: the Commission has described guidelines meant to help assess compliance with DSA obligations for platforms that allow minors, alongside an age-verification app prototype and blueprints.

In the UK, the government’s online safety communications emphasize enforceable requirements and large penalties for noncompliance, framing online harms as “real” and calling out “toxic algorithms.”

What U.S. readers should watch (even if this is “over there”)

Even though the TechCrunch list is international, the implications are global because platforms typically ship one product with localized gates—not 193 different products.

In the United States, the policy environment has been fragmented, with various states exploring age verification, parental consent, or restrictions for minors. But one clear trend is that lawmakers are watching what Australia and Europe implement—and then asking why their own regulators can’t do the same.

For U.S. companies, this translates into three immediate pressure points:

  • Compliance complexity: a patchwork of state laws plus international bans will drive “geo-compliance” systems deeper into core account infrastructure.
  • Litigation risk: both constitutional challenges and privacy suits can emerge when identity verification expands.
  • Product design constraints: youth experiences may be forced into default “safe modes,” with limits on recommendations, DMs, and discoverability.

Practical takeaways for parents, schools, and developers

If you’re a parent

Even with bans, your practical toolbox remains the same: device-level controls, family media plans, and ongoing conversations. Age gates can reduce exposure, but they can’t replace literacy and supervision—especially when kids can “outsource” identity to older friends.

If you’re a school leader

Expect policy whiplash. If national bans reduce mainstream usage, social dynamics may move to messaging and gaming. Schools may see less TikTok drama and more encrypted group-chat conflict. Plan digital citizenship efforts accordingly.

If you’re a platform engineer or product manager

Assume you will need a robust “age assurance abstraction layer” in your identity stack. If you build it as a hard-coded, one-country compliance patch, you’ll rebuild it again next quarter when another government announces a new threshold (15, 16, 18) and a different definition of “covered service.”
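Concretely, that layer might expose one narrow call to the sign-up flow, with thresholds and verification vendors swappable behind it. A hypothetical shape (interface and names invented for illustration, not any vendor’s actual API):

```typescript
// Hypothetical "age assurance abstraction layer": one interface the sign-up
// flow depends on, with jurisdictions and vendors swappable behind it.
interface AgeAssuranceProvider {
  // One pluggable method: facial estimation, document check, wallet token...
  verifyOver(
    userId: string,
    threshold: number,
    evidence: unknown,
  ): Promise<{ over: boolean; confidence: number }>;
}

class AgeAssuranceLayer {
  constructor(
    private rules: Map<string, number>,        // countryCode -> minimum age
    private providers: AgeAssuranceProvider[], // ordered by preference/privacy
  ) {}

  async assureAge(userId: string, countryCode: string, evidence: unknown): Promise<boolean> {
    const minAge = this.rules.get(countryCode);
    if (minAge === undefined) return true; // no gate in this jurisdiction (yet)
    for (const provider of this.providers) {
      const result = await provider.verifyOver(userId, minAge, evidence);
      if (result.confidence >= 0.9) return result.over; // first confident answer wins
    }
    return false; // fail closed when no provider is confident
  }
}
```

The point of the abstraction is the constructor, not the loop: when the next government announces a new threshold or a new approved verification method, you change configuration and providers, not your account-creation code.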

The bottom line

The global shift is unmistakable: governments are no longer satisfied with “minimum age 13” buried in terms of service and enforced by a birthday drop-down. Australia’s under‑16 ban has created a reference model. Spain is promising an under‑16 ban plus sharper accountability for platform leadership. France and Denmark are moving toward under‑15 restrictions. Malaysia and Indonesia are taking decisive steps in Southeast Asia. And the UK and EU are building regulatory and technical scaffolding—guidelines, enforcement regimes, and age-verification prototypes—that could make these bans more practical (and more intrusive) at the same time.

If you’re a social platform, 2026 is the year your sign-up screen becomes a policy battleground. If you’re a parent, 2026 is the year you learn more about age assurance than you ever wanted. And if you’re a teenager… well, you’ll probably just teach everyone else how it works.

Bas Dorland, Technology Journalist & Founder of dorland.org