Breaking: Australia Bans Social Media for Under-16s: A Global First

On 10 December 2025, Australia became the first country in the world to enforce a national law that prevents individuals under the age of 16 from holding or creating accounts on major social media platforms.

The law, formally known as the Online Safety Amendment (Social Media Minimum Age) Bill 2024, amends the existing Online Safety Act 2021 to impose new online safety obligations on technology companies.

It targets what the government defines as “age‑restricted social media platforms” and places the responsibility for enforcement squarely on the platforms themselves.

The Albanese government introduced the law with the stated goal of protecting children and teenagers from documented harms associated with social media use. (Reuters Files)

These harms include exposure to cyberbullying, psychological stress, addictive design elements, and harmful or manipulative content that could affect mental health.

Supporters of the law say that requiring platforms to prevent under‑16s from having accounts will reduce these risks and give young people more time to develop digital resilience before they join online social networks.

What the Law Requires

Under the amended legislation, platforms must take “reasonable steps” to ensure that Australians under the age of 16 do not have social media accounts on designated services.

The law does not criminalize children or their parents; instead, it imposes compliance duties on platforms.

Platforms that fail to implement appropriate age‑restriction measures risk fines of up to A$49.5 million (about US $32–33 million) for each breach.

Government regulators have identified an initial list of 10 age‑restricted social media platforms that are subject to the law, including:

  • Facebook
  • Instagram
  • TikTok
  • Snapchat
  • X (formerly Twitter)
  • YouTube
  • Reddit
  • Twitch
  • Kick
  • Threads

These platforms must deactivate any existing under‑16 accounts and prevent the creation of new ones for users under 16 in Australia.

To comply, companies must adopt a range of age assurance and age‑verification processes.

How Age Verification May Work

The legislation says that platforms must take reasonable steps to restrict under‑16s, but it does not prescribe a single method for age verification or enforcement, leaving companies flexibility in how they comply.

Possible approaches include:

  • Document‑based Verification: Platforms can allow users to provide official identification, such as passports, driver’s licenses, or government ID cards, for age verification through trusted third‑party services.
  • AI and Behavioral Inference: Companies may use artificial intelligence and behavioral data to infer user age based on patterns of use and metadata.
  • Biometric Age Estimation: Some platforms could deploy facial analysis or biometric solutions to estimate age from a user’s photo or video selfie.

While these methods offer flexibility, they also raise concerns about privacy, data security, and accuracy.

For example, critics note that AI‑based age estimation can produce errors, and extensive ID collection could create risks if user data is mishandled.

Nevertheless, the law itself does not mandate a specific technology, and platforms may innovate within the regulatory framework to meet the “reasonable steps” standard.

What Under‑16s Can Still Access

Despite the account restrictions, children under 16 are still permitted to view publicly accessible content on social media platforms without logging in.

For example, they may still watch videos on YouTube or browse public pages on Instagram as long as the content does not require a registered account to access.

This distinction reflects the law’s focus on preventing under‑16s from interacting and posting while allowing access to passive content consumption.

Additionally, several types of digital services are exempt from the age restrictions, including educational platforms, professional tools, email services, and messaging apps that do not primarily enable social interaction between users.

This ensures that communication and learning tools commonly used by children remain accessible.

Government and Regulator Roles

The eSafety Commissioner, Australia’s national online safety regulator, will play a central role in overseeing compliance.

The regulator will issue guidance to platforms, monitor enforcement actions, and may require regular reporting from companies on how many under‑16 accounts they remove or block.

The Commissioner’s office also assists platforms in understanding how to apply age‑restriction definitions and enforcement procedures in practical terms.

Communications Minister Anika Wells has stated that monthly reporting and six‑month follow‑ups are expected as part of the government’s oversight to ensure that compliance improves over time.

This structured approach signals that regulators intend to treat enforcement as an ongoing process rather than a one‑off action.

Supporters’ Perspective

Proponents of the law argue that it aligns with broader efforts to protect children’s mental health and well‑being.

The Office of the eSafety Commissioner has highlighted research linking younger adolescents’ social media use with increased stress, anxiety, reduced sleep, and other well‑being concerns.

Delaying introduction to social media until age 16 may allow minors more time to develop social and emotional skills while reducing early exposure to potentially harmful digital environments.

Advocates also emphasize that the law places the responsibility for safety where it belongs: with platforms that design and operate environments where interactions occur.

By forcing companies to innovate and implement age‑appropriate safeguards, Australia hopes to set a precedent for global internet safety standards.

Criticism and Practical Challenges

Despite support from some child safety advocates, the law has drawn substantial criticism from technology companies, legal experts, and digital rights organizations.

Tech giants such as Google and Meta have called the enforcement framework “extremely difficult” to implement and argued that simply deactivating accounts will not address underlying safety risks.

They caution that users circumventing age checks may lose access to built‑in safety features like content filters, which often require logged‑in accounts to function effectively.

Critics also warn that stringent age restrictions could inadvertently push young users toward unregulated or less safe corners of the internet, including private chat apps, anonymous forums, or peer‑to‑peer communication services where moderation is limited or absent.

These spaces may pose equal or greater risks than regulated platforms, diminishing the intended protective effect of the law.

Moreover, legal challenges have already emerged.

Advocacy groups such as the Digital Freedom Project, and even some platforms such as Reddit, have raised constitutional concerns, arguing that the ban may conflict with Australia’s implied freedom of political communication.

These challenges are likely to be tested in the High Court of Australia, potentially shaping the future of the law’s enforcement and scope.

Global Influence and Next Steps

Australia’s social media age restriction has attracted international attention and debate.

Some jurisdictions, including parts of Europe and policymakers in the United States and United Kingdom, are closely monitoring the outcomes as they consider similar child safety measures.

Early public opinion surveys in other countries suggest growing support for age‑based limits on social media, although approaches vary widely depending on legal, cultural, and technological contexts.

Looking ahead, the effectiveness of Australia’s law will depend on how platforms implement age‑restriction systems, how regulators enforce compliance, and whether the policy truly reduces harm without creating unintended negative consequences.

As the law continues to operate in practice, both supporters and critics will watch closely to assess its impact on children, families, and the global digital landscape.
