
The technology meant to enforce Australia’s under‑16 social media ban has stumbled, hard. What began as a watershed moment for online safety risks becoming a cautionary tale of haphazard technology deployment and a generation pushed into the digital shadows.
When Australia’s social media ban for children under the age of 16 went live on 10 December, it carried the weight of parental and global expectations. The Australian government determined it needed to take back power from major technology companies and platforms and protect children online. The first-of-its-kind legislation had bipartisan support and the world’s attention.
Two months later, the reality looks less like a carefully orchestrated policy triumph and more like inexperienced bouncers checking IDs at a nightclub, unable to distinguish between a mature-looking underage teenager and a youthful-looking adult.
In the rush to comply, and because the law is deliberately tech-neutral, platforms rolled out a mixed bag of solutions: Meta used selfie-based facial age estimation; TikTok used existing account data to estimate age; and Snapchat, YouTube and others relied on user-declared age data. There is even a third-party age assurance service based in Singapore that layers government ID checks, bank SMS confirmation and selfie analysis. As long as platforms justified their chosen method as ‘reasonable steps’, they could implement whatever approach suited them.
The problems started almost immediately (and have been well documented). Facial age-estimation tools, meant to be a privacy-friendly verification option, have proven spectacularly unreliable. Eleven-year-olds were identified as 30. Sixteen-year-olds, legitimately old enough for access, were locked out. Tech-savvy teens discovered that drawing on fake facial hair with makeup, and in one case recruiting a pet dog, was enough to fool the algorithms.
Meanwhile, the use of virtual private networks has surged. Downloads of fringe apps such as Yope and Lemon8 have exploded, increasing by up to 251 percent. And platforms exempted from the ban—including Discord, Roblox and Steam—have quietly become new teenage town squares, complete with all the unmoderated risks the legislation was meant to address.
The government can point to 4.7 million accounts disabled in the first month as proof the system is working. But researchers warn that figure is deeply misleading because many teens maintain multiple accounts across platforms. The number of actual children affected is very likely far lower. Critically, the number of disabled accounts tells us nothing about where those displaced teens went next (and it was probably not bike-riding or the library).
The government’s own Age Assurance Technology Trial concluded that no single verification method suited all use cases, and that none of the methods examined was guaranteed to be effective in every deployment.
Part of the problem is that the law prohibits platforms from requiring government-issued ID as the sole verification option. The privacy intention was laudable, but it has produced unintended outcomes.
The law mandates ‘ringfencing’: platforms must segregate age verification data from their advertising algorithms and destroy it after verification. Good in theory. But try enforcing technical data architecture requirements when you haven’t specified what verification methods are actually acceptable. The eSafety Commissioner is attempting oversight without clear technical standards. Platforms are implementing whatever they think might pass muster and nobody really knows if any of it will survive legal scrutiny.
Unable to use the most privacy-preserving verification method available—government digital identity wallets with zero-knowledge proofs—platforms have been forced into exactly the invasive alternatives the law was meant to prevent: behavioural profiling of children, error-prone facial age estimation, and offshore commercial third-party services that increase data-breach risks.
Better solutions exist right now. Zero-knowledge proof systems, actively implemented in France, Denmark, Greece, Italy and Spain as part of the European Union’s age verification blueprint, could provide exactly what Australia needs: a way to prove you are above a chosen age threshold without creating surveillance infrastructure or sharing your name, address or full date of birth.
Here’s how it could work: a government agency issues a reusable digital age credential. You store it in a secure wallet on your device. When a platform requests age verification, you present cryptographic proof that you’re over 16. Nothing more. The platform gets a simple yes/no answer. Neither the government nor the age-verification provider learns which platforms you access. The platform never learns your identity. And you’re not uploading your passport to foreign commercial services hoping they don’t get breached.
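The flow above can be sketched in a few lines of code. This is an illustration of the data flow only, not real cryptography: an HMAC stands in for the issuer’s signature (a production system would use an actual zero-knowledge credential scheme, so the verifying platform never holds the issuer’s key), and every name in the sketch is hypothetical.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical sketch of the credential flow described above. An HMAC stands
# in for the issuer's digital signature; in this simplification the platform
# must hold the issuer's key, which a real zero-knowledge scheme avoids.

ISSUER_KEY = secrets.token_bytes(32)  # held by the government issuer

def issue_age_credential(over_16: bool) -> dict:
    """Issuer attests only to the boolean 'over 16' claim -- no name,
    address or full date of birth is embedded in the credential."""
    claim = json.dumps({"over_16": over_16}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def platform_verify(credential: dict, verify_key: bytes) -> bool:
    """Platform checks the attestation and learns only a yes/no answer."""
    claim = credential["claim"].encode()
    expected = hmac.new(verify_key, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["tag"]):
        return False  # forged or tampered credential
    return json.loads(claim)["over_16"]

# The wallet stores the credential on-device; the platform sees one boolean.
cred = issue_age_credential(over_16=True)
print(platform_verify(cred, ISSUER_KEY))  # prints True
```

The point of the design is visible in the two function signatures: the issuer never sees where the credential is presented, and the platform receives nothing it could link back to an identity.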
France calls it ‘double anonymity’. Others call it privacy-preserving age assurance. Whatever the label, it addresses both the child-safety imperative and civil-liberties concerns without forcing impossible trade-offs.
Australia already has digital identity infrastructure through myGovID. Expanding it with age-credential functionality and multi-modal biometrics, and mandating that platforms accept it, could deliver the genuine reform—not just the optics of reform—that Australian families deserve. Add flexible pathways, such as parental consent for 14- to 15-year-olds rather than an absolute ban, and Australia would have a child-safety framework genuinely worthy of the world-first label it has already claimed.
The live experiment will continue as two High Court challenges, filed by the Digital Freedom Project and Reddit, work through the system. Both strike at the heart of the policy’s legitimacy. A graduated, consent-based approach alongside a strengthened digital identity framework would not only narrow the constitutional vulnerability at the centre of both challenges but also deliver what no court ruling can: a system that actually works.