Canberra: Australia’s ambitious plan to block children under 16 from using social media has hit a major roadblock, as a new government-commissioned report raises doubts about the accuracy of the very technology meant to enforce the law. With the ban set to take effect in December, policymakers now face pressure to balance child safety, user privacy, and technological limitations.
The report, released on Monday, evaluated selfie-based age-verification systems, which scan facial features to estimate a user’s age. While the technology generally performs well for adults, it proved unreliable near the legal cutoff of 16: individuals exactly 16 years old were misclassified as underage in 8.5% of cases, an error rate the report deemed too high for national enforcement.
Beyond accuracy concerns, the software also showed inconsistencies across different demographics. Experts noted that non-Caucasian users, female-presenting individuals, and teenagers close to the age threshold were more likely to be misidentified. This raises concerns about fairness, potential bias, and the risk of young people being wrongfully excluded from platforms despite meeting the legal age requirement.
University of Sydney media researcher Justine Humphry warned that the technology’s inconsistencies could undermine the rollout. “The government is pushing for a December deadline, but the verification systems are still in a grey zone. Without greater reliability, the policy risks technical and ethical failures,” she said.
The government, however, remains firm on the timeline. Communications Minister Anika Wells acknowledged the shortcomings but stressed that the trials showed the tools to be viable, privacy-conscious, and promising. “No system is perfect, but there are practical, private, and effective methods available,” she said, suggesting that ID checks or parental consent could supplement the facial scans.
Under the new law, major platforms including Instagram, TikTok, Snapchat, and YouTube will be required to take “reasonable steps” to block under-16 users. Non-compliance could result in fines of up to A$49.5 million (US$32 million), marking one of the most stringent regulatory frameworks on youth internet access in the world.
As the deadline looms, the debate highlights a deeper global dilemma: how to protect young users from harmful online content while respecting privacy and ensuring technological fairness. Australia’s experiment could become a test case for other nations, but its success will depend on whether regulators and tech firms can bridge the gap between child protection policy and the imperfect realities of artificial intelligence.