Australia’s internet regulator has accused the world’s biggest social platforms of not adequately implementing the country’s prohibition preventing under-16s from accessing their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards to stop new account creation. In its initial compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Exposed in First Major Review
Australia’s eSafety Commissioner has outlined a concerning pattern of non-compliance among the world’s biggest social media platforms in her inaugural review since the ban took effect on 10 December. The report finds that Facebook, Instagram, Snapchat, TikTok and YouTube have collectively failed to implement sufficient safeguards to prevent minors from accessing their services. Julie Inman Grant expressed particular concern about structural gaps in age verification processes, highlighting that some platforms have allowed children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings indicate a notable intensification of regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has made clear that merely demonstrating some children still maintain accounts is inadequate; platforms must instead provide concrete evidence that they have established robust systems and processes designed to prevent under-16s from opening accounts in the first place. This shift demonstrates the government’s determination to hold tech giants responsible, with potential penalties looming for companies that fail to meet their statutory obligations.
- Permitting previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without consequence
- Insufficient mechanisms to block new under-16 accounts from being established
- Inadequate notification systems for families and the wider community
- Lack of clear information about enforcement efforts and account terminations
The Scope of the Problem
The substantial scale of social media activity amongst Australian young people highlights the regulatory challenge facing both the authorities and the platforms themselves. With millions of accounts already restricted or removed since the ban’s implementation, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to enforcing age restrictions have turned out to be considerably more complex than anticipated, with platforms struggling to differentiate authentic age confirmations from fraudulent ones. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are adequate to the task.
Beyond the operational challenges lies a broader concern about the readiness of companies to place compliance ahead of user growth. Social media companies have long resisted stringent age verification measures, citing data protection worries and the genuine difficulty of confirming age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to implement the systems required by law. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance infrastructure, or they stand to incur substantial penalties that could reshape their business models in Australia and potentially influence regulatory approaches internationally.
What the Statistics Demonstrate
In the first month after the ban’s introduction, Australian authorities reported that 4.7 million accounts had been suspended or removed. Whilst this figure initially appeared to demonstrate enforcement effectiveness, subsequent analysis reveals a more layered picture. The sheer volume of account removals implies that many under-16s had been able to set up accounts in the first place, indicating that preventative measures were lacking. Additionally, the data raises doubts about whether deleted profiles constitute genuine enforcement or simply users removing their accounts of their own accord in reaction to the new restrictions.
The minimal transparency surrounding these figures has disappointed independent observers seeking to assess the ban’s actual effectiveness. Platforms have revealed scant details about their enforcement methodologies, performance indicators, or the nature of suspended accounts. This opacity makes it challenging for regulators and the wider public to evaluate whether the ban is operating as planned or whether teenagers are just locating other methods to use social media. The Commissioner’s demand for thorough documentation of systematic compliance measures reflects growing frustration with platforms’ reluctance to provide complete details.
Sector Reaction and Opposition
The major tech platforms have responded to the regulator’s enforcement action with a mixture of compliance assurances and scepticism about the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its commitment to complying with Australian law whilst simultaneously arguing that accurate age determination continues to be a significant industry-wide challenge. The company has advocated for an alternative strategy, proposing that robust age verification and parental approval mechanisms put in place at the app store level would be more efficient than enforcement at the platform level. This stance reflects broader industry concerns that the current regulatory framework puts an impractical burden on individual platforms.
Snap, the creator of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts following the ban’s implementation and claiming to continue locking more daily. However, industry observers dispute whether such figures reflect authentic compliance or merely reactive account management. The fundamental tension between platforms’ commercial structures—which historically relied on maximising user engagement and growth—and the regulatory requirement to systematically remove a whole age group remains unresolved. Companies have long resisted stringent age verification, citing privacy concerns and technical limitations, creating a standoff between authorities and platforms over who carries responsibility for execution.
- Meta contends age verification ought to take place at app store level rather than on individual platforms
- Snap says it has locked 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy concerns and technical obstacles as barriers to effective age verification
- Platforms contend they are doing their best whilst challenging the ban’s overall effectiveness
Wider Questions About the Ban’s Efficacy
As Australia’s under-16 online platform ban moves into its implementation stage, key concerns remain about whether the legislation will accomplish its stated objectives or merely drive young users towards less regulated platforms. The regulatory authority’s first compliance report reveals that substantial gaps remain—children continue finding ways to bypass age verification mechanisms, and platforms have struggled to stop new underage accounts from being created. Critics argue that the ban’s success depends not merely on regulatory oversight but on whether young people will truly leave major social networks or simply shift towards alternative services, encrypted messaging applications, or virtual private networks designed to conceal their age and location.
The ban’s international ramifications add another layer of complexity to assessments of its impact. Countries including the United Kingdom, Canada, and multiple European nations are observing Australia’s initiative closely, considering similar laws for their own citizens. If the ban proves ineffective at reducing children’s social media usage or fails to protect them from dangerous online content, it could undermine the case for comparable regulations elsewhere. Conversely, if enforcement becomes sufficiently rigorous to genuinely restrict underage usage, it may inspire other governments to pursue similar approaches. The result will probably shape worldwide regulatory patterns for many years ahead, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health advocates and child safety organisations have backed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators contend that taking young Australians off platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and limit exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people—maintaining friendships, accessing educational content, and engaging with online communities around common interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families question.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously employed effectively. Meanwhile, the ban inadvertently advantages large technology companies with the resources to develop age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact extends well beyond the simple goal of child protection.
What Follows for Regulatory Action
Australia’s eSafety Commissioner has indicated a significant shift from passive monitoring to direct intervention, marking a key milestone in the enforcement of the age restriction. The regulator will now compile information to determine whether services have neglected to implement “reasonable steps” to block minors from using their platforms, a regulatory requirement that extends beyond simply recording that children remain on these services. This strategy requires tangible verification that companies have established appropriate systems and procedures meant to keep out minors. The Commissioner’s office has signalled it will conduct enquiries systematically, building cases that could lead to substantial penalties for non-compliance. This move from monitoring to action reveals increasing dissatisfaction with the companies’ present approach and suggests that voluntary cooperation alone is insufficient.
The rollout phase raises critical questions about the sufficiency of sanctions and the operational systems for ensuring platform accountability. Australia’s legislation provides enforcement instruments, but their efficacy hinges on the eSafety Commissioner’s readiness to undertake formal action and the platforms’ ability to adapt substantively. International observers, particularly regulators in Britain and Europe, will closely monitor Australia’s enforcement strategy and outcomes. A robust enforcement effort could establish a blueprint for other countries contemplating equivalent prohibitions, whilst shortcomings might weaken the overall legislative structure. The next phase will determine whether Australia’s pioneering regulatory approach produces real safeguards for young people or proves largely performative in its effect.

