Australia's world-first ban on social media for under-16s sounded decisive when it kicked in on 10 December 2025. Platforms like Facebook, Instagram, TikTok, Snapchat, and YouTube were told to take "reasonable steps" to block accounts for anyone younger than 16. The goal was clear and genuinely compassionate: protect developing brains from predatory algorithms, cyberbullying, body-image distortion, exposure to explicit content, and the relentless dopamine loops that fuel addiction and anxiety.

The eSafety Commissioner's March 2026 compliance update report tells a more sobering story. Major platforms are under investigation for poor practices. Over five million accounts suspected of belonging to under-16s have been removed or restricted, but the report reveals systemic gaps: platforms prompting kids to keep trying age-assurance checks until they pass, weak or inaccessible reporting tools for parents, and insufficient barriers to new account creation. A parent survey showed roughly one in three children still had accounts shortly after the ban. Meanwhile, downloads of alternative apps (RedNote, Lemon8, and others) spiked as tech-savvy teens simply migrated elsewhere.

This is not surprising. It echoes the failed alcohol Prohibition experiment of the early 20th century — a well-intentioned ban on a harmful substance that proved almost impossible to enforce without addressing the underlying demand and the ingenuity of suppliers. From 1920 to 1933, America's nationwide ban on liquor didn't eliminate drinking; it drove it underground, enriched criminal networks, corrupted enforcement, and left society with the same social ills plus new ones. Australia's social media ban risks a similar fate in the digital age.

Enforcement Challenges Are Predictable — and Structural

Social media companies face fines of up to $49.5 million for serious breaches, yet the law relies on them policing themselves with "reasonable steps." Age-assurance technology is imperfect — facial analysis, ID uploads, and behavioural checks all have error rates that clever (or desperate) teenagers can game. Platforms have been accused of allowing repeated attempts at verification and failing to make it easy for parents to report violations. Some even nudge users to keep trying after they declare themselves underage.

The landscape shifts constantly. Kids move to fringe platforms, use VPNs, borrow older siblings' devices, or create fake profiles. Gaming apps and messaging services often fall outside the strictest rules. Roblox was initially in the crosshairs but later exempted (now under review again over grooming concerns). The ban cannot realistically cover every emerging app, and self-reporting by companies leaves obvious loopholes.

Account removals look impressive on paper — over five million in the first few months — but they don't reveal how many new accounts are being created daily or how much activity has simply shifted to unregulated spaces. A ban enforced mainly through corporate compliance in a borderless internet is like trying to dam a river with sandbags while the water finds every crack.

The Deeper Flaw: It Doesn't Fix the Addictive Design

Here is the ban's most serious limitation, as critics (including some experts quoted in coverage) have noted from the start. Even if perfectly enforced, simply raising the age gate does not address the core harms baked into the platforms' design: infinite scroll, algorithmic recommendation engines optimised for maximum engagement, likes and notifications that trigger social comparison and anxiety, disappearing stories that create urgency, and content feeds that push extreme material to keep users hooked.

Recent government moves acknowledge this by expanding the ban's scope to target platforms with "addictive or otherwise harmful design features." Yet meaningful reform here requires a broader "digital duty of care" — forcing companies to redesign products to be less manipulative, especially for young users. That work is still in the consultation phase and faces fierce resistance from an industry that profits enormously from addiction-like behaviour.

Banning access for under-16s treats the symptom (kids on the platforms) while leaving the addictive product largely unchanged. It is akin to banning teenagers from pubs without addressing how alcohol is marketed, formulated for palatability, or sold cheaply in supermarkets. The temptation and the engineered craving remain.

Social and Cultural Consequences

The vague, hard-to-pinpoint anxiety many young people (and parents) feel today is partly fuelled by this environment. Constant comparison, sleep disruption, cyberbullying, and exposure to idealised or harmful content erode mental resilience. Studies link heavy social media use in teens to higher rates of depression, anxiety, body dissatisfaction, and attention issues. In a society already wrestling with social entropy — declining family formation, weakened community ties, and civilisational self-doubt — addictive digital platforms act as both escape and accelerant.

A flawed ban risks several outcomes:

Displacement, not reduction: Teens migrate to less moderated spaces where risks may be higher.

Erosion of trust: When laws look tough but prove porous, cynicism grows among parents and youth alike.

Missed opportunity: Focus on age restrictions can distract from harder but more effective measures — parental responsibility, school-based digital literacy, family meal habits without screens, and pressure on platforms to redesign for human flourishing rather than engagement metrics.

Uneven impact: Tech-savvy or determined kids bypass it easily, while stricter families bear the compliance burden.

A Better Path Forward

The impulse behind the ban is right: children's developing brains need protection from industrial-scale manipulation. But good intentions require realistic execution.

Real progress demands a multi-layered approach:

Stronger incentives (and penalties) for platforms to redesign addictive features — limit infinite scroll for younger users, reduce algorithmic amplification of harmful content, and make "time well spent" the default.

Cultural and family-level renewal: Parents reclaiming authority over devices, prioritising real-world activity, face-to-face relationships, and healthy nutrition (real food over dopamine snacks, as discussed by Mrs. Vera West).

Honest education about the trade-offs of digital life without moral panic or naive optimism.

Selective, enforceable rules that target the worst abuses rather than pretending a blanket age ban can magically restore childhood.

Like Prohibition, the under-16 social media ban looks decisive on paper but exposes the limits of top-down prohibition in the face of powerful incentives, human ingenuity, and addictive product design. It has achieved some short-term disruption and public awareness, but nearly four months in, compliance is patchy and the deeper problems persist.

We need to move beyond symbolic bans toward addressing why these platforms are so compelling in the first place — and why so many young people, amid vague background anxiety and social fragmentation, turn to them for connection, validation, or escape. Protecting the next generation requires more than age gates. It demands reforming the product, strengthening the family, and recovering the civilisational confidence to raise children who can navigate the digital world without being consumed by it.

The ban was a bold first step. Its evident shortcomings should now push us toward more honest, comprehensive solutions rather than doubling down on enforcement theatre. Our kids — and the future health of Australian society — deserve better than "Frankenstein" algorithms dressed up as harmless entertainment.

https://theconversation.com/social-media-giants-are-not-complying-with-under-16s-social-media-ban-new-report-finds-279555