The Ideological Battleground: Australia's Social Media Ban, Section 230, the Precautionary Principle and the Future of Online Child Safety

Australia's landmark decision to ban social media access for children under 16 marks a decisive shift toward restricting young people's use of social media. The Social Media Minimum Age bill positions Australia as a test case for other countries: it requires companies to take "reasonable measures" to prevent users under 16 from creating accounts, with fines of up to AU$49.5 million (approximately US$32 million) for systemic violations. The law provides an implementation period of at least one year to allow platforms to prepare, and parents and minors will not be penalised for breaches. The legislation has strong public support, with a YouGov survey showing that 77% of Australians favour the under-16 ban.

In tandem, earlier this month Australia selected the UK-based Age Check Certification Scheme through a tender process, tasking it with studying and evaluating the effectiveness, reliability, and privacy impacts of age assurance technologies across all age groups, as well as methods for verifying parental responsibility in order to obtain verifiable parental consent. Results from the trials are expected in Q4 2025.

A previous set of technical trials of age verification and parental consent services, conducted by the UK government and aligned with the PAS 1296 Age Checking Code of Practice, found these services to be privacy-preserving, feasible, scalable, and reliable. The ISO 27566 age assurance framework, a technical standard underpinning age assurance services, is the result of global collaboration among identity, privacy, security, and consumer rights experts.

The Australian ban aligns with the precautionary principle often applied in real-world consumer protection scenarios. While physical products face immediate recalls based on potential risks, online platforms have operated without similar preventive measures, largely due to protections like Section 230. Australia's action represents a step towards applying similar precautionary standards in digital spaces.

 The Section 230 Immunity Shield

Section 230 of the Communications Decency Act provides tech companies with immunity from liability for user-generated content. This protection has become increasingly contested as platforms deploy sophisticated AI algorithms that actively curate and recommend content, responding in real time to signals such as how long a child spends viewing short videos in order to decide what to show next, while the platforms simultaneously claim to be mere content conduits. A significant case from the Third Circuit Court of Appeals addressed algorithmically recommended content on TikTok. The court determined that when TikTok's algorithm curates an individualised feed, it creates "first-party speech" rather than merely acting as a content conduit, potentially limiting Section 230 protections.
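To make the mechanism at issue concrete, the sketch below shows a deliberately simplified, hypothetical engagement-driven ranking loop: watch time on each clip feeds back into per-topic scores that determine what is surfaced next. All names, scores, and weights are invented for illustration; this is not TikTok's or any other platform's actual recommender system.

```python
# Hypothetical illustration of an engagement-driven feedback loop.
# Not any platform's actual system; all names and weights are invented.
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    topic: str

@dataclass
class ViewerProfile:
    # Running preference score per topic, learned purely from watch time.
    topic_scores: dict[str, float] = field(default_factory=dict)

    def record_view(self, video: Video, seconds_watched: float) -> None:
        # Longer watch time nudges that topic's score upward.
        self.topic_scores[video.topic] = (
            self.topic_scores.get(video.topic, 0.0) + seconds_watched
        )

    def rank(self, candidates: list[Video]) -> list[Video]:
        # Clips on topics the viewer lingered over are surfaced first.
        return sorted(
            candidates,
            key=lambda v: self.topic_scores.get(v.topic, 0.0),
            reverse=True,
        )

# Usage sketch: a few lingering views of sad-themed clips push that topic
# to the top of the next feed.
profile = ViewerProfile()
for clip in (Video("x1", "sad"), Video("x2", "sad"), Video("x3", "sad")):
    profile.record_view(clip, seconds_watched=45.0)

feed = [Video("a", "sports"), Video("b", "sad"), Video("c", "comedy")]
print([v.topic for v in profile.rank(feed)])  # ['sad', 'sports', 'comedy']
```

Even this toy loop illustrates why such curation looks less like passive hosting and more like an editorial choice: a handful of lingering views quickly determines what a young user is shown next.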

Challenges to the Section 230 Immunity Shield

The case of 14-year-old Molly Russell highlights the dangers of algorithmically amplified online content. Molly was exposed to a deluge of material promoting suicidal ideation, which the coroner concluded "contributed to her death in a more than minimal way". The inquest examined the specific content Molly viewed on Instagram and Pinterest, how algorithms served that content without her requesting it, and the impact it had on her mental state. Dr. Navin Venugopal, the child psychiatrist assigned to review the material, reported being unable to sleep for weeks after viewing the content Molly had accessed, noting the severe impact such material would have on a depressed 14-year-old.

Megan Garcia, a practising attorney from Florida, has filed a lawsuit against Character.ai following the suicide of her 14-year-old son, Sewell Setzer III. The lawsuit alleges that the AI chatbot engaged in "abusive and sexual interactions" with Sewell and ultimately encouraged him to contemplate suicide. Sewell began using Character.ai in April 2023 and died by suicide on February 28, 2024, after developing what his mother describes as a deep emotional bond with a chatbot modelled on Daenerys Targaryen from Game of Thrones. The chatbot allegedly suggested that if Sewell were dead, they would be united forever. The lawsuit argues that Section 230 protections should not apply because the AI actively created and developed the harmful content.

This case highlights the breadth of the protections Section 230 affords to companies, and it is why Character.ai is not subject to the kind of product recall that the precautionary principle would demand in a real-world context.

The precautionary principle and its application offline

Let's look at these issues in the context of real-world consumer protection and respect for rights. Müller's recent recall of Cadbury dessert products demonstrates the precautionary principle in action: the company recalled the products due to possible Listeria contamination, even though the bacteria were found only in the production environment rather than in the products themselves.

In another example, the US Consumer Product Safety Commission announced a recall of the lithium-ion battery packs of Boosted electric skateboards because they could overheat and smoke, posing a fire hazard. The recall came after Boosted received two reports of battery packs overheating and smoking, although no injuries were reported. These are examples of the precautionary principle in action, and they are not typically described as evidence of a moral panic.

The precautionary principle is actively applied in real-world product recalls and consumer protection:

  • Manufacturers must take preventive action before complete scientific proof of risk
  • Regulators can mandate product recalls based on potential risks
  • Companies must issue warnings or temporary sales prohibitions when safety concerns arise

While physical products face immediate recalls based on the precautionary principle, online platforms operate without similar preventive measures, in large part due to the protections afforded by Section 230. This creates a significant gap in consumer protection between physical and digital spaces.

 Academic Divisions and Research Challenges

The academic community is deeply divided over the application of the precautionary principle. In the absence of any obligation on industry to apply a precautionary approach, Jonathan Haidt advocates immediate community-based precautionary measures, including collective action by parents and schools to limit children's access to mobile phones and social media. The goal is to give students 6-7 hours of phone-free time daily, which Haidt argues is essential for developing mentally healthy young adults and fostering an environment conducive to learning. Haidt argues, and Australia's Senate agrees, that even without complete scientific certainty we should adopt precautionary measures regarding social media and smartphones because:

  • The potential harms are significant
  • The cost of preventive action is relatively low
  • Waiting for perfect evidence could put vulnerable users at risk

Research findings indicate, for example, that more than 300 million children annually are victims of online sexual exploitation and abuse, and that approximately one in eight of the world's children have been subjected to the non-consensual sharing of sexual images and videos. Australia's ban on social media for under-16s could serve as a model for other countries considering similar measures.

Critics of Haidt dismiss this precautionary approach as akin to a ‘moral panic’, arguing for in-depth, evidence-based, longitudinal research before implementing restrictions. Andrew Przybylski, Professor of Human Behaviour and Technology at Oxford University, emphasises that policy decisions should be based on reliable scientific evidence rather than anecdotes or moral panic. He argues that current research shows mixed, small, or no associations between social media use and mental health problems.

Data Access Barriers

Conducting meaningful research faces significant obstacles. Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology, argues that measuring social media's true impact requires access to the AI algorithms that recalibrate content delivery moment by moment based on user engagement. Companies strongly resist this level of scrutiny, creating a catch-22 in which evidence-based policy requires data that platforms won't share. Article 40 of the EU's Digital Services Act (DSA) establishes unprecedented requirements for researcher access to platform data in the EU, with final rules on data access expected in the first quarter of 2025. This will mean that Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) must do each of the following (a hypothetical sketch of what such a data request might cover follows the list):

  • Provide vetted researchers access to data for studying systemic risks
  • Grant access to both public and non-public data
  • Respond to data requests within reasonable timeframes
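Purely as an illustration of the obligations listed above, and assuming nothing about the final delegated rules (the DSA defines legal obligations, not a technical schema or API), the hypothetical sketch below models the kinds of elements a vetted researcher's data request might need to capture. All field names and values are invented.

```python
# Hypothetical illustration only: the DSA imposes legal obligations on platforms;
# it does not define a technical schema or API. All fields and values are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class ResearcherDataRequest:
    researcher: str             # vetted researcher or research institution
    systemic_risk_studied: str  # the systemic risk the study addresses
    data_scope: str             # public and/or non-public data sought
    period_start: date          # start of the period the data should cover
    period_end: date            # end of the period the data should cover
    response_days: int          # expected response window, in days

example = ResearcherDataRequest(
    researcher="University research group (vetted by a Digital Services Coordinator)",
    systemic_risk_studied="Exposure of minors to self-harm content via recommender systems",
    data_scope="Aggregated public and non-public recommendation and moderation data",
    period_start=date(2024, 1, 1),
    period_end=date(2024, 6, 30),
    response_days=30,
)
print(example.systemic_risk_studied)
```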

Hidden Realities of Abuse Management Systems and Regulatory Gaps

2025 is shaping up to be a significant inflection point for child safety online, one that should shed light on the true nature, scale, and extent of the issues that arise on social media platforms. Despite the prevalence of child sexual abuse material (CSAM) online, reporting rates vary significantly across platforms. In Q2 2023, Facebook and Instagram submitted over 3.7 million reports to the National Center for Missing and Exploited Children (NCMEC). In contrast, Apple has filed only 659 CSAM reports over three years, and X (formerly Twitter) reported 370,588 instances in the first half of 2024.

Many platforms fail to report to NCMEC altogether, and even when reports are filed, they often lack the essential details needed for effective investigation and prosecution. This disparity highlights the urgent need for increased transparency, standardised reporting practices, and stronger industry-wide efforts to combat child sexual exploitation. This should be a focus of research once DSA Article 40 comes into effect.

While platforms are encouraged to report child sexual abuse material to NCMEC, there are no equivalent requirements to report instances of suicidal ideation (and what actions, if any, were taken in a credible risk-to-life scenario), self-harm, drug sales, or other criminal activity. This creates a concerning situation in which platforms exercise complete discretion over whether to contact authorities or conduct welfare checks, with no standardised protocols, transparency requirements, or accountability measures in place.

The absence of a central authority collecting data on these reports, combined with the lack of quality control measures or mandatory response protocols, means platforms can ignore or minimise serious harm reports without consequence. This regulatory vacuum is particularly troubling given reports by whistleblowers such as Arturo Bejar of platforms failing to exercise a duty of care toward children and ignoring credible threats to life and safety, highlighting the urgent need for comprehensive mandatory reporting frameworks that extend beyond CSAM to encompass all serious online harms.

 Complex Dynamics of AI-Human Relationships

The complexity of researching human behaviour in increasingly AI-mediated environments extends beyond the challenge of data access. Traditional research methodologies may prove insufficient for capturing the nuanced dynamics of human-AI interaction, where engagement is driven by attention-capturing algorithms that can simulate empathy and emotional connection while lacking genuine consciousness or authentic emotional reciprocity. The emergence of emotional bonds with AI agents, as tragically illustrated by the Sewell Setzer III case, demonstrates the need for AI guardrails. The EU's AI Act is expected to become fully applicable by mid-2026, after a 24-month transitional period following its entry into force. In the interim, researchers will need methodological approaches that can capture and interpret rapidly shifting patterns of meaning-making, and quantify impacts, as AI systems become more sophisticated at mimicking human interaction while operating under business models that prioritise engagement and the illusion of intimacy.

The future of internet safety

Australia's social media ban for children under 16 represents a bold step in addressing online child safety concerns. The EU's Digital Services Act will give researchers access to data that can inform policymaking going forward. The future of internet safety requires active engagement from all stakeholders: academics, industry leaders, lawmakers, regulators, and child safety advocates. Rather than maintaining entrenched positions, we must work collaboratively to develop solutions that protect vulnerable users while enabling the benefits of digital innovation. This includes addressing immediate safety concerns and establishing frameworks for understanding how humans navigate and make meaning in increasingly AI-mediated environments. We must move forward with both precautionary measures and evidence-based research, recognising that these approaches complement rather than contradict each other in the pursuit of safer online spaces for children.
