How we can actually keep kids safe online

I'm reminded of the old saying: if you keep doing what you do, you'll keep getting what you get.

Australia's online safety regime, and in fact most of the world's approach to online safety, does just that. The approach seems to be: threaten the large social and porn players with regulation, publicly shame them, and encourage all online platforms to think 'safety first'.

Don't get me wrong. Governments and regulators have the right intent and are working hard on this issue. It's just that online safety is a complex challenge with commercial, competition, technical and behavioural dimensions.

If we truly care about our kids, then we and they deserve an approach grounded in reality.

We must be realistic.

  • The world's millions of online platforms cannot be controlled from Australia
  • The money behind big tech, and now AI, does not care about our kids
  • Children have the time, capability and motivation to hack ill-conceived safety techniques
  • The only way to control what a child is doing online is to control the device they are using
  • We can't rely on big tech to do the right thing, or on regulators to keep up with the technology

The purpose of this post is to get real. Our supporters and I hope to direct discussion toward what is realistic and what will deliver results.

Supporters of this post

I am delighted that leading voices in online safety in Australia support the views and recommendations herein. Each day this group operates at the coal face of online safety. We know the technology and the reality of online safety. We know what works and what will not.

Ativion has been in cyber safety and security in education for more than 40 years. Ativion consists of ContentKeeper, which delivers web filtering and cybersecurity solutions for 12 million students worldwide, and Impero, which provides classroom management and well-being tools across 90 countries, supporting more than 2 million students.

eSafeKids is a social enterprise founded by Kayelene Kerr. Kayelene is recognised as one of Western Australia's most experienced specialist providers of Protective Behaviours, Body Safety, Cyber Safety, Digital Wellness and Pornography education workshops.

Founded by Kirra Pendergast, Safe on Social has offices in Sydney, Brisbane, London, New York and Florence. Under Kirra's forward-thinking leadership, Safe on Social has become the leading global privately-owned cyber safety education and consulting provider.

Surf Online Safe is WA's leading educator on online safety. SOS was founded by 2022 WA Australian of the Year Paul Litherland. Paul is a former Police Officer, leading legislative advocate and author. Paul is an ambassador for Auspire and Zonta House and a mentor at Emmanuel Catholic.

Jocelyn Brewer is a multi-passionate Sydney-based registered psychologist, educator and researcher with a special interest in cyberpsychology and digital wellbeing, and a flair for communicating modern issues in an accessible, practical way. She is the founder of Digital Nutrition™ by Jocelyn Brewer, a positive technology-use philosophy and education resource.

Rachel Downie is the 2020 QLD Australian of the Year and founder of Stymie, an innovative platform designed to combat bullying and support student well-being. Recognised for her dedication to creating safer school environments, Downie has been a leading advocate for empowering students to anonymously report incidents of bullying, harm or distress.

The evidence is in

It’s fair to say that the way our kids are engaging with technology is causing harm. This isn't to say that all technology is harmful, and we can't deny its importance for many, particularly those on the fringes.

But we cannot ignore the horrifying incidents and anecdotes, or the clear correlation between mental health issues and ubiquitous access to smart devices.

Before we dive into this, let’s put one thing into perspective. No online safety measure or regime can eliminate all risk.

However, the reality of tech use today is that inappropriate exposures, toxicity and harms have become the norm. We need to, and I believe we can, return them to being the exception.

So let’s fix it!

For an issue as important as this, it is tempting to seek a simple solution (e.g. ban phones at school) or to blame a person (e.g. Elon Musk) or group (e.g. social media).

We’re seeing this play out as governments and regulators around the world turn their sights on access to porn and social media and propose mandatory age-verification gates.

It sounds simple. Let’s force these big companies to check ID or age on entry. It works for cigarettes. Why not for porn and social media?

The problem is the internet doesn’t work like the real world.

Age verification is predicated on unlikely and erroneous assumptions.

Firstly, such a regime can only focus on a small number of online platforms. The reality is that toxicity and misbehaviour occur far beyond the mainstream porn and social platforms. And should age verification impede the commercial success of any of these platforms, new providers will emerge, and do so swiftly. Age verification exemplifies the failing "whack-a-mole" approach to online safety inherent in Australia's Online Safety Act and regime.

Secondly, teenagers find it trivial to bypass age verification through VPNs or the bio-hacks being unleashed by the generative-AI revolution. Where such measures have been trialled, the use of hacks skyrockets, and kids are driven more quickly into invisible platforms, the dark web and peer-to-peer social networks. This is terrifying.

Thirdly, young children mostly access adult content outside of the major porn sites. Age verification does not deal with search previews, message sharing, content shared on social and gaming platforms, or inadvertent access through shared and parent devices (which will carry verification tokens).

And lastly, it is ambitious to believe community support will be there for a measure that not only impacts far more adults than actually have children, but also raises concerns around privacy and tracking.

Why won’t age verification work? A useful metaphor.

Online safety is not like real-world safety. Calls to impose age verification on major platforms make that mistake.

Consider nightclubs where responsible service regulations require door staff to check ID and bouncers to moderate behaviour in the venue. In the internet version of this scenario:

  • there are infinite nightclubs, most outside of the reach of regulations;
  • minors can easily acquire fake ID (e.g. via credit cards, their parents' or shared devices, or biometric hacks); and
  • back doors are wide open anyway (via VPNs to bypass geo-fencing).

Attempts to control “key platforms” will not achieve policy outcomes

Much of today’s discussion (and, incidentally, Australia’s Online Safety Act) is based on the premise that law and regulator activity can achieve their aims by focusing on the major platforms. This is not possible. The platforms' incentives are misaligned, and there are inherent limits to what they can do and to the reach of our regulations.

1 Resistance of the platforms to commercial disadvantage

Imposing safety requirements on larger platforms (say those of Aylo, Meta and ByteDance) exposes those platforms to competitive threats from less visible or less scrupulous providers.

This is a massive problem.

The major platforms will feel they must, and will, continue to push back on efforts to make their platforms safe if those efforts put them at a commercial disadvantage.

For example

  • When Yahoo acquired Tumblr in 2013, the platform was thriving. In 2018 (by then under Verizon ownership), Tumblr implemented a ban on adult content, impacting the communities that relied on it for expression and connection. The impact of the ban was immediate and severe. Within a month the platform lost 30% of its users, which had a devastating effect on its viability. By 2019, Tumblr's value had plummeted, and it was sold for just $3 million, a stark contrast to its $1.1 billion valuation at the time of Yahoo's acquisition.
  • The decline of Tumblr serves as a significant case study on how changes to content policies can dramatically impact a social media platform's user base and business model.

All online platforms and their financial backers are aware of the fatal pitfall of moderation.

Resistance by the social platforms to government pressure comes in many forms, including PR advocacy and half-hearted safety measures.

For example

  • Safety on TikTok has improved enormously in recent years as its profile has increased. However, TikTok's parental controls can easily be avoided by creating new accounts or by sharing videos on the web (outside TikTok's app controls). This no doubt serves to protect TikTok's virality.

[Image: web access to shared TikTok videos, with no restrictions applied. This highlights the deliberate gaps in safety mechanisms that protect virality, engagement and underlying business models.]

2 Children find it trivial to bypass geo-fences

Geo-location-based approaches to enforcing restrictions (e.g. age verification) can easily be subverted by technology that emulates access from another location. This is typically done with a VPN app (mobile or desktop).

And so, if Australia imposes a geo-based restriction, any moderately determined child will be able to VPN into a non-compliant jurisdiction.
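
To see why, here is a minimal sketch (in Python, with a toy stand-in for a GeoIP database) of how a geo-fenced age gate works: the server can only act on the IP address the connection arrives from, so a VPN exit node in another country makes the gate invisible.

```python
# Sketch of a geo-fenced age gate. `country_of_ip` is a hypothetical
# helper standing in for a real GeoIP database lookup.

RESTRICTED_COUNTRIES = {"AU"}  # jurisdictions where the AV law applies

def country_of_ip(ip: str) -> str:
    """Hypothetical GeoIP lookup returning an ISO country code."""
    geo_db = {"203.0.113.7": "AU", "198.51.100.9": "NL"}  # toy data
    return geo_db.get(ip, "??")

def requires_age_verification(client_ip: str) -> bool:
    # The server only ever sees the IP the connection arrives from.
    return country_of_ip(client_ip) in RESTRICTED_COUNTRIES

# Direct connection from an Australian home: the gate applies.
print(requires_age_verification("203.0.113.7"))   # True

# Same child, routed via a VPN exit node in the Netherlands: the server
# sees the exit node's address and the gate never triggers.
print(requires_age_verification("198.51.100.9"))  # False
```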


For example

  • According to NordVPN, 68% of Australians know what a VPN is and 32% use one.
  • The introduction of age verification in Utah and Louisiana generated an almost 1,000% increase in the use of VPNs to bypass the restrictions.
  • A number of popular browsers include VPNs or easy add-ons. This includes Firefox (with the Mozilla VPN), Opera, AVAST, Tor (which also accesses the dark web) and Brave. 
  • Apple's iCloud Private Relay (launched in 2021) now embeds an optional VPN-like service inside Apple's operating systems.

3 Today's 'whack-a-mole' approach is ineffective

Australia’s online safety regime is built on the Online Safety Act and the associated Basic Online Safety Expectations and industry codes and standards. These ancillary measures seek to lift the standards of online platforms.

For reasons of practicality, the expectations within industry codes and standards are graduated based on assessments of each platform's scale and impact.

Inherent in all of this is the assumption that a focus on the major platforms will change their behaviour and make a substantive impact on online safety.

It would be difficult to find evidence of any substantive improvements in the behaviour of the major online platforms since the 2021 Online Safety Act came into law. 

The eSafety Commissioner's abandoned action against X in relation to the Wakeley church stabbing highlights the immense difficulty of regulating the internet from Australia.

4 Concerning behaviours also exist and are growing in gaming platforms

Proposals to target the “large” social platforms ignore the pressing issue of growing use of, and toxic behaviour inside, gaming platforms. Gaming platforms exhibit many of the characteristics of social media platforms, i.e. the ability to communicate and share content one-to-one or in groups.

For example

  • The eSafety Commissioner reports that over 60% of Australian children participate in online gaming, and that 51% of teen gamers have had a negative experience and/or been exposed to potentially harmful content (e.g. hate speech, misogynistic ideas, violent content) while gaming.

5 Toxicity will move to riskier platforms 

Proposals to target “large” social platforms will unquestionably result in toxic content and behaviour moving to less scrupulous or less visible platforms. 

Despite popular views, in our experience the “big social platforms” are the most responsible. Of course they can do better. However, the riskiest content we encounter is accessed outside the larger platforms, and this will accelerate under the suggested proposals.

For example

  • As discussed above, when Tumblr banned adult content in 2018 it had a devastating effect on the platform's viability, and in 2019 Tumblr was sold for just $3 million. Importantly, destroying Tumblr did not destroy the (mis)behaviour of its users. It moved to other platforms.
  • The school safety tech industry saw explosive growth in the use of Omegle following TikTok’s imposition of new safety settings post-COVID. 
  • In an effort to avoid porn blocking on school networks and devices, students are increasingly distributing pornography via collaboration platforms (e.g. cloud drives) or by creating custom websites unknown to traditional URL blockers.

A great risk in an ill-conceived policy like the presently considered age-verification regime is that users move deeper underground (e.g. accessing the dark web through the Tor Browser).

6 Stimulating the even greater risk of peer-to-peer social media

As troublesome as the well-known social media platforms are, at least they have a governance structure.

There are active efforts to build entirely decentralised social media services. These are known as peer-to-peer (P2P) social media. P2P social media aims to address issues such as privacy, data security, and censorship by decentralising data storage and control.


For example

  • Mastodon operates as a federated social network. Instead of a single server, Mastodon is made up of numerous independent communities (or "instances") that can interact with each other.
  • Scuttlebutt is a social media network that uses a completely decentralised protocol for building applications. It focuses on local-first technology, allowing users to interact without requiring continuous internet connectivity. Data is stored on users' devices and shared peer-to-peer, ensuring privacy and resilience against censorship.
  • Diaspora is a decentralised social network where users can host their own server ("pod") and connect with other pods. This network design prioritises user control over their data and provides an alternative to traditional platforms like Facebook.

P2P platforms are gaining traction among users who are concerned about privacy, data ownership, and corporate control of social media. 

P2P social media platforms will not be able to comply with regulations on data protection or content moderation. They will not be able to enforce age restrictions, prevent illegal activities or adhere to jurisdiction-specific laws.

Age verification is unworkable

For many, age verification is a tempting solution both to porn and to lifting the age of social media access, because it sounds simple. It isn't.

A range of age assurance and age-verification techniques have been proposed and trialled across the globe, including:

  • Cloud age verification: requiring users to sign in with digital IDs when they access adult sites; or
  • Cloud age assurance: requiring online platforms to implement age-assurance technology (e.g. behavioural or biometric scanning); or
  • On-device verification: requiring online platforms to participate in an age-verification regime whereby a verification event (e.g. via a provider or social platform) results in a “token” being cached on a user's device for use in later platform entry checks (sketched below).
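
To make the third, on-device token model concrete, here is a minimal Python sketch of the cache-and-reuse flow. The structure and the 90-day lifetime are assumptions for illustration, not taken from any published specification. The final line shows the shared-device weakness discussed later: one verified adult unlocks the device for every user of it.

```python
import time

# Sketch of the on-device AV token flow. All names are illustrative.
TOKEN_TTL = 90 * 24 * 3600           # assume tokens stay valid for 90 days

device_cache: dict[str, float] = {}  # device_id -> time the token was issued

def verify_and_cache(device_id: str) -> None:
    """One successful adult age check caches a token on the device."""
    device_cache[device_id] = time.time()

def platform_entry_check(device_id: str) -> bool:
    """Later platform entry checks consult only the cached token."""
    issued = device_cache.get(device_id)
    return issued is not None and time.time() - issued < TOKEN_TTL

verify_and_cache("family-ipad")             # a parent verifies once
print(platform_entry_check("family-ipad"))  # True, for *any* user of the device
```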

It is important to highlight that any age-assurance or AV model will be impeded by:

  1. A focus on the ‘larger’ platforms (thus missing or mutating online safety challenges); and
  2. A lack of a global mandate for geo-IP-based enforcement (thus enabling bypass via VPNs).

Indeed, the eSafety Commissioner identified in the Roadmap for Age Verification that teenagers will find methods to avoid AV. The same goes for determined children seeking to access social media.

But for young kids’ inadvertent access to porn, AV is often positioned as an important and impactful measure. That is true, but it depends on the method.

The eSafety Commissioner recommends a tokenised approach to AV. It is convenient because adults can be verified once per device and thus won’t need to verify themselves on each online platform. 

However, it also means verified devices in the home become a risk for children, exactly as unprotected devices are today. This issue will be exacerbated if age verification is extended to social media.

There are many other challenges with AV and indeed these were mostly identified in the eSafety Commissioner’s roadmap. The following highlights some of them.

1 Users can avoid geo-fencing easily

As described above, any geo-location-applied restriction (e.g. age verification applied in Australia alone) is easy to avoid, and it's getting easier. VPNs are the tool of choice, available as downloadable apps and inside browsers. We’d expect new techniques to evolve quickly based on demand.

2 Biometric-avoidance is becoming increasingly easy

With the increasing use of age-detection technologies, ‘the internet’ is developing clever ways to bypass them. Facial recognition is often cited as the gold standard for user identification; however, AI is fast threatening this capability.

For example

  • A number of talking-avatar and facial-animation applications now exist that can be used to trick facial recognition. These apps can animate a still picture of a person's face and run against a PC's camera driver, allowing emulation of video capture.
  • The rapid advancement of AI has introduced vulnerabilities to age verification and a vast array of effective obfuscation techniques are available and easy to find online. 

3 Imposing costs on all adults is manifestly unfair

We estimate that of Australia's roughly 20 million adults, fewer than 4.5 million have children under 16. That means perhaps 80% of the adults impacted by age-verification requirements have no stake in the policy (i.e. no child they're trying to protect from social media).

We’d argue it is not good policy to impose costs (or effort) on those not associated with a community concern or risk. Proposed age-verification techniques impose an effort burden on all adults. 

For example

  • Consider how challenging adults find managing online passwords and identity certificates (e.g. for government websites and services), and then consider what happens when mobile phones are lost and replaced. This will be part of the expected chaos of age verification, and it will drive workarounds.

4 Community trust is a substantial challenge

Proposed age-verification techniques expect that adults will be willing to trust social, gaming or identity verification platforms. 

In our view this is not likely; our expectation is that adults will swiftly move to VPNs or similar techniques that seamlessly, and by default, bypass geo-based government restrictions.

For example

  • The eSafety Commissioner’s Age Verification Roadmap even references the relationship between this work and the Government's work on digital identity. This will only magnify community concern and affect the speed of implementation.
  • A study by the eSafety Commissioner in 2021 found low awareness of ‘age verification’ and scepticism about how the technology would work in practice.
  • A study by Ofcom found broad support for age-verification measures; however, adults have serious concerns about how user data may be processed and/or stored.

5 AV won’t block search previews or message sharing

Debates on age verification in relation to adult content often mix up two concerns: preventing inadvertent access by minors, and reasonably determined teenagers finding their way to adult sites.

The eSafety Commissioner’s Roadmap for age verification does a good job of explaining the different risk vectors at different ages.


For example

  • A common scenario is an innocuous Google search such as “pussy” (see image). 
  • Search image and video previews make the potential for children to go down risky pathways even worse than shown here. Age verification will have no positive impact on this issue.

6 Age verification will help with inadvertent access but it’s not perfect

For young kids’ inadvertent access to porn, AV is often positioned as an impactful measure. 

That is likely only somewhat true because:

  • adding age verification for social media will materially increase the number of “verified” adult/shared devices in homes, likely driving more porn access;
  • age verification will not address inadvertent viewing of porn in search previews or the sharing of porn on messaging platforms; and
  • the global porn giants will unquestionably find robotic ways to drive engagement outside of their compliant titles.

It should be highlighted that inadvertent access by minors to adult content can generally be prevented today if parents 1) enable parental settings / use parental controls and 2) keep their kids away from adult devices.

This is true now and will be true even if age verification comes into being.

7 Blocking determined teenagers is even more challenging for AV

As discussed in the above sections, any somewhat determined teen has a myriad of options for violating or bypassing measures to protect them. This includes many options to bypass the proposed age verification and assurance measures.

Furthermore, today’s parental control technologies, as discussed below, are deliberately undermined by Google, Apple & Microsoft.

This is the most pressing issue in online safety and needs immediate regulatory attention.

8 The YouTube problem - YouTube is a major source of educational content

YouTube is one of the major sources of educational content both inside and outside the classroom. It is normal for schools to provide restricted access to YouTube and to allow teachers to share specific YouTube content.

Whilst YouTube is a content-streaming service, it is also a social media platform: it has user-generated content, user interaction, personalisation, networking and the potential for content to go viral. Many of TikTok's features are finding their way into YouTube as Google seeks to compete. And finally, adult content such as profanity and sexualised material can find its way to you on the platform.

As it stands, an age-verification regime would break schools' ability to make YouTube available in classrooms or to assign YouTube content for study.

The only truly reliable approach is to control the device

1 “On-device” approaches are the only way to view and manage all activity

Today’s internet is dynamic, global and rapidly adopting end-to-end encryption, which allows user activity to be hidden even from the platform providers.

The only workable technique for digital safety is to leverage “on-device” technology which can:

  1. authenticate users (e.g. using on-device biometrics);
  2. keep user identity and data private (isolated in the device's sandbox); and
  3. inspect, block or redirect activity before encryption (see the sketch below).
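
As a rough illustration of these three capabilities working together, here is a minimal Python sketch. The hook and helper names are hypothetical; real products use OS-level filter and authentication APIs rather than anything this simple.

```python
from urllib.parse import urlparse

# Toy on-device category database. In real products this ships with the
# filter and updates over the air; here it is hard-coded for the sketch.
CATEGORY_DB = {"adult.example.com": "porn", "en.wikipedia.org": "reference"}
BLOCKED_FOR_CHILD = {"porn", "gambling"}

def authenticated_user() -> str:
    # 1. Stand-in for an on-device biometric check (face/fingerprint).
    return "child"

def categorise(host: str) -> str:
    return CATEGORY_DB.get(host, "uncategorised")

def on_outbound_request(url: str) -> str:
    """Hypothetical OS hook: runs before the request is TLS-encrypted."""
    user = authenticated_user()
    host = urlparse(url).hostname or ""
    # 3. Inspect and decide before encryption hides the traffic.
    if user == "child" and categorise(host) in BLOCKED_FOR_CHILD:
        return "BLOCK"
    # 2. The decision was made locally: identity and browsing history
    #    stay in the device's sandbox and never reach a cloud service.
    return "ALLOW"

print(on_outbound_request("https://adult.example.com/video"))      # BLOCK
print(on_outbound_request("https://en.wikipedia.org/wiki/Koala"))  # ALLOW
```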

On-device technology can do this for the entirety of the internet. It doesn't just target the big social platforms or porn sites. 

Indeed, the Australian eSafety Commissioner’s recommended “double-blind tokenised” age-verification approach acknowledges this by relying on user devices storing tokens that confirm the user has been age-verified.
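
A simplified sketch of that token check, using Ed25519 signatures from the Python `cryptography` package: the platform verifies the issuer's signature, never the user's identity. A real double-blind scheme additionally blinds the token so the issuer cannot link it to the platform where it is presented; that step is omitted here.

```python
# Sketch: the platform verifies an age token's signature without ever
# learning who the user is. (Real "double-blind" schemes also blind the
# token so the issuer can't link it to the platform.)
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Verification provider (runs off-device) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

def issue_token() -> tuple[bytes, bytes]:
    """After verifying the adult, sign a random claim: no identity inside."""
    claim = b"over18:" + os.urandom(16)  # random nonce, not a user ID
    return claim, issuer_key.sign(claim)

# --- The device caches the signed token ---
claim, signature = issue_token()

# --- Platform entry check: is the signature valid? ---
def platform_accepts(claim: bytes, signature: bytes) -> bool:
    try:
        issuer_public.verify(signature, claim)
        return claim.startswith(b"over18:")
    except InvalidSignature:
        return False

print(platform_accepts(claim, signature))  # True, with no identity disclosed
```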

On-device approaches are provided by a healthy mix of technology providers:

  • 1st-party parental controls: provided by Apple (Screen Time), Google (Family Link) and Microsoft (Microsoft Family Safety);
  • Parental control apps: 3rd-party apps from providers such as Qustodio, OurPact and Bark, and from internet security providers like Norton and Aura; and
  • Enterprise safety technology: technology that uses special capabilities from Google, Apple and Microsoft to seamlessly and robustly install safety controls on computers and smart devices.

Unacceptably, however, Google, Apple and Microsoft deliberately limit the effectiveness of all of these approaches.

Enterprise safety technology in particular is most capable of delivering on the community and policy objectives for online safety without all of the drawbacks of the proposed age-verification model.

Enterprise safety technology is accessed by business app developers and is in use on tens of millions of devices today. It is in common use in US schools for school-issued devices. It works, is extremely difficult to hack, and allows schools and parents to share policy responsibility.

This technology is, however, deliberately withheld from parents by Google, Apple & Microsoft, who licence it only to enterprise app developers (for free).

2 We cannot trust Google, Apple and Microsoft to make this technology available without regulation & competition

Google and Apple in particular have been proven untrustworthy in creating and maintaining safety features and providing fair access to parental control app developers. 

Regulatory and antitrust inquiries globally have evidenced this behaviour: specifically, in their app marketplaces, Apple and Google:

  1. make deliberate commercial choices that put children in harm's way; and 
  2. deliberately undermine the ability of parents to supervise and protect them.

For example, the US House Judiciary Committee’s Subcommittee on Antitrust, Commercial and Administrative Law investigated Apple following its removal of all parental control apps from the App Store in 2019. Leaked internal Apple emails uncovered by the inquiry showed that Apple used children’s privacy as a manufactured justification for its anti-competitive behaviour. Among the findings:

  • Apple’s Vice President of Marketing Communications, Tor Myhren, stated, “[t]his is quite incriminating. Is it true?” in response to an email with a link to The New York Times’ reporting.
  • Apple’s communications team asked CEO Tim Cook to approve a “narrative” that Apple’s clear-out of Screen Time’s rivals was “not about competition, this is about protecting kids [sic] privacy.” 
  • Apple reinstated many of the apps the same day that it was reported the Department of Justice was investigating Apple for potential antitrust violations.

The ACCC’s Digital Platforms Inquiry’s landmark 2021 report on app marketplaces concluded that “First-party [i.e. Apple & Google] apps benefit from greater access to functionality, or from a competitive advantage gained by withholding access to device functionality to rival third-party apps.” (page 6).

The discriminatory practices found by the DPI are those that are used by Apple and Google to undermine the effectiveness of parental control apps. Parental control apps are restricted from accessing key operating/ecosystem features that would make them otherwise highly performant, effective and immune to violation by children. These companies place no equivalent restrictions on their first-party apps and only some on app developers for enterprises.

The Coalition for App Fairness identified the same anti-competitive and harmful behaviour in its 2022 submission to the ACCC, stating:

“.. with MDM, an app developer can push to a device settings which apply content filters, determine what features and apps can be accessed and limit access to networks and VPNs. On the Apple platform, the full suite of MDM features is available through a configuration called “Supervision”. Apple does not allow consumer app developers to access Supervision. … This undermines the ability of consumer app developers to compete and innovate, harming children as a result.”

The direct result of this anti-competitive practice is the disempowerment of parents to protect their children online. 

Most parents give up trying. Those that don’t are forced into limited and unreliable options, and key parenting decisions get made by big tech, e.g. what's appropriate for children to use, and that once a child turns 13 they can opt out of their parents' safety settings.

Every single safety technology provider in the world would attest that this market construct, and the behaviour of Google, Apple and Microsoft, is the most fundamental and important issue in online safety.

Until this is addressed, we respectfully submit that the objectives of any online safety regulatory regime will be unmet.

3 Enterprise safety technology explained

Enterprise safety technology is made freely available by Google, Apple and Microsoft to enterprise app developers, i.e. software developers for big businesses. Such technology includes these basic components:

  1. Mobile device management: cloud technology that allows remote administration of devices, including controlling device settings and capabilities and ensuring users cannot violate them.
  2. Operating system controls: the ability to control access to operating-system features, e.g. blocking or allowing apps, WiFi, VPN use, camera use, screen capture, etc.
  3. Network & browser-based filtering: the ability to access device traffic for inspection, filtering and reporting.

[Illustration: enterprise mobile device management controlling an end-user device. The MDM becomes the “administrator” of the device. This capability is not currently available to consumers.]
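
As a rough sketch of what an MDM "push" contains, consider the kind of settings payload an administrator applies remotely. The key names below are invented for this illustration; real schemas (e.g. Apple's Restrictions payload) differ, but the shape is the same: the MDM becomes the administrator, and the user cannot override it.

```python
# Illustrative MDM settings payload. Key names are invented for the
# sketch; real payload schemas differ across Apple, Google and Microsoft.

family_policy = {
    "allow_app_installation": False,   # children can't sideload a bypass app
    "allow_vpn_configuration": False,  # closes the VPN geo-fence backdoor
    "allow_factory_reset": False,      # device re-enrols automatically if wiped
    "web_filter": {
        "mode": "on_device",
        "blocked_categories": ["porn", "gambling"],
        "force_safe_search": True,
    },
}

def push_to_device(device_id: str, policy: dict) -> None:
    """Conceptual stand-in for the MDM protocol's settings command."""
    print(f"Applying {policy['web_filter']['mode']} policy to {device_id}")

push_to_device("kids-phone-01", family_policy)
```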

US schools have access to all of this capability because they issue students with learning devices and operate IT as a business would. They thus get access to solutions from enterprise app developers like Qoria, ContentKeeper, Impero, Lightspeed, GoGuardian, Securly and more.

Enterprise safety technology can be seamlessly installed over the internet. It cannot be removed by children and works across all device operating systems. Default settings can be applied and these can evolve over time.

Enterprise safety technology would more than adequately support the majority of the government’s online safety objectives. 

Enterprise safety technology:

  1. Installs instantly and seamlessly over the internet and across all computing and smart devices;
  2. Does not require adults to verify themselves on the Internet;
  3. Does not require any investment by the Government;
  4. Is extremely difficult to violate and will automatically reinstall if children factory reset devices;
  5. Enables maturity-based blocking of websites and apps and time-based access control;
  6. Enables the imposition of safe internet searches and the hiding of adult search previews;
  7. Can hide inappropriate content in real-time (e.g. pornography and terror-related images);
  8. Can alter policy as children swap devices and evolve rules with risk and maturity;
  9. Can switch between school and parent policy as children move through the day (see the sketch after this list); and
  10. Can identify and automatically block access to sites and online platforms that don’t comply with regulatory standards.
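
Capability 9 above, switching between school and parent policy through the day, could look like this minimal sketch (all names, hours and policy contents are assumptions):

```python
from datetime import datetime, time as dtime

# Sketch of capability 9: the active policy follows the child's day.
SCHOOL_HOURS = (dtime(8, 30), dtime(15, 30))  # assumed school day

SCHOOL_POLICY = {"blocked": ["social", "games", "porn"], "set_by": "school"}
PARENT_POLICY = {"blocked": ["porn"], "screen_time_limit_min": 120,
                 "set_by": "parent"}

def active_policy(now: datetime, is_school_day: bool) -> dict:
    start, end = SCHOOL_HOURS
    if is_school_day and start <= now.time() <= end:
        return SCHOOL_POLICY
    return PARENT_POLICY

print(active_policy(datetime(2024, 5, 6, 10, 0), True)["set_by"])  # school
print(active_policy(datetime(2024, 5, 6, 19, 0), True)["set_by"])  # parent
```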

Again, this technology is deliberately withheld from parents (for use on personal devices) by the big-tech gatekeepers (Google, Apple and Microsoft). This has been confirmed in many competition inquiries globally, including in Australia, Europe and the US.


There is a much more suitable technology pathway forward

We urge the Australian Government to adopt an approach to online safety that builds on existing successful technologies and ensures a sustainable future for online safety measures.

1) Competition law: interoperability & banning self-preferencing

A crucial step is making on-device safety technology accessible to parents, similar to its availability for big businesses and schools. 

We suggest that the Australian Government (and global regulators) implement the ACCC's policy recommendations, follow the EU Digital Markets Act, and require interoperability and open, competitive tech markets. Specifically, all app developers should have non-discriminatory access to essential operating-system, app-store and browser safety capabilities.

Why?  Because parents do not have truly effective online safety options today.

And so parents will have free, effective and hack-proof tools to keep their children safe on all devices and for all online platforms. 

And so schools can ensure parent-funded (BYO) learning devices are safe and support learning. 

And to ensure a marketplace exists that innovates to solve the future needs of consumers & schools. 

2) eSafety filtering standards & Family Friendly Filters

Next, we recommend that the eSafety Commissioner upgrade the existing Family Friendly Filter (FFF) program to take advantage of the effective control, moderation and choice that on-device safety tech enables.

We suggest the Commissioner be guided by the UK's highly effective and prescriptive Keeping Children Safe in Education regime.

Specifically, the eSafety Commissioner should set filtering standards, and the FFF program should ensure the services provided by Google, Apple, Microsoft and 3rd-party parental control and school safety providers support the community's needs:

  • Blocking porn in searches and porn sites
  • Defaulting age-appropriate social media and games for children
  • Allowing parents and schools to set access rules that suit their needs
  • Allowing the eSafety Commissioner to mandate app/site bans and web image takedowns

Why? Because parents want easy and reliable online safety options. 

And so that all parents in Australia will know that devices and parental control apps certified by the FFF program meet reliable standards of protection and eSafety oversight.
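
One concrete mechanism a certified filter can use for the search-related requirements above is DNS-level SafeSearch enforcement. Google publishes special hostnames (forcesafesearch.google.com for Search, restrict.youtube.com for YouTube Restricted Mode) precisely so a family or school resolver can pin these services to safe mode. A minimal sketch of the rewrite logic, with the actual resolver plumbing omitted:

```python
# Sketch of DNS-based SafeSearch enforcement. Google documents these
# safe-mode hostnames; the toy resolver below just shows the rewrite step.

SAFE_SEARCH_CNAMES = {
    "www.google.com": "forcesafesearch.google.com",
    "www.youtube.com": "restrict.youtube.com",
}

def resolve(hostname: str) -> str:
    """Toy resolver: rewrite search engines to their safe-mode endpoints."""
    target = SAFE_SEARCH_CNAMES.get(hostname, hostname)
    # A real resolver would now look up `target` and return its A record;
    # the browser then talks to the safe-mode endpoint transparently.
    return target

print(resolve("www.google.com"))   # forcesafesearch.google.com
print(resolve("example.org"))      # unchanged
```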

It is noted that (bizarrely) the online safety technology industry is not currently covered by the Online Safety Act, the Basic Online Safety Expectations or the associated codes and standards. This must be corrected as part of the review of the Online Safety Act.

3) Benefits of this approach

With this in place, parents and schools will know there are robust and safe options for their devices, and eSafety, schools and cyber experts can confidently recommend them. All without cost to the taxpayer or the disruption of age verification.

  • Parents will be able to access defaulted, trustworthy, reliable and powerful safety capabilities across all devices (like businesses can today), either free or paid;
  • Children will not be able to bypass parent rules (which businesses have access to today);
  • Schools & parents will be able to share responsibility for policy enforcement (like US schools are beginning to do today);
  • The community will have agency / effective choices over safety providers and the content and platforms available for their children/students;
  • eSafety and Governments will have an effective mechanism to take down content and to ban, or remove by default, access to apps that do not comply with community standards.

It’s worth adding that compliant online platforms will ultimately have a commercial advantage in a regime like this, where non-compliant platforms are generally excluded from use.

Everything we describe here can be demonstrated today. Google, Apple and Microsoft could enable community access with little effort.

We urge, at the very least, that an inquiry be recommended into how to ensure all schools and parents have access to a competitive market for on-device safety technology.

