How we can actually keep kids safe online
I'm reminded of the old saying: if you keep doing what you do, you will keep getting what you get.
Australia's online safety regime, and in fact most of the world's approach to online safety, does just that. The approach seems to be: threaten the large social and porn players with regulation, publicly shame them, and try to encourage all online platforms to think 'safety first'.
Don't get me wrong. Governments and regulators have the right intent and are working hard on this issue. It's just that online safety is a complex challenge with commercial, competitive, technical and behavioural dimensions.
If we truly care about our kids, then we and they deserve an approach grounded in reality.
We must be realistic.
The purpose of this post is to get real. Our supporters and I hope to direct discussion towards what is realistic and what will deliver results.
Supporters of this post
I am delighted that the leading voices in online safety in Australia support the views and recommendations herein. Each day this group operates at the coal face of online safety. We know the technology and the reality first-hand; we know what works and what will not.
Ativion has been in cyber safety and security in education for more than 40 years. Ativion consists of ContentKeeper, which delivers web filtering and cybersecurity solutions for 12 million students worldwide, and Impero, which provides classroom management and well-being tools across 90 countries and supports more than 2 million students.
eSafeKids is a social enterprise founded by Kayelene Kerr. Kayelene is recognised as one of Western Australia’s most experienced specialist providers of Protective Behaviours, Body Safety, Cyber Safety, Digital Wellness and Pornography education workshops.
Founded by Kirra Pendergast, Safe on Social has offices in Sydney, Brisbane, London, New York and Florence. Under Kirra's forward-thinking leadership, Safe on Social has become the leading global privately-owned cyber safety education and consulting provider.
Surf Online Safe is WA’s leading educator on online safety. SOS was founded by 2022 WA Australian of the Year Paul Litherland. Paul is a former Police Officer, leading legislative advocate and author. Paul is an ambassador for Auspire and Zonta House and a mentor at Emmanuel Catholic.
Jocelyn Brewer is a multi-passionate Sydney-based registered psychologist, educator and researcher with a special interest in cyberpsychology and digital wellbeing, and a flair for communicating modern issues in an accessible, practical way. She is the founder of Digital Nutrition™ by Jocelyn Brewer, a positive technology-use philosophy and education resource.
Rachel Downie is the 2020 QLD Australian of the Year and founder of Stymie, an innovative platform designed to combat bullying and support student well-being. Recognised for her dedication to creating safer school environments, Downie has been a leading advocate for empowering students to anonymously report incidents of bullying, harm or distress.
The evidence is in
It’s fair to say that the way our kids are engaging with technology is causing harm. Of course, this isn't to say that all technology is harmful, and we can't deny its importance for many, particularly those on the fringes.
But we cannot ignore the horrifying incidents and anecdotes, and the clear correlation between mental health issues and ubiquitous access to smart devices.
Before we dive into this, let’s put one thing into perspective. No online safety measure or regime can eliminate all risk.
However, the reality of tech use today is that inappropriate exposures, toxicity and harms have become the norm. We need to, and I believe we can, return them to being the exception.
So let’s fix it!
For an issue as important as this, it is tempting to seek a simple solution (e.g. ban phones at school) or to blame somebody (e.g. Elon Musk) or a group (e.g. social media).
We’re seeing this play out as governments and regulators around the world turn their sights on access to porn and social media and propose mandatory age-verification gates.
It sounds simple. Let’s force these big companies to check ID or age on entry. It works for cigarettes. Why not for porn and social media?
The problem is the internet doesn’t work like the real world.
Age verification is predicated on unlikely and erroneous assumptions.
Firstly, such a regime can only focus on a small number of online platforms. The reality is that toxicity and misbehaviour occur well beyond the mainstream porn and social platforms. And should age verification impede the commercial success of any of these platforms, new providers will emerge, and do so swiftly. Age verification exemplifies the failing “whack-a-mole” approach to online safety inherent in Australia’s Online Safety Act and regime.
Secondly, teenagers find it trivial to bypass age verification through either VPNs or the bio-hacks being unleashed by the generative AI (gAI) revolution. Where such measures have been trialled, use of these hacks has skyrocketed, and kids are driven more quickly into invisible platforms, the dark web and peer-to-peer social networks. This is terrifying.
Thirdly, young children access adult content mostly outside of the major porn sites. Age verification does not deal with search previews, message sharing, content shared in social and gaming platforms, or inadvertent access through shared and parents' devices (which will carry verification tokens).
And lastly, it is ambitious to believe community support will be there for a measure that not only impacts far more adults than just those with children but will also drive concerns around privacy and tracking.
Why won’t age verification work? A useful metaphor.
Online safety is not like real-world safety. Calls to impose age verification on major platforms make that mistake.
Consider nightclubs, where responsible service regulations require door staff to check ID and bouncers to moderate behaviour in the venue. The internet version of this scenario has no fixed venue and no door: new 'clubs' open instantly, patrons can arrive from any jurisdiction via VPN, and fake IDs are trivial to produce.
Attempts to control “key platforms” will not achieve policy outcomes
Much of today’s discussion (and, incidentally, Australia’s Online Safety Act) is based on the premise that the law and regulator activity can achieve their aims by focusing on the major platforms. This is not possible. The platforms' incentives are misaligned, and there are inherent limits to what they can do and to the reach of our regulations.
1 Resistance of the platforms to commercial disadvantage
Imposing safety requirements on larger platforms (say those of Aylo, Meta and ByteDance) exposes those platforms to competitive threats from less visible or less scrupulous providers.
This is a massive problem.
The major platforms feel they must, and will continue to, push back on efforts to make their platforms safe if those efforts put them at a commercial disadvantage.
All online platforms and their financial backers are aware of the fatal pitfall of moderation: the more a platform moderates, the more its users drift to less-moderated rivals.
Resistance of the social platforms to government pressure comes in many forms, including PR advocacy and half-hearted safety measures.
2 Children find it trivial to bypass geo-fences
Geo-location-based approaches to enforcing restrictions (e.g. age verification) can be easily subverted through technology that emulates access from another geo-location, typically via a VPN app.
And so, if Australia imposes a geo-based restriction, then any moderately determined child will be able to VPN to a non-compliant jurisdiction.
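To see why geo-fencing is so weak, consider a minimal sketch of how a server-side check typically works. This assumes a MaxMind-style GeoLite2 country database read via Python's geoip2 package; the database path and IP addresses are illustrative, not taken from any real regime. The server can only act on the source IP it observes, and a VPN replaces that IP with the exit node's address:

```python
# Minimal geo-fencing sketch. The server can only act on the source IP
# it observes; a VPN swaps that IP for the exit node's address.
import geoip2.database
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # illustrative path

def must_age_verify(client_ip: str) -> bool:
    """Apply the AU age-verification gate only to Australian IPs."""
    try:
        country = reader.country(client_ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown ranges (common for VPNs) fall through
    return country == "AU"

# A user on an Australian broadband IP triggers the gate:
print(must_age_verify("1.128.0.1"))  # True (AU-allocated range)
# The same user routed through a US VPN exit node does not:
print(must_age_verify("8.8.8.8"))    # False (US-allocated range)
```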
3 Today's 'whack-a-mole' approach is ineffective
Australia’s online safety regime is built on the Online Safety Act and the associated Basic Online Safety Expectations and industry codes & standards. These ancillary measures seek to lift the standards of online platforms.
For reasons of practicality, the expectations within industry codes & standards are graduated based on assessments of a platform’s scale and impact.
Inherent in all of this are the assumptions that a focus on the major platforms will change their behaviour and make a substantive impact on online safety.
It would be difficult to find evidence of any substantive improvements in the behaviour of the major online platforms since the 2021 Online Safety Act came into law.
The eSafety Commissioner's abandoned action against X in relation to the Wakeley church stabbing highlights the immense difficulty of regulating the internet from Australia.
4 Concerning behaviours also exist and are growing in gaming platforms
Proposals to target the “large” social platforms ignore the pressing issue of the growth in use of, and toxic behaviour inside, gaming platforms. Gaming platforms exhibit many of the characteristics of social media platforms, i.e. the ability to communicate and share content one-to-one or in groups.
5 Toxicity will move to riskier platforms
Proposals to target “large” social platforms will unquestionably result in toxic content and behaviour moving to less scrupulous or less visible platforms.
Despite popular views, in our experience, the “big social platforms” are the most responsible. Of course, they can do better. However, the riskiest content we encounter is accessed outside the larger platforms and this will accelerate with the suggested proposals.
A great risk of an ill-conceived policy like the age-verification regime presently under consideration is that users move deeper underground (e.g. accessing the dark web through the Tor Browser).
6 Stimulating the even greater risk of peer-to-peer social media
As troublesome as the well-known social media platforms are, at least they have a governance structure.
There are active efforts to build entirely decentralised social media services. These are known as peer-to-peer (P2P) social media. P2P social media aims to address issues such as privacy, data security, and censorship by decentralising data storage and control.
P2P platforms are gaining traction among users who are concerned about privacy, data ownership, and corporate control of social media.
P2P social media platforms will not be able to comply with regulations on data protection or content moderation. They will not be able to enforce age restrictions, prevent illegal activities or adhere to jurisdiction-specific laws.
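A toy sketch shows why. This models no particular protocol; it is a bare-bones gossip network invented for illustration. Once a post has been relayed, every node holds its own copy, and there is no central party who can verify ages, moderate content or execute a takedown:

```python
# Toy sketch of why P2P social media resists central moderation.
# Each node keeps its own replica and relays posts to peers; there is
# no central server whose takedown removes content from the network.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}      # post_id -> content, replicated locally
        self.peers = []

    def publish(self, post_id, content):
        self.store[post_id] = content
        for peer in self.peers:
            peer.receive(post_id, content)

    def receive(self, post_id, content):
        if post_id not in self.store:   # guard against relay loops
            self.store[post_id] = content
            for peer in self.peers:
                peer.receive(post_id, content)

    def delete_locally(self, post_id):
        self.store.pop(post_id, None)   # only affects THIS node

a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers = [b], [c]

a.publish("post1", "content no regulator can recall")
a.delete_locally("post1")                      # a "takedown" at the origin...
print("post1" in b.store, "post1" in c.store)  # True True: copies persist
```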
Age verification is unworkable
For many, age verification is a tempting solution both for porn access and for lifting the minimum age of social media access, because it sounds simple. It isn't, though.
A range of age-assurance and age-verification techniques have been proposed and trialled across the globe.
It is important to highlight that any age-assurance or AV model will be impeded by the issues outlined below.
Indeed, the eSafety Commissioner identified in the Roadmap for Age Verification that teenagers will find methods to avoid AV. The same goes for determined children seeking to access social media.
But for young kids’ inadvertent access to porn, AV is often positioned as an important and impactful measure. That is true, but it depends on the method.
The eSafety Commissioner recommends a tokenised approach to AV. It is convenient because adults can be verified once per device and thus won’t need to verify themselves on each online platform.
However, it also means verified devices in the home pose a risk to children exactly as unprotected devices do today. This issue will be exacerbated if age verification is extended to social media.
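A minimal sketch of the flow makes the risk plain. The token format, field names and the shared HMAC key below are all simplifying assumptions (a real double-blind scheme would use public-key signatures so platforms never hold the verifier's secret); the point is that the token attests to the device, not to whoever is holding it:

```python
# Simplified tokenised-AV sketch. An adult verifies once; the verifier
# signs an "over 18" claim bound to the device; platforms later check
# the signature. Field names and the shared demo key are assumptions.
import hashlib, hmac, json, time

VERIFIER_KEY = b"demo-secret"  # stand-in for the verifier's signing key

def issue_token(device_id: str) -> dict:
    claim = {"over18": True, "device": device_id, "iat": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "sig": hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()}

def platform_accepts(token: dict) -> bool:
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over18"]

# The token attests to the DEVICE. A child who picks up a parent's
# verified phone presents exactly the same token and sails through.
token = issue_token("parents-phone")
print(platform_accepts(token))  # True, regardless of who holds the phone
```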
There are many other challenges with AV and indeed these were mostly identified in the eSafety Commissioner’s roadmap. The following highlights some of them.
1 Users can avoid geo-fencing easily
As described above, any geo-location-based restriction (e.g. age verification applied in Australia alone) is easy to avoid, and it’s getting easier. VPNs are the tool of choice, available as downloadable apps and built into browsers. We’d expect new techniques to evolve quickly based on demand.
2 Biometric-avoidance is becoming increasingly easy
With the increasing use of age-detection technologies, ‘the internet’ is developing clever ways to bypass them. Facial recognition is often cited as the gold standard for user identification, but AI is fast undermining this capability.
3 Imposing costs on all adults is manifestly unfair
We estimate that of Australia's roughly 20 million adults, fewer than 4.5 million have children under 16. Since 4.5 million is under a quarter of 20 million, possibly 80% of the adults impacted by age-verification requirements have no stake in the policy (i.e. no child they're trying to protect from social media).
We’d argue it is not good policy to impose costs (or effort) on those not associated with a community concern or risk. Proposed age-verification techniques impose an effort burden on all adults.
4 Community trust is a substantial challenge
Proposed age-verification techniques expect that adults will be willing to trust social, gaming or identity verification platforms.
In our view, this is not likely, and our expectation is that adults will swiftly move to VPNs or similar techniques that seamlessly, and by default, bypass geo-based government restrictions.
5 AV won’t block search previews or message sharing
Debates on age verification in relation to adult content often conflate two distinct concerns: preventing inadvertent access by young minors, and stopping reasonably determined teenagers from finding their way to adult sites.
The eSafety Commissioner’s Roadmap for age verification does a good job of explaining the different risk vectors at different ages.
6 Age verification will help with inadvertent access but it’s not perfect
For young kids’ inadvertent access to porn, AV is often positioned as an impactful measure.
That is likely only somewhat true, for the reasons covered earlier: search previews, message sharing, content shared within social and gaming platforms, and verified household devices all remain open vectors.
It should be highlighted that prevention of minors' inadvertent access to adult content can generally be achieved today if parents 1) enable parental settings / use parental controls and 2) keep their kids away from adult devices.
This is true now and will be true even if age verification comes into being.
7 Blocking determined teenagers is even more challenging for AV
As discussed in the sections above, any somewhat determined teen has a myriad of options for bypassing the measures meant to protect them, including many ways around the proposed age-verification and age-assurance measures.
Furthermore, today’s parental control technologies, as discussed below, are deliberately undermined by Google, Apple & Microsoft.
This is the most pressing issue in online safety and needs immediate regulatory attention.
8 The YouTube problem - YouTube is a major source of educational content
YouTube is one of the major sources of educational content both inside and outside the classroom. It is normal for schools to provide access to YouTube, often in restricted form, and to allow teachers to share specific YouTube content.
Whilst YouTube is a content streaming service, it is also a social media platform: it has user-generated content, user interaction, personalisation, networking and the potential for content to go viral. Many of TikTok’s features are finding their way into YouTube as Google seeks to re-compete. And finally, adult content such as profanity and sexualised material can find its way to users on the platform.
As it stands, an age-verification regime would break schools' ability to make YouTube available in classrooms or to assign YouTube content for study.
The only truly reliable approach is to control the device
1 “On-device” approaches are the only way to view and manage all activity
Today’s internet is dynamic, global and rapidly incorporating end-to-end encryption, which allows user activity to be hidden even from the platform providers.
The only workable technique for digital safety is to leverage “on-device” technology, which can view and manage all of a user's activity before it is encrypted or leaves the device.
On-device technology can do this for the entirety of the internet. It doesn't just target the big social platforms or porn sites.
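A toy comparison illustrates the visibility gap (the hostname, URL and page text below are invented; real filters are far more sophisticated). With HTTPS, a network-level filter sees roughly the destination hostname, while an agent on the device sees traffic before it is encrypted:

```python
# Toy comparison of network vs on-device visibility. With HTTPS, a
# network filter sees roughly the destination hostname (e.g. via the
# TLS SNI field); an on-device agent sees traffic before encryption.
# The hostname, URL and page text below are invented for illustration.

def network_filter_view(session: dict) -> dict:
    # What a network appliance can act on for an encrypted session:
    return {"destination_host": session["host"]}

def on_device_filter_view(session: dict) -> dict:
    # What an agent running on the device itself can act on:
    return {
        "destination_host": session["host"],
        "full_url": session["url"],
        "page_text": session["body"],  # available for content analysis
    }

session = {
    "host": "example-social.com",
    "url": "https://example-social.com/groups/private-group/post/123",
    "body": "user-generated content that a safety filter needs to assess",
}

print(network_filter_view(session))    # hostname only: block or allow wholesale
print(on_device_filter_view(session))  # enough context for nuanced decisions
```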
Indeed, the Australian eSafety Commissioner’s recommended “double-blind tokenised” age-verification approach acknowledges this by relying on user devices storing tokens that confirm the user has been age-verified.
On-device approaches are provided by a healthy mix of technology providers.
Unacceptably, however, Google, Apple and Microsoft deliberately limit the effectiveness of all of these approaches.
Enterprise safety technology in particular is most capable of delivering on the community and policy objectives for online safety without all of the drawbacks of the proposed age-verification model.
Enterprise safety technology is accessed by business app developers and is in use on tens of millions of devices today. It is in common use in US schools for school-issued devices. It works, can't be hacked, and allows schools and parents to share policy responsibility.
This technology is, however, deliberately withheld from parents by Google, Apple & Microsoft, who licence it (for free) only to enterprise app developers.
2 We cannot trust Google, Apple and Microsoft to make this technology available without regulation & competition
Google and Apple in particular have been proven untrustworthy in creating and maintaining safety features and providing fair access to parental control app developers.
Regulatory and antitrust inquiries globally have evidenced this behaviour, specifically in the app marketplaces of Apple and Google:
For example, the US House Judiciary Committee’s Subcommittee on Antitrust, Commercial and Administrative Law investigated Apple following Apple’s removal of all parental control apps from the App Store in 2019. Internal Apple emails uncovered by the inquiry showed Apple used children’s privacy as a manufactured justification for its anti-competitive behaviour.
The ACCC’s Digital Platforms Inquiry’s landmark 2021 report on app marketplaces concluded that “First-party [ie Apple & Google] apps benefit from greater access to functionality, or from a competitive advantage gained by withholding access to device functionality to rival third-party apps.” (page 6).
The discriminatory practices found by the DPI are exactly those used by Apple and Google to undermine the effectiveness of parental control apps. Parental control apps are restricted from accessing key operating-system and ecosystem features that would otherwise make them highly performant, effective and immune to circumvention by children. These companies place no equivalent restrictions on their own first-party apps, and only some on enterprise app developers.
The Coalition for App Fairness identified the same anti-competitive and harmful behaviour in its submission to the ACCC in 2022, stating:
“.. with MDM, an app developer can push to a device settings which apply content filters, determine what features and apps can be accessed and limit access to networks and VPNs. On the Apple platform, the full suite of MDM features is available through a configuration called “Supervision”. Apple does not allow consumer app developers to access Supervision. … This undermines the ability of consumer app developers to compete and innovate, harming children as a result.”
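To make the quoted capability concrete, here is a minimal sketch, built with Python's standard plistlib, of the kind of web-content-filter configuration profile an enterprise developer can push to a supervised Apple device over MDM. The payload keys follow Apple's published Web Content Filter payload, but the identifiers are hypothetical and exact key names vary across OS versions, so treat this as illustrative rather than definitive:

```python
# Sketch of a web-content-filter configuration profile of the kind an
# enterprise developer can push to a supervised Apple device over MDM.
# Identifiers are hypothetical; key names are from Apple's published
# Web Content Filter payload and may vary by OS version.
import plistlib
import uuid

filter_payload = {
    "PayloadType": "com.apple.webcontent-filter",
    "PayloadVersion": 1,
    "PayloadIdentifier": "au.example.school.webfilter",  # hypothetical
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadDisplayName": "School Web Filter",
    "FilterType": "BuiltIn",        # use the OS's built-in filter
    "AutoFilterEnabled": True,      # heuristic blocking of adult sites
    "BlacklistURLs": [              # explicit deny list (legacy key name)
        "https://example-adult-site.com",
    ],
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "au.example.school.profile",    # hypothetical
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadDisplayName": "Student Safety Profile",
    "PayloadContent": [filter_payload],
}

with open("student_safety.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)
# On a supervised device this profile cannot be removed by the student,
# which is exactly the capability the quote says consumer developers lack.
```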
The direct result of this anti-competitive practice is the disempowerment of parents to protect their children online.
Most parents give up trying. Those who don’t are forced into limited and unreliable options, and key parenting decisions get made by big tech, e.g. what’s appropriate for children to use, and that once a child turns 13 they can opt out of their parents' safety settings.
Every safety technology provider in the world would attest that this market construct, and this behaviour by Google, Apple and Microsoft, is the most fundamental and most important issue in online safety.
Until this is addressed, we respectfully submit that the objectives of any online safety regulatory regime will be unmet.
3 Enterprise safety technology explained
Enterprise safety technology is made freely available by Google, Apple and Microsoft to enterprise app developers, i.e. software developers for big businesses. As the Coalition's submission above describes, it includes basic components such as device management (MDM), content filtering, control over which apps and features can be used, and restrictions on networks and VPNs.
US schools have access to all of this capability. They have access because they issue students with learning devices and they operate IT like a business would. They thus get access to solutions provided by enterprise app developers like Qoria, ContentKeeper, Impero, Lightspeed, GoGuardian, Securly and more.
Enterprise safety technology can be seamlessly installed over the internet. It cannot be removed by children and works across all device operating systems. Default settings can be applied and these can evolve over time.
Enterprise safety technology would more than adequately support the majority of the government’s online safety objectives.
Again, this technology is deliberately withheld from parents (for use on personal devices) by the big-tech gateways (Google, Apple and Microsoft). This has been confirmed in many competition inquiries globally, including in Australia, Europe and the US.
There is a much more suitable technology pathway forward
We urge the Australian Government to adopt an approach to online safety that builds on existing successful technologies and ensures a sustainable future for online safety measures.
1) Competition law: interoperability & banning self-preferencing
A crucial step is making on-device safety technology accessible to parents, similar to its availability for big businesses and schools.
We suggest that the Australian Government (and global regulators) implement the ACCC’s policy recommendations, follow the EU Digital Markets Act, and require interoperability and open, competitive tech markets. Specifically, all app developers should have non-discriminatory access to essential operating-system, app-store and browser safety capabilities.
Why? Because parents do not have truly effective online safety options today.
And so parents will have free, effective and hack-proof tools to keep their children safe on all devices and for all online platforms.
And so schools can ensure parent-funded (BYO) learning devices are safe and support learning.
And to ensure a marketplace exists that innovates to solve the future needs of consumers & schools.
2) eSafety filtering standards & Family Friendly Filters
Next, we recommend that the eSafety Commissioner upgrade the existing Family Friendly Filter (FFF) program to take advantage of the effective control, moderation and choice that on-device safety tech enables.
We suggest the Commissioner should be guided by the UK’s highly effective and prescriptive Keeping Children Safe in Education regime.
Specifically, the eSafety Commissioner should set filtering standards, and the FFF program should ensure the services provided by Google, Apple, Microsoft and third-party parental control and school safety providers support community needs.
Why? Because parents want easy and reliable online safety options.
And so that all parents in Australia will know that devices and parental control apps certified by the FFF program meet reliable standards of protection and eSafety oversight.
It is noted that (bizarrely) the online safety technology industry is not currently covered by the Online Safety Act, the Basic Online Safety Expectations or the associated codes & standards. This must be corrected as part of the review of the Online Safety Act.
3) Benefits of this approach
With this in place, parents and schools will know there are robust and safe options for their devices. And eSafety, schools and cyber experts can confidently recommend them. And all without cost to the taxpayer or the disruption of age verification.
It’s worth adding that compliant online platforms will ultimately have a commercial advantage in a regime like this, where non-compliant platforms are generally excluded from use.
Everything we describe here can be demonstrated today. Google, Apple and Microsoft could enable community access with little effort.
At the very least, we urge an inquiry into how to ensure that all schools and parents have access to a competitive market for on-device safety technology.