Australia leads the world in setting new standards for online child safety
The commencement of Australia’s world-leading industry standards not only marks a significant stride forward in the online protection of children but may also have a global impact on how some of the largest and wealthiest companies in human history tackle the worst-of-the-worst content on the internet.
These two standards, combined with six industry codes already in place, require tech giants and a range of other services to tackle the most harmful content online, including child sexual abuse content and pro-terror material.
As we all know, there is no Australian internet, so the measures these companies must put in place to tackle this largely illegal content and comply with Australian law will require changes no matter where they are headquartered.
Operating under Australia’s Online Safety Act, the codes and standards cover many layers of the online industry, from app stores and search engines to social media services, messaging services, telcos and even device manufacturers.
But the two standards coming into force, applicable to Designated Internet Services (DIS) and Relevant Electronic Services (RES), are among the most important when it comes to child safety.
They will require cloud-based file and photo storage services, like Apple iCloud, Google Drive and Microsoft OneDrive, as well as chat and messaging services, to prevent their products being misused to store and distribute this harmful material.
And they will be backed up by stiff new penalties, with the Australian Government recently announcing that each instance of non-compliance with a code or standard could lead to civil penalties of up to $49.5 million.
And in another world first, the standards will also cover so-called ‘nudify’ apps and sites that use generative AI to create or ‘nudify’ images without effective controls to prevent the generation of material such as child exploitation and abuse content. The online marketplaces that offer generative AI ‘models’ are also captured by these new standards.
So how did we get here?
This journey began almost three years ago, when the online industry in Australia was tasked with coming up with eight enforceable industry codes requiring them to take meaningful action to prevent their products and services being misused to store and distribute child sexual exploitation and abuse material and pro-terror content.
Most people would be forgiven for wondering why the industry needed to be forced by law to tackle such abhorrent content in the first place.
While six codes were found to have sufficient community safeguards for me to register them, two (the DIS and RES codes I mentioned) were refused registration because they failed to provide appropriate community safeguards in relation to this appalling material.
As a result, we ended up writing the rules for them.
At their heart, the standards are designed to encourage innovation, investment and the deployment of systems and technologies that improve safety, by preventing the hosting, sharing and re-sharing of this reprehensible child abuse content, its synthetic generation, and the re-traumatisation of survivors that each re-share causes.
We are challenging big tech businesses, and the industry as a whole, to harness their collective brilliance, vast financial resources and sophisticated tools to help address profoundly damaging content hosted on and distributed by their services.
These critical standards arrive at a time when the challenges to young Australians, their parents or carers, families, and the community are both growing and alarming.
In October this year, Mike Burgess, Director-General of ASIO, told the Social Media Summit: ‘All of Australia’s most recent cases of alleged terrorism, or events that are still being investigated as potential acts of terrorism, were allegedly perpetrated by young people. The oldest, 21; the youngest, 14. The internet was a factor in every single one of these incidents, albeit to different degrees and in different ways.’
Weeks later, the Australian Federal Police (AFP) reported a massive rise in online child exploitation. In the 2023-2024 financial year, the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) recorded 58,503 reports of online child abuse, an average of 160 reports per day and a 45 percent increase on the previous year. These increases are reflected in our own reporting numbers as Australia’s online child sexual abuse hotline.
At eSafety we have also seen a year-on-year doubling of reports of online child sexual abuse material since 2019.
eSafety’s sister hotline in the United States, the National Center for Missing & Exploited Children (NCMEC), analysed close to 36 million reports of child sexual abuse material in 2023, comprising around 55 million images and 50 million videos of child sexual exploitation. These are staggering volumes, and just the tip of the iceberg.
This milestone will also serve as an important interlocking measure with a second phase of industry codes currently being drafted by the online industry, which will aim to prevent children accessing harmful material like pornography and other high-impact content.
eSafety has granted the industry a short extension to draft these phase 2 codes, allowing time to consider the implications of the social media age-restriction legislation.
This will require the industry to implement safeguards up and down the technology stack, in recognition that meaningful online child protection is a shared responsibility and each tech sector must do more.
So, whilst the world may be focusing on the social media age-restriction bill, which is indeed a landmark measure, it is but one piece of an interlocking regulatory framework set forth in our Online Safety Act.
To be sure, this must be supplemented by continued efforts to provide digital literacy for young Australians, information to empower parents and measures to ensure that the services Australians of all ages use every day are safer by design.