World-first report shows leading tech companies are not doing enough to stamp out online child abuse
A new report from the eSafety Commissioner has found that some of the world’s biggest technology companies are not doing enough to tackle child sexual exploitation on their platforms. The report highlights inadequate and inconsistent use of technology to detect child abuse material and grooming, and slow response times when this material is flagged by users.
It follows eSafety issuing the first legal notices under Australia’s new Basic Online Safety Expectations to Apple, Meta (Facebook and Instagram), WhatsApp, Microsoft, Skype, Snap and Omegle in August this year. The notices required the companies to answer detailed questions about how they were tackling child sexual exploitation material on their platforms.
eSafety has now compiled their answers in a world-first report which will be used to lift safety standards across the industry.
The report includes confirmation from Apple and Microsoft that they do not attempt to proactively detect child abuse material stored in their widely used iCloud and OneDrive services, despite the wide availability of PhotoDNA detection technology. PhotoDNA was developed by Microsoft and is now used by tech companies around the world to scan for known child sexual abuse images and videos, with a false positive rate of 1 in 50 billion.
The report also unearths wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services, ranging from an average of four minutes for Snap to two days for Microsoft, rising to 19 days when these reports require re-review. While Microsoft offers in-service reporting, Apple and Omegle do not: users must hunt for an email address on the companies' websites, with no guarantee they will receive a response.
eSafety’s report also identifies problems in preventing recidivism – where a user banned for sharing child sexual exploitation and abuse material is able to create a new account and continue to offend. Meta revealed that if an account is banned on Facebook, the same user is not always banned on Instagram, and that when a user is banned on WhatsApp, the information is not shared with Facebook or Instagram.
Australia’s eSafety Commissioner Julie Inman Grant said the companies’ responses to the legal notices are deeply concerning.
“This report shows us that some companies are making an effort to tackle the scourge of online child sexual exploitation material, while others are doing very little.
“But we’re talking about illegal content that depicts the sexual abuse of children – and it is unacceptable that tech giants with long-term knowledge of extensive child sexual exploitation, access to existing technical tools and significant resources are not doing everything they can to stamp this out on their platforms. We don’t need platitudes, we need to see more meaningful action.
“As a regulator, we’ve previously sought information from these online service providers about how they are tackling child abuse material without getting clear answers. With the introduction of the Online Safety Act 2021 and the Basic Online Safety Expectations, we are able to compel companies to provide that information. This means we finally have some answers – and what we’ve learned is very concerning.
“However, they say that sunlight is the best disinfectant, and we believe that compelling greater transparency from these companies will help lift safety standards and create collective will across the industry to meaningfully address this problem, at scale.”
For more information, read the full report at eSafety.gov.au/industry/basic-online-safety-expectations/responses-to-transparency-notices