Privacy x AI Guidance Is Here: How Developers and Deployers Can Minimise Privacy Risks
Every day, more and more Australian businesses are adopting artificial intelligence into their operations. According to a recent CSIRO report, 68% of Australian companies have already integrated AI into their business, with another 23% planning to do so within the next year. As is often the case, the technology is advancing rapidly, leaving the law to play catch-up.
In August, the European Union gave us a taste of what AI regulation might look like with the EU AI Act, which you can read more about here. We're still waiting to see whether the Albanese government follows suit with its own set of AI rules. In the meantime, the Office of the Australian Information Commissioner (OAIC) has released some much-needed guidance on how businesses can minimise their privacy risks when engaging with AI.
Understanding your obligations is important – particularly in light of the latest Privacy Bill, which introduces new civil penalty provisions for lower and mid-range breaches of the Privacy Act. The Privacy Commissioner will soon be able to fine APP entities up to $3.3 million for non-serious breaches.
Below, we outline some of the key principles so that you can develop and deploy AI without breaching existing Australian privacy laws.
Developing Generative AI Models
Step 1: Can I exclude personal information from my datasets?
“Personal information” is information or an opinion about an identified individual, or an individual who is reasonably identifiable, whether or not the information or opinion is true. If you can avoid including personal information in your datasets, you will have significantly reduced your privacy risk.
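To make Step 1 concrete, here is a minimal Python sketch of one way to strip common identifiers (email addresses and phone numbers) out of free-text records before they go into a training dataset. Everything here is illustrative: the patterns are rough, the records are made up, and regex matching alone is nowhere near a complete de-identification process – it won't catch names, addresses or indirect identifiers.

```python
import re

# Illustrative only: naive patterns for two common identifiers. Real
# de-identification needs much more than regex matching (names, addresses,
# indirect identifiers, re-identification risk, and so on).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"(?:\+?61|0)[23478](?:[ -]?\d){8}")  # rough AU phone shape

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

records = [
    "Contact Jane on 0412 345 678 about her order.",
    "Send the invoice to jane.doe@example.com.",
]

print([scrub(r) for r in records])
# ['Contact Jane on [PHONE] about her order.',
#  'Send the invoice to [EMAIL].']
```

Note that "Jane" survives the scrub – which is exactly why a regex pass is a starting point, not a substitute for a proper de-identification review.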
But what if your dataset relies on personal information, or you cannot reliably exclude it?
Step 2: Consider the Australian Privacy Principles (APPs)
You’ll need to consider how the APPs apply to you and your activities. The Developer Guidance points specifically to APPs 1, 3, 5, 6 and 10 as being particularly relevant. We have included some examples from the Developer Guidance below.
Using Commercially Available AI Models
If your business decides to adopt a commercially available AI product, you'll face a distinct set of risks. These extend beyond privacy to intellectual property and confidentiality too.
The OAIC’s Deployer Guidance sets out five key considerations.
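On the deployer side, one practical control is making sure personal information never reaches a third-party model in the first place. The sketch below shows that idea with a hypothetical `safe_complete` wrapper; `vendor_complete` is a made-up stand-in for whatever vendor SDK your business actually uses, and the patterns are the same rough illustrations as above.

```python
import re

# Illustrative only: a thin guard in front of a hypothetical third-party AI
# client. The point is that inputs are checked for obvious personal
# information before they leave your systems.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),     # email addresses
    re.compile(r"(?:\+?61|0)[23478](?:[ -]?\d){8}"),  # rough AU phone shape
]

def contains_personal_info(prompt: str) -> bool:
    """Crude check for obvious identifiers; a real deployment needs more."""
    return any(p.search(prompt) for p in PII_PATTERNS)

def vendor_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real vendor SDK call."""
    return f"(model response to: {prompt!r})"

def safe_complete(prompt: str) -> str:
    """Refuse to send prompts that appear to contain personal information."""
    if contains_personal_info(prompt):
        raise ValueError(
            "Prompt appears to contain personal information; "
            "redact it before calling the external model."
        )
    return vendor_complete(prompt)

print(safe_complete("Summarise our returns policy in two sentences."))
```

A guard like this is only one layer: contractual terms with the vendor, staff training and input logging all matter just as much.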
What next?
Businesses are going to innovate. That’s a good thing. Whether you’re developing cutting-edge AI or deploying an existing product, you’ll want to take proactive steps to ensure you are compliant with existing privacy laws.
Compliance isn't just about avoiding legal pitfalls. It can also be a way to demonstrate to your users and the market that you're taking your consumers' privacy seriously and that you're well-equipped to address privacy risks head-on.
Questions? Give us a call.