Privacy x AI Guidance is Here: How Developers and Deployers can Minimise Privacy Risks

Every day, more Australian businesses are bringing artificial intelligence into their operations. According to a recent CSIRO report, 68% of Australian companies have already integrated AI into their business, with another 23% planning to do so within the next year. As is often the case, the technology is advancing rapidly, leaving the law to play catch-up.

In August, the European Union gave us a taste of what AI regulation might look like with the EU AI Act, which you can read more about here. We’re still waiting to see whether the Albanese government follows suit with its own set of AI rules. In the meantime, the Office of the Australian Information Commissioner (OAIC) has released some much-needed guidance on how businesses can minimise their privacy risks when engaging with AI.

Understanding your obligations is important, particularly in light of the latest Privacy Bill, which introduces new civil penalty provisions for lower and mid-range breaches of the Privacy Act. The Privacy Commissioner will soon be able to fine APP entities up to $3.3 million for non-serious breaches.

Below, we outline some of the key principles so that you can develop and deploy AI without breaching existing Australian privacy laws.

Developing Generative AI Models

Step 1: Can I exclude personal information from my datasets?

“Personal information” is information or an opinion about an identified individual, or an individual who is reasonably identifiable, whether or not the information or opinion is true. If you can avoid including personal information in your datasets, you will significantly reduce your privacy risk.

But what if your dataset relies on personal information, or you cannot reliably exclude it?

Step 2: Consider the Australian Privacy Principles (APPs)

You’ll need to consider how the APPs apply to you and your activities. The Developer Guidance points specifically to APPs 1, 3, 5, 6 and 10 as being particularly relevant. We have included some examples from the Developer Guidance below.

  • Even if personal information is readily available online, developers shouldn’t assume they can collect it to train their models. Under APP 3, organisations must only collect personal information that is reasonably necessary for their functions or activities, by lawful and fair means, and directly from the individual unless it is unreasonable or impracticable to do so.  
  • Many generative AI models rely on data scraping techniques. As an inherently covert means of data collection, this method risks being deemed unfair or unlawful. Consider whether you can filter unnecessary personal information out of the dataset, or take steps to de-identify or anonymise it (see the sketch after this list).  
  • Generative AI is known to produce inaccurate results (just ask this US lawyer). Under APP 10, organisations must take reasonable steps to ensure the personal information they collect, use and disclose is accurate.  
  • “Reasonable steps” will depend on your business’s circumstances, but may include measures such as testing your product for inaccuracies and bias before deployment, and using appropriate disclaimers.
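For illustration only, here is a minimal Python sketch of the kind of filtering and de-identification the Developer Guidance contemplates. The regex patterns and placeholder tokens are our own assumptions, and genuine de-identification requires far more than pattern matching (named-entity recognition, re-identification risk assessment and human review, for instance), but it shows the basic idea of scrubbing obvious personal information out of a dataset before training.

```python
import re

# Minimal redaction patterns, assumed for illustration. Real de-identification
# needs far more than regexes (e.g. named-entity recognition and human review);
# this just shows the idea of filtering obvious personal information out of a
# training dataset before use.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # Rough Australian phone format: +61 or 0, then nine more digits with
    # optional spaces or hyphens.
    "PHONE": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
}

def redact(text: str) -> str:
    """Replace anything matching a pattern with a placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def filter_dataset(records: list[str]) -> list[str]:
    """Redact obvious personal information from every training record."""
    return [redact(record) for record in records]

if __name__ == "__main__":
    sample = [
        "Contact Jane on 0412 345 678 or jane.doe@example.com.",
        "The product launched in Sydney in March.",
    ]
    for line in filter_dataset(sample):
        print(line)
```

Whatever tooling you adopt, keep a record of the filtering steps you applied; that evidence helps demonstrate the “reasonable steps” the APPs require.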

Using Commercially Available AI Models

If your business decides to adopt a commercially available AI product, you’ll face a distinct set of risks. These extend not just to privacy, but to intellectual property and confidentiality too.

The OAIC’s Deployer Guidance sets out five key considerations.

  1. Avoid using personal information. Best practice dictates that organisations do not enter personal information (and particularly sensitive information) into publicly available AI tools; a simple input-screening sketch follows this list.  
  2. Conduct due diligence. Privacy obligations apply to any personal information you input into an AI product, and to any output it generates that contains personal information. Accordingly, deployers should conduct due diligence to understand whether the product is appropriate for their intended use, whether it has been tested, the level of human oversight involved, and the potential privacy and security risks.  
  3. Transparency. Update your privacy policy and notifications to include your intended AI use. This is particularly important if AI is being used to assist in decision-making processes.   
  4. Lawful collection. If AI systems are being used to generate personal information, this must comply with APP 3 (i.e. collection must be reasonably necessary for the business’s functions or activities, and by lawful and fair means).  
  5. Use and disclosure. If personal information is input into an AI system, organisations should only use or disclose the information for the primary purpose for which it was collected, unless they have consent or can establish that an exception applies.
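As a purely illustrative companion to item 1, the sketch below screens prompts before they leave the organisation for a public AI tool. The function names and patterns are hypothetical and deliberately crude; a real control would pair something like this with staff training and your vendor’s own privacy and data-retention settings.

```python
import re

# Crude, assumed patterns for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}")

def screen_prompt(prompt: str) -> str:
    """Refuse to pass on a prompt that appears to contain personal information."""
    if EMAIL.search(prompt) or PHONE.search(prompt):
        raise ValueError(
            "Prompt appears to contain personal information; "
            "remove or de-identify it before submitting."
        )
    return prompt

def ask_public_tool(prompt: str) -> None:
    # A stand-in for whatever client library your chosen AI product provides;
    # here we just print what would be sent.
    safe = screen_prompt(prompt)
    print(f"Would send to external tool: {safe!r}")

if __name__ == "__main__":
    ask_public_tool("Summarise our leave policy in plain English.")
    try:
        ask_public_tool("Draft a reply to jane.doe@example.com about her complaint.")
    except ValueError as err:
        print(f"Blocked: {err}")
```

Raising an error, rather than silently redacting, forces the person at the keyboard to decide whether the prompt should be de-identified or not sent at all.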

What next?

Businesses are going to innovate. That’s a good thing. Whether you’re developing cutting-edge AI or deploying an existing product, you’ll want to take proactive steps to ensure you are compliant with existing privacy laws.

Compliance isn’t just about avoiding legal pitfalls. It’s also a way to show your users and the market that you take your customers’ privacy seriously and are well-equipped to address privacy risks head-on.

Questions? Give us a call.
