The Components of a Good GenAI Governance Program:  How to manage GenAI risk and be AI regulation compliant...

The Components of a Good GenAI Governance Program: How to manage GenAI risk and be AI regulation compliant...

Generative AI (GenAI) popularised by Chat GPT is all the rage in tech circles these days. I personally have more than 15 clients actively engaged in GenAI pilots at the moment across a variety of industries including banking, federal public sector, utilities and general industrial. This is such a new technology many organisations are not yet totally aware of the hazards involved in the use of Generative AI and the need for good governance. Governments are implementing new regulations in response to public demand to make sure organisations are using AI and GenAI safely and to hold organisations accountable when they use AI and GenAI badly. Not complying with these new regulations can be costly. The EU recently announced that the maximum fine for its new AI regulation could be up to 7% of global revenue (see here). The intention of this article is to help demystify what is Generative AI and to provide organisations with a set of building blocks that can help ensure they use GenAI safely and be compliant with new government AI regulations. It is targeted at senior business leaders who are trying to understand how they should be reacting to this new and evolving technology and to provide a checklist of questions they should be asking of their technology leadership when they implement GenAI pilots and programs. It should only take you 15 minutes to read, and I hope you find it useful.

What is Generative AI, What are Foundation Models, and how is AI changing?

Traditionally, Artificial Intelligence (AI) has required organisations to build a specific AI model for each individual use case, whether it was machine vision, natural language processing or other. These models, trained on specific use case data, needed to be tuned and constantly monitored for performance, updating the models regularly where performance degrades. Organisations needed to pay for the people and tools necessary to build and manage these individual AI models. As a result, AI was only really economical for higher volume use cases where the costs of AI model development and management were offset by the benefits of using AI.

Traditional AI Model Development - tailored models for every use case


In 2017, the University of Toronto, in partnership with Google, wrote a research paper called "Attention is all you need". See here if you'd like to read this paper. Without going into detail, this research paper fundamentally changed AI by giving organisations a computationally efficient way of building extremely large AI models. These extremely large models, trained on a variety of data, could be used for a variety of use cases, not just one. As a result, the concept of the "Foundation Model" was born. The term "Foundation Model" was first coined by the Stanford AI team (see here) to refer to AI models that could be applied across a variety of use cases. These Foundation Models allow organisations to adopt a build once-use many times approach. This radically changes the economics of AI by making even lower volume use cases economical. It also allows organisations to use models built by other organisations, reducing even further the investments necessary and improving the economics of AI even further.

Newer Foundation Model Development - Build once use Many times


Generative AI (GenAI) is any time you are using AI to generate content, whether it is text, images or voice. Large Language Models (LLMs) are a form of Foundational, Generative AI that is used specifically for the generation of text.

I recently published a video on what is Generative AI, which you can watch here if you'd like further information and explanation.

ChatGPT, which was released for public consumption a little over a year ago by OpenAI (see here) and which most people have played with, is a Large Language Model, which is a form of Foundational, Generative AI. ChatGPT demonstrated the capability of these Foundational GenAI models, and ever since, organisations have been racing to adopt this new technology because of the benefits it can provide.

The adoption rate of these Foundational, Generative AI solutions has been so fast that the use of ChatGPT has surpassed the adoption of Facebook, the Mobile Phone and even the Internet (see here). Hence why I have so many clients experimenting with this technology, and many organisations are asking big tech companies like Microsoft, AWS, Google and IBM to help them with the deployment of Foundational, Generative AI into their organisation.

But whilst organisations have adopted GenAI quickly, their adoption of the necessary Governance models, tools and processes to use GenAI safely has been slow.


But why Accountable AI and why Generative AI Governance is important?

Whilst Gen AI and the concept of a Foundation Model have radically changed the economics of AI and, as a result sped up its adoption, it does come with additional risks not associated with traditional AI deployments. This largely has to do with the fact that using a model that was built by another organisation has inherent risks because you don't know how that model was built or what data was used to build it.

The analogy I often use with clients is that the use of Foundation models is like building a house. Traditionally, you used to have to build the house yourself, so you knew the standard it was built to. But now you can buy it from someone else, renovate or just move in. In fact, there are whole new marketplaces for GenAI models where hundreds of thousands of GenAI models are on offer (see here). The problem here is that, unlike your family home, which had to be built to certain building codes and had to be inspected for compliance with those codes before you moved in, the market for Foundation models is largely self-regulated and ungoverned. You don't know to what standard a Foundation Model was built to. Oftentimes, the way a Foundation model is built is considered a source of competitive advantage and, therefore, proprietary information by its builders, and the builders won't show you how they built it. But like a homeowner, governments are holding the model user, not the model builders, accountable for when someone gets hurt.

The IBM AI Ethics Board has recently written a really good white paper that describes how Foundation models like ChatGPT amplify the risks of traditional AI and create completely new risks (see here). Chief among the risks of using these Foundational Generative AI models are Copyright and IP infringement, issues related to privacy, fairness/ bias, value alignment/toxicity/ hate, and transparency/ explainability. What is important for organisations and the senior business executives managing those organisations to understand is that their traditional tools and governance models for AI are insufficient to manage the new risks associated with Foundational, Generative AI.

To address these risks, many of the large tech companies have adopted their own standards (or building codes to extend the house analogy) when using AI and GenAI in an attempt to self-regulate. This is often referred to as Responsible AI. But what organisations using GenAI also need to understand is that governments are increasingly rejecting the tenets of Responsible AI espoused by big tech companies like Microsoft (see here), AWS (see here), and others as not being sufficient. An example of this rejection is illustrated in the recent comments by the Australian NSW Ombudsman (see here), where he suggested that Responsible AI doesn't exist. Rather, the ombudsman, like many government officials worldwide, is advocating for new regulation to hold organisations and their business management teams legally accountable for their use of AI and Generative AI, backed up by the use of heavy fines for enforcement. This move from an acceptance of self-regulation or "Responsible AI" to government regulation or "Accountable AI" will mean that the expectations of organisations using AI will increase even further. It is a move that seems to be heavily supported by the general public. As a result, for an organisation that is using GenAI to have good Governance will be of paramount importance for most senior business leaders to help them with their new personal legal accountabilities.

But how do you achieve "Good" GenAI Governance in order to comply with this increased organisational and individual accountability and to address the risks of GenAI? In each of the following sections, I try to explain, in simple business terms, what I think, based on my experiences, are the core components of a good GenAI governance program and what senior business leaders need to insist on from their technical counterparts when they start to roll out GenAI pilots and programs across their organisations.

Components of a Good GenAI Governance Program

1) Toxicity/ Hate Detection

The first thing that organisations need to consider when implementing a GenAI program is to ensure that they have Toxicity/ Hate Detection tools and processes in place.

In simple terms, GenAI and, specifically, LLMs are large complex models that, in effect, use the last several hundred words presented to them to predict the next hundred words. These predictions are stochastic and probabilistic in nature and mean that the use of GenAI and LLMs can lead to random, unforeseen results. These models are not intelligent and don't have the same filters that most humans do which prevent us from saying toxic or hateful things or saying things that are inappropriate for a given situation.

In order to prevent this Toxic or Hateful speech, organisations need to ensure that their GenAI programs include Toxic Language Detection and correction in their GenAI tooling and management processes.

Toxic Language Detection is important to ensure inappropriate content isn't produced.

2) Bias Correction

Humans are often biased in terms of how they make decisions. It shouldn't be surprising then that the GenAI models we build can also be biased.

Foundational GenAI models are built using large amounts of data. That data can include hidden biases because of how the humans selecting that data were unknowingly biased in their data selection and curation. This can make the content generated by these GenAI solutions inherently biased. That bias can take many forms, from bias on individual human characteristics (male/ female, young/ old, etc.) to bias on more esoteric questions like what is the best car, business, etc. Bias detection involves testing the Foundational GenAI model outputs to ensure they behave the same across individuals or groups with different characteristics before you put them into production.

In order to prevent bias, organisations need to ensure that their GenAI programs put tools and processes in place to detect when bias has occurred and to correct for it in their GenAI outputs.


Biased data used to build the models can lead to biased answers.

3) Explainability

GenAI can have a variety of use cases. From simple Question and Answer type scenarios to generating summarised content to simple reasoning. GenAI models are, by definition, large and complex. Thus, when asking a GenAI solution a question or asking it to make a decision, one may not know exactly how it came up with a specific answer.

This "black box" nature of GenAI can be problematic in many situations. For example, if you used a GenAI-based solution to help in making product recommendations, you would want to know why it made a specific recommendation before accepting it.

There are many ways to improve the explainability of GenAI solutions, including "Chain of Thought" prompting, where you ask the model to follow specific steps in coming up with a response or by simply asking the model to provide a list of the source content used when it provides a response.

The important point here is that an organisation's GenAI Governance needs to ensure that explainability is built into all their GenAI solutions so you understand and a confident in the decisions or responses these models make.


The black-box nature of GenAI solutions can be a problem for many use cases

4) Privacy/ PII Data Leak

In order to answer questions, generate content or make decisions, often, one may be induced to pass private or commercial in-confidence data to the GenAI solution to get the desired response. In so doing, this data can be trapped and used for other purposes, with the user being completely unaware. This is a well-known problem for public GenAI LLMs like ChatGPT (see here) and has led to many organisations attempting to ban the use of public GenAI solutions. But a full ban on such popular and productivity-enhancing tools is often difficult or near impossible. It is much better to provide your organisation with safe internal GenAI alternatives to these often free and public GenAI solutions.

Either way, organisations need to put controls in place to ensure the GenAI solutions they are using, particularly public open GenAI solutions, are not trapping their critical data.

Ensure that the use of GenAI is not passing PII or Commercial in Confidence information outside of your organisation

5) Usage Monitoring and Use Case Governance

The variety of use cases that I have seen GenAI proposed for is quite broad. One can argue that this is one of GenAI's primary benefits, that it can be used to solve many different types of problems. Some of the use cases I've seen it being proposed for are lower risk, but some have been quite high risk. These risks can be financial in nature, potentially impact the organisation's brand, but can also, in some cases, impact personal safety.

Recognising that the variety of use cases is quite broad and that the level of risk can vary depending on the use case has been one of the chief concerns of government regulators. For example, the recently passed EU AI regulation classifies the various use cases for GenAI into risk categories and insists on different levels of protection depending on the level of risk (see here). Some use cases for GenAI have also been banned as the level of risk is perceived to be too great.

To ensure that you understand how GenAI is being used in your organisation and what level of risk you are being exposed to, you need to put controls in place that ensure a complete listing of all the ways GenAI is being used and that the higher risk use cases are approved before being put into production.

You will also need to understand how your organisation is using GenAI to ensure that you are compliant with government regulations, which will vary depending on the use case and will change over time.

So a complete up-to-date inventory of all the ways your organisation is using GenAI is critical for good GenAI Governance.

Generative AI Usage Monitoring will be critical to achieving Regulatory Compliance

6) Model Quality Health Evaluation

As previously mentioned, one of the chief benefits of using Foundational Generative AI is the fact that you don't have to build the models yourself, but you can use models built by other organisations. The volume and variety of models on offer are increasing by the day, with literally hundreds of thousands of model options currently available. What is particularly exciting is the advent of vertically specific and use case category-specific models. For example, Bloomberg has recently released a Financial Services-specific GenAI model (see here).

In addition, all these GenAI models have meta-parameters that can be tuned to control how the GenAI model behaves. Everything from how creative a model is to how long its responses are can be controlled.

It is also important to understand that all of the models available have different End User Licensing Agreements and different levels of indemnification on the use of the LLM on the part of the model builders. Proprietary models, which are offered on a charge-for-use basis, tend to have higher levels of indemnification than those that are offered open source with essentially no warranty. As I've previously said, the marketplace for GenAI models is largely self-regulated.

This cornucopia of choice, variability in terms of model quality and flexibility in terms of how the models are used means that organisations need to have good tools in place to ensure that they are picking the right model and using it in the most effective way. As a consequence, tools and Governance processes to ensure the quality of the model responses are another critical component of Good GenAI Governance.

7) Drift Detection and Management

Even when good Governance processes are in place for model selection, what was working when you first put it into production can stop working. This is because how the models are used over time can change. You may have optimised your GenAI solution for a certain set of use cases, but then your users change how they use the model. Alternatively, the data you pass the models to help them address a use case can change because of some related upstream changes in your organisation's systems that some team has made, not understanding the impact on your GenAI models.

As a consequence, just like in traditional AI, organisations need to have tools and processes in place to monitor ongoing model performance and make adjustments where necessary. Unlike traditional AI, this Drift Detection and Management is less costly because you have fewer models because of the build once used many times capability that Foundation models provide. So you will have fewer models to manage, but you will still need to manage them! So Drift Detection and Management continues to be a critical component of AI Governance and GenAI Governance.

8) Malicious Prompts Robustness

Generative AI, like any other technology, can be exposed to cyberattacks. I once saw a "white hat" (aka a good hacker... people who hack to help improve cybersecurity ), as part of a cybersecurity presentation, use malicious prompting to induce ChatGPT to spread malicious code just to demonstrate how easy it was. He immediately reported the flaw to OpenAI (the owners of ChatGPT) so they could fix it. But the point is he was able to achieve the hack in a matter of minutes/ hours, which illustrates how easy it can be. There are even games on the internet, like Gandolf, that allow you to test your skills in GenAI hacking (see here).

The overall point is that GenAI, like any other type of technology, needs active involvement from your Cybersecurity Teams with tools and processes put into place to ensure that it is robust against cybersecurity attacks.

Malicious Prompt Injection is something that all GenAI users will need to guard against.

9) Model Inventory/ Facts/ Report Cards, Experimentation Tracking and Model Lifecycle Tracking - LLM Ops

These last three components of good GenAI Governance are needed as much for efficiency as they are for risk management. As your use cases of GenAI start to increase and you deploy more GenAI solutions into production, you are going to want to maintain detailed catalogues of the models you've used and for which use cases, the experiments you've conducted with the GenAI models and where the models are in terms of their deployment lifecycle. This helps you with your regulatory compliance (reference point #5 above) but also makes you more efficient as an organisation in learning what has worked and what hasn't worked. That way, as time goes on, your GenAI deployments will become faster. Therefore, keeping a good model inventory, tracking your model experiments and managing your GenAI model lifecycle are all components of good GenAI Governance.

All the major GenAI vendors have started building tools to support these key functions. For example, here is a good blog article on the tools that Microsoft has built around GenAI model management and lifecycle tracking (see here). AWS also has their tools (see here), and of course, IBM has their own tools (see here).

Conclusions

So, what I have tried to do with this article is explain what GenAI is, talk about how it is different from traditional AI, identify that there are new significant risks associated with the use of GenAI, and provide a simple list of what I think are the components of Good Gen AI governance program. The intent was to provide the senior business leader with a simple checklist of questions they should be asking their technology leaders about how they are managing their GenAI programs of work. With the new accountabilities that regulation is placing on business leaders and as the usage of GenAi continues to grow, it is a senior business executives' own best interests to ensure they have good Governance processes and procedures in place to help them use GenAI safely.

Hopefully, you have found this article useful, and when you head back into the office from your Christmas break, you'll start to ask some of these important questions about your GenAI programs! :)


Dr David Goad is the CTO and Head of Advisory for IBM Consulting Australia and New Zealand. He is also Microsoft Regional Director. David is frequently asked to speak at conferences on the topics of Generative AI, AI, IoT, Cloud and Robotic Process Automation. He teaches courses in Digital Strategy and Digital Transformation at a number of universities. David can be reached at david.goad@ibm.com if you have questions about this article.


Rory McManus

Owner @Data Mastery | Need help with Data?

1y

Great post Dave!

Like
Reply
Brian Geach

Photographer/writer/co. director at Polperro Picture Productions Pty Ltd

1y

👍 Great article Dave

To view or add a comment, sign in

More articles by Dr Dave G.

Explore topics