From the course: Generative AI for Digital Marketers

AI issues, risks, and policies

- Generative AI, like any technology, comes with benefits and risks, and you need to assess and manage the good and the bad before adding AI to your marketing mix. Let's start with the data used to train the algorithm. If it spouts racist or toxic content, what are the implications for your brand? What about the proprietary company data you enter? What guarantees do you have that it won't be used to train the machine, and that it won't end up in your competitors' hands? Bias is another concern, and it comes in many forms. It could be bias in data that hasn't been trained on a diverse set of examples and adversely affects a person's life, or it could be bias built into the algorithm itself. You'll want to figure out how to reduce and mitigate biases, or you risk making the wrong decision and harming your customers or your brand. Some other issues to consider are privacy, including protecting your own data as well as the data your customers and staff share with you. Then there's transparency, and the way you'll disclose how you're using AI. Explainability is another consideration: communicating why a machine made the decision it did when a customer is unhappy with the outcome or simply wants to know. Then there's the issue of copyright and ownership, especially if your creative work resembles an artist's or a designer's. And because generative AI tends to make things up, you'll want to set up guardrails to ensure your content is factual and you don't spread misinformation or lies. That's why it's essential to develop enterprise-wide policies to frame the ways you'll be using AI. Bring your company's legal, data science, IT, finance, operations, marketing, and communications teams together to establish those guidelines, and prepare a crisis response strategy that puts people ahead of machines. The National Institute of Standards and Technology, or NIST, has a four-step risk management framework you can use to get started: map, measure, manage, and govern. You'll also want to think about the wider consequences AI might have for your company and society at large, and how you can use it in an ethical and responsible manner. Figure out where you'll draw the line and what checks and balances you'll need, and in every situation, make sure a human has the final say.
