Check Your Ethics (To Your Business Outcomes)

At the end of the day, responsible AI is just good AI practice.

That’s what Olivia Gambelin, founder and CEO of Ethical Intelligence, told me recently. A trained ethicist, Olivia is also the author of a forthcoming book, Responsible AI: Implement an Ethical Approach in Your Organization. She is, in short, the perfect person to talk to about responsible AI, and I was thrilled to have her as a guest on my livestream.  

Olivia’s expertise spans philosophy, ethics, and technology. She serves as an advisor to executive teams as they build out and implement responsible AI strategies, and she leads interactive training workshops focused on using ethics for innovation and strategy. 

As an AI ethicist and a responsible AI strategist, Olivia believes responsible AI should be embedded in everything you do as an organization. But where do you begin?

Here, she shares an overview of how to ethically and responsibly approach AI. (Note: The following livestream excerpts of Olivia’s comments were lightly edited for clarity and brevity.)

Slow Down To Speed Up

Here, Olivia shares her advice on how organizations can begin to set up their ethical frameworks.

I prefer to start with the “Responsible AI” side of things, which sets up the right structures, policies, and processes to use AI. By putting the correct mechanisms in place to enable “Ethical AI” decision-making, you’re slowing down to speed up.

It might be tempting to start from an ethics perspective first, focusing on the actual AI itself and making brilliant decisions with strong value alignment. But if you do that, you may not have the right protocols to execute on those decisions. Having responsible AI practices in place opens up the time and space to make good ethical decisions. 

Starting with Strategy: People, Process, and Technology  

Responsible AI strategy has three layers: people, process, and technology. To get started, leaders should consider the first step in each layer.

  • For people, begin with education. This can be as simple as a few Coursera courses or a podcast, or as complex as rolling out a new training requirement. It depends on your needs and can be flexible.

  • For process, begin with intent. Here, we’re talking about policies. You’ll want to either update current ethical tech policies or create new ones that are specifically ethics-focused. This sets the intention that the rest of your frameworks—your governance—can be built on. 

  • For technology, begin with data ethics. This means standardizing your data practices. It sounds simple, but it’s surprising how chaotically most data practices are handled! Get good practices in place so you can unlock AI’s power. (A minimal sketch of one such check follows this list.)
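
To make “standardized data practices” concrete, here is a minimal sketch (my illustration, not part of Olivia’s framework) of one way to gate dataset use on basic provenance metadata. The required fields, the DatasetCard structure, and the approve_for_training check are all hypothetical; substitute whatever your own data governance policy actually requires.

```python
from dataclasses import dataclass, field

# Hypothetical provenance fields; adjust to your own data governance policy.
REQUIRED_FIELDS = {"source", "collection_date", "consent_basis", "owner"}

@dataclass
class DatasetCard:
    """A lightweight record describing where a dataset came from."""
    name: str
    metadata: dict = field(default_factory=dict)

    def missing_fields(self):
        # Return any required provenance fields that are absent or empty.
        return {f for f in REQUIRED_FIELDS if not self.metadata.get(f)}

def approve_for_training(card: DatasetCard) -> bool:
    """Block model training on datasets with incomplete provenance."""
    missing = card.missing_fields()
    if missing:
        print(f"{card.name}: blocked, missing {sorted(missing)}")
        return False
    print(f"{card.name}: approved")
    return True

# One compliant dataset, one that gets flagged.
approve_for_training(DatasetCard("support_tickets", {
    "source": "internal CRM export",
    "collection_date": "2024-01-15",
    "consent_basis": "customer agreement",
    "owner": "data-platform team",
}))
approve_for_training(DatasetCard("scraped_reviews", {"source": "web scrape"}))
```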

Protecting and Aligning Your Values: Taking An Ethical Approach to AI

Next, Olivia shared two ways that companies can use ethics.  

Think of ethics as a tool, one that is two-sided: You can use ethics for risk mitigation, and you can use ethics for innovation. 

When using ethics in risk mitigation, you’re thinking about what you need to safeguard. You’re asking questions like, “How do I protect my company’s values?” 

  • For example, if your organization values privacy, you’ll want to examine regulations like GDPR or the CCPA and ensure you are compliant with them. You’ll want to protect against data breaches and so on. 

The other side of ethics is innovation. Instead of thinking about what you want to protect, you’re thinking about how the innovation aligns with your values. 

  • Again, if we take privacy as an example, you’ll want to align your decisions with privacy, asking questions like, “Do we need to operate on an opt-in or an opt-out functionality?” or “What’s going to make our users feel safer and more protected?”

When it comes to practicing ethics—at least on the technology side—you ideally want to protect and align. The combination of both is the most transformative. 

AI Ethics Value Sets 

Olivia shared her thoughts on how organizations can determine and prioritize their values. 

AI ethics offers three layers to examine. It’s in the overlap of these layers that companies can determine the core values to build from:

1. Legal Regulations 

We still lack many of these regulations, though more are under development. Consider what regulations exist in your industry, or in areas where AI is already regulated (like fairness practices). That’s a value you want to bring in.

2. Industry Regulations 

Healthcare is a great example: it has its own field of medical ethics. If you are using or developing AI in healthcare, that’s a value set you need to follow. Consider your industry standards, and let those inform your AI values and practices.

3. Company Values

What are the values your company centers on? What values do you have on the wall? Every company should have values that align with its objectives.

In a perfect world, your value sets overlap, and you can cross-reference them to determine your core values. It helps with decision-making when you can have those different inputs all in one place. But that doesn’t always happen!

What do you do if you’re faced with a decision between two conflicting values? (Consider privacy versus transparency, for example.) There’s a name for this: moral overload. It’s where many companies get stuck.

Creating a two-tiered value set gives you a hierarchy to prioritize by. Your primary values, the ones you see across all of your value inputs, come first; those must always be reflected. Then write your secondary set of values. That way, when your primary values conflict, you can use the secondary values to guide your alignment in key decision-making.
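
To illustrate the two-tiered idea, here is a minimal sketch (my own, not Olivia’s wording, and with purely hypothetical value names) of a value hierarchy as a simple data structure: design options are compared on how many primary values they satisfy, and secondary values only break ties.

```python
# Hypothetical value names; in practice these come from your own value inputs.
PRIMARY_VALUES = ["privacy", "fairness", "transparency"]
SECONDARY_VALUES = ["user autonomy", "explainability", "auditability"]

def rank_options(options):
    """Rank design options by primary values satisfied; secondary values
    only break ties between options that score equally on primaries."""
    def score(values_satisfied):
        primary = sum(v in values_satisfied for v in PRIMARY_VALUES)
        secondary = sum(v in values_satisfied for v in SECONDARY_VALUES)
        return (primary, secondary)
    return sorted(options, key=lambda name: score(options[name]), reverse=True)

# Example: privacy and transparency pull a design in different directions.
options = {
    "opt-in sharing with plain-language notice":
        {"privacy", "transparency", "user autonomy"},
    "opt-out sharing with detailed audit logs":
        {"transparency", "auditability"},
}
print(rank_options(options))  # the opt-in design ranks first
```

In this toy example, both designs satisfy transparency, so the tie is broken at the primary tier by privacy, and the opt-in design wins.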

Where The Rubber Meets The Road

Olivia shared her suggestions on how to get started implementing a Responsible and Ethical approach to using generative AI in organizations. 

When implementing your new ethical and responsible AI framework, consider two approaches:

Action-Based Framework: Use this kind of framework when a specific decision needs to be made based on your values. Think of it as an additional box for your teams to check. This approach is easily adopted and is ideally designed around your existing policies and workflows (see the sketch after these two frameworks).

Decision-Based Framework: Leadership teams and executives often use this framework. It requires the flexibility to assess a situation against your set of values and work through them. To train that, you need to get people to a point where they can think from a strategic standpoint rather than an emotional one. You want people making decisions based less on gut feelings (important in some cases, but not here) and more on critical analysis.
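
As a concrete example of the action-based side, here is a minimal sketch of what that “additional box to check” might look like when folded into an existing release workflow. The checklist items and function name are assumptions for illustration, not a prescribed template; in practice they would be derived from your primary values and existing policies.

```python
# Hypothetical checklist items; derive yours from your primary values.
RELEASE_CHECKLIST = [
    "Data sources documented and consent basis confirmed",
    "Privacy review completed (GDPR/CCPA items closed)",
    "Model outputs tested against fairness criteria",
    "User-facing opt-in/opt-out behavior verified",
]

def readiness_report(completed):
    """Print which ethics checks are done and which still block release."""
    open_items = [item for item in RELEASE_CHECKLIST if item not in completed]
    for item in RELEASE_CHECKLIST:
        status = "DONE" if item in completed else "OPEN"
        print(f"[{status}] {item}")
    return not open_items  # True only when every box is checked

readiness_report({
    "Data sources documented and consent basis confirmed",
    "Privacy review completed (GDPR/CCPA items closed)",
})
```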

One final point: You want to circle back to your ethical and responsible AI frameworks regularly. Given the speed at which technology is developing, you need to go back and ask yourself whether what you’re doing is working. Has it helped you achieve the impact you wanted? Are you aligned with this framework, or is some type of adjustment needed? 

Extend your feedback loop by doing at least a yearly checkup of your framework. There's a lot of trial and error to start, but remember, this is a living, breathing document.  

Your Turn

How do you approach embedding ethics in your AI strategy? What actionable steps will you take from Olivia’s advice?

