AI & Equity

Taking Action to Counter AI Bias

Last year, in a newsletter called AI & Us, we shared how iFsters were using AI. At that time, many iFsters were just beginning to use AI for basic tasks like content summarization, meeting transcription, and idea generation. Playing around with these tools can be fun, but we’re seeing a shift beyond experimentation toward the standardization of AI use in the workplace. Now that a year has passed, AI tools have become more embedded in our work at iF. As we continue to deepen our engagement with AI, a core question has come to the forefront:

At first glance, AI tools, especially chatbots, can appear to offer objective answers. But humans aren’t perfect, and neither is AI. As iFster Alison Gazarek explained in a recent insight, AI is only as good as its human-created inputs. Most large language models (LLMs) are trained on data that is widely available on the internet, and that data generally reinforces the stereotypes and biases present in the humans and systems that make up society. It’s also overwhelmingly anglocentric, with roughly 55% of websites online in English. AI tools have likewise been shown to perpetuate disability bias, which is concerning in every field but especially so in healthcare and hiring.

While AI tools can offer clear, simple answers, those answers can also be rife with many kinds of bias and error. We’ve seen examples of bias in AI in everything from AI-generated images to policing tools. Other organizations have overcorrected for equity concerns, as with Google’s Gemini rollout, where an effort to mitigate bias went south. The information might also be plain wrong: despite delivering answers in a confident tone, AI tools are not always right, a phenomenon sometimes called “AI hallucination.”

AI is changing rapidly, which makes it challenging to become an expert. But we can cultivate ongoing awareness and build processes that keep AI use in an organization equitable and consistent. Since we aren’t creating the tools we use ourselves, we can’t always control the results they return. What we can control is how we write prompts, which tools we use, and how we modify the results we get. These efforts can combat user bias, one of the ways bias shows up in AI.

  • When writing prompts, provide starter examples you trust for the tool to work from. Stick to facts, simple requests, or research-based inquiries, or be ready to edit heavily. AI can be immensely helpful, but navigating its strengths and weaknesses is a learned process. Check out examples from Montana State University.
  • Exercise caution when using AI tools to create images. Think critically about what you are attempting to represent, and be on the lookout for results that reinforce stereotypes or problematic tropes. Ask for edits until the picture looks the way you want. If the tool struggles to produce something that meets your standards, consider working with an artist or illustrator with whom you can discuss the work in depth. The example below shows what ChatGPT-4o generated when asked for an image of a doctor versus a nurse. While there are certainly doctors and nurses who look like the people in those images, the pictures ChatGPT provided reinforce the stereotype that doctors are always men and nurses are always women. Additionally, often both are people of color.

  • If a tool is answering a question for you, ask it for sources. AI tools instantly pull information from a variety of sources and often don’t list them unprompted. Go the extra step: ask where the information came from, follow up with those sources, confirm that they actually contain the information the tool attributed to them, and set aside anything from sources you don’t deem credible. If you don’t have time to check the sources behind AI-generated information, consider getting the information another way.
  • If it’s clear who the authors of the work you’re using are, cite them. Either way, never represent work from AI as your own. Be transparent about your sources and about when you’re using AI, and take time to credit the creators or authors of the work you use, even if the AI tool doesn’t.
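For iFsters who script their AI workflows, the habits above can be folded into a small, reusable prompt template. The Python sketch below is purely illustrative: the helper function, its name, and the template wording are our own assumptions for demonstration, not part of any particular tool’s API.

```python
# Illustrative sketch only: a hypothetical helper that bakes two of the
# checklist habits into every prompt, supplying trusted starter examples
# and always asking the tool to list its sources for verification.

def build_prompt(question, trusted_examples=None):
    """Assemble a chatbot prompt from a question, optional trusted
    examples to work from, and a standing request for sources."""
    parts = [question.strip()]
    if trusted_examples:
        parts.append("Base your answer on these trusted examples:")
        parts.extend(f"- {example}" for example in trusted_examples)
    parts.append("List the sources you drew on so I can verify them.")
    return "\n".join(parts)

prompt = build_prompt(
    "Summarize best practices for inclusive hiring language.",
    trusted_examples=["Our internal style guide", "EEOC guidance"],
)
print(prompt)
```

A template like this makes the verification step routine rather than optional: every response arrives with a source list you can follow up on and vet before reusing the content.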

At iF, we pride ourselves on our awareness and openness to new technologies. In the past year, we’ve seen more iFsters adopt AI tools to increase their productivity: 40% of iFsters use AI tools for content and idea generation, and 30% use them for meeting transcription and note-taking. We’re also seeing iFsters use AI to create images, edit, and assist with deck research. iFsters have shared:

It can be intimidating to set expectations for using a technology that is changing so quickly. But since change isn’t going anywhere, our best and simplest solution is to stay vigilant for bias and errors, and to be willing to spend extra time correcting and modifying AI output. As we move forward, these efforts will help us set the right tone and boundaries for our AI use, so that it remains a helpful augmentation to our work and not a tool that controls us.

As a couple of places to start, check out Perplexity, which automatically cites its sources, and Latimer, an AI tool trained on diverse data. For further reading on using AI equitably, see the guidance from the U.S. Department of Labor.


Happenings

Other things we’re working on include:

Christian and Debi co-facilitated an all-day session with Mayor Bruce Harrell and his leadership team at the City of Seattle. The day was spent workshopping goals to improve organizational adaptability, collaboration, and accountability across departments.

All of us at iF recently hosted a big bash to celebrate the re-opening of our office and opportunities to connect with our community. Stay tuned for our 15-year anniversary party in April.

In early 2024, iF created its Senior Advisors program to widen its circle of like-minded people wanting to catalyze positive change with iF. We share a common vision of a healthy planet on which business and society prosper together and mission of turning good intentions into positive impact. Individually, our eight Senior Advisors are change agents who have made significant contributions in business and civil society, and we’re grateful for their wisdom and the opportunities they have brought to iF to create even greater impact. Thank you!


Read, Watch, & Listen

Alison Gazarek, Director, Education Practice — “Starting With The State” by Tim Hanstad

I love this provocative article by Tim Hanstad of the Chandler Foundation about the intersection of international philanthropy and governments, and the potential role of philanthropy in advancing societal good through strategic investments made in parallel with, and in support of, “good governance.”

Peter Arbaugh, Technical Project Manager — “Zadie Smith on Populists, Frauds and Flip Phones” by Ezra Klein

I really enjoyed Ezra Klein’s recent interview with the novelist Zadie Smith on his podcast, The Ezra Klein Show. As someone who thinks a lot about technology and is frequently immersed in it, I found her decision not to use a smartphone or social media fascinating. She talks about how she thinks she would have been changed by exposure to the constant flow of opinions, and it raises a question we sometimes don’t consider enough at a personal, individual level: How has constant exposure to other people’s opinions changed us?

Jen Cupp, Advocacy and Communications Strategist — “Evolving from Violent Language” by Anna Taylor

As a communications strategist, I spend much of my time picking the right words to use. So this LinkedIn post on evolving past violent language really caught my eye. So many of the phrases we use in corporate America are rooted in violent terminology — I love that the author provided some practical alternatives for gentler language that has the dual benefit of being clearer, especially if you aren’t familiar with corporate jargon.


Work & Insights

“Writing the Story of Teacher Success” — Read about our work with EdLight

“System Mapping for a Safer Tomorrow” — Read about our work with the Los Angeles County Office of Violence Prevention

Thank You

To our clients, former and current; our fans; our colleagues; and our friends, we thank you for your continued support. Reach out to us anytime at info@intentionalfutures.com.
