How To Solve for 3 Common Ethical Issues in AI Models

Over the past three years, AI spending projections for 2024 have doubled to over $200 billion. Organizations, governments and individuals are racing to explore the possibilities, unleashing a flood of ethical issues with AI and raising questions about how we can build inclusive, trustworthy models that serve all users and customers.

Solving this challenge brings three design priorities to the fore: mitigating bias, promoting explainability and approaching AI as a multiplier of human performance, not a replacement for it. Together, they flip the script on AI-human relationships. Training AI in this way requires us to check our biases, encode our values and ultimately perform at a higher level.

Inclusivity As the Foundation of Ethical and Effective AI

False imprisonment. Federal Trade Commission bans. Housing bias. These are examples of noninclusive AI applications failing their intended purposes and inflicting harm on individuals, businesses and society.

Inclusivity creates better outcomes for all players, including businesses. By definition, it requires awareness of how our models directly and indirectly affect everyone involved. Given the rapidly changing dynamics of these issues, the path to inclusive AI is one guided by continuous learning and adaptation. It begins with three important stepping stones of understanding.

1. Bias: How It Causes, Perpetuates and Exacerbates Ethical Issues With AI 

AI models become biased because they’re trained on datasets and processes that humans create—and humans carry societal and personal subjectivity into everything we do.

  • Biased datasets: Using personal judgment, we decide on the goals, scope, questions, subjects, parameters and means of collecting data. Each decision can limit the completeness and representativeness of a dataset—i.e., its ability to accurately represent reality—often by omission or assumption. Those assumptions and gaps represent biases that then inform algorithmic decisions.

For example, after training on 10 years of resume submissions, mostly from men, Amazon’s recruiting algorithm learned to score men overwhelmingly higher than women, even disqualifying resumes mentioning “women.”

  • Algorithmic bias: Humans also define how an algorithm applies data to its decision-making process: what’s relevant, what to exclude, what to favor in certain conditions, when to reward or penalize a recommendation based on feedback, how success is defined, and more. As a result, systemic, repeating errors that produce unfair outcomes can form and grow.

Say you create a compensation-recommendation algorithm based on performance that weighs everyone’s performance data equally. A younger person without childcare responsibilities may work longer hours and post higher output than an older person with kids. The algorithm may then learn to associate lower age with over-performance and higher age with underperformance, a correlation that reflects circumstance, not merit.
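
To make the age example concrete, here is a minimal sketch of a disparity check you could run on a model’s outputs. The data and column names (age, recommended_raise_pct) are purely illustrative assumptions, not output from any real system.

    import pandas as pd

    # Hypothetical model outputs: each row is an employee and the raise the model recommended.
    # Column names and values are illustrative assumptions only.
    df = pd.DataFrame({
        "age": [26, 29, 31, 34, 42, 47, 51, 55],
        "recommended_raise_pct": [6.5, 6.0, 5.8, 5.5, 4.2, 4.0, 3.8, 3.5],
    })

    # Bucket employees into age groups and compare the average recommendation per group.
    df["age_group"] = pd.cut(df["age"], bins=[0, 35, 50, 120], labels=["<35", "35-50", ">50"])
    by_group = df.groupby("age_group", observed=True)["recommended_raise_pct"].mean()
    print(by_group)

    # A simple disparity ratio: values well below 1.0 (e.g., under the common 0.8
    # "four-fifths" rule of thumb) flag a gap worth investigating.
    disparity_ratio = by_group.min() / by_group.max()
    print(f"Disparity ratio across age groups: {disparity_ratio:.2f}")

A low ratio doesn’t prove the model is unfair on its own, but it tells you which features and feedback loops to examine next.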

The consequences are most severe when biased results are reincorporated into the training model or added to new training datasets—thus perpetuating and amplifying the bias. 

While you can’t eliminate bias entirely, you can follow these strategies to reduce it:

  • Develop a strategic perspective at the outset. You can’t completely remove bias after it’s ingrained, so act early. Define the purpose of what you’re building, how it will be used, what data you’ll train it on and why. 
  • Be willing to recognize bias. Seek a diversity of opinion. Invite stakeholders and domain experts to weigh in.
  • Partition your data into separate subsets targeting each area of bias, then train and test performance for each one (see the sketch after this list). Other techniques include “cleaning” your data of bias, weighting it differently or training the model to remove points of bias.
  • Continuously evaluate the results. How are they impacting various parties? Ensure that any results going back into the training model are providing equitable outcomes and monitor for emergent biases.
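
Here is one minimal sketch of that partition-and-reweight idea, using scikit-learn on a synthetic dataset. Everything in it is an assumption for illustration: the feature names, the two groups, the 80/20 imbalance and the choice of logistic regression. The pattern is what matters: give the underrepresented slice equal weight during training and report performance per slice instead of one blended score.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic, illustrative data: "group" marks a demographic slice to audit.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "experience_years": rng.normal(8, 3, n).clip(0),
        "skills_score": rng.normal(70, 10, n),
        "group": rng.choice(["A", "B"], n, p=[0.8, 0.2]),  # group B is underrepresented
    })
    df["hired"] = ((df["experience_years"] * 2 + df["skills_score"]) > 85).astype(int)

    X = df[["experience_years", "skills_score"]]
    y = df["hired"]
    X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
        X, y, df["group"], test_size=0.3, random_state=0
    )

    # Reweight training rows so the underrepresented group contributes equally.
    weights = g_train.map(1.0 / g_train.value_counts(normalize=True))
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=weights)

    # Evaluate each partition separately rather than relying on one overall score.
    for group in ["A", "B"]:
        mask = g_test == group
        acc = accuracy_score(y_test[mask], model.predict(X_test[mask]))
        print(f"Group {group}: accuracy={acc:.2f}, n={mask.sum()}")

If the per-group numbers diverge, that gap, not the overall accuracy, is the signal to act on.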

At my company, Paro, we wanted to proactively solve for potential biases in our freelancer-to-client matching algorithm so that experts new to our network would not be passed over in favor of experts with an established track record. To mitigate this bias, our model suggests both seasoned and new experts, and clients choose from the mix without knowing who is new. Selected new candidates are then promoted in future recommendations, ensuring equal exposure opportunities based on client-chosen merit rather than arbitrary factors.
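
To illustrate the mixing idea in general terms, here is a simplified sketch, not our production matching algorithm: reserve a share of recommendation slots for new experts, then shuffle so the client sees one unlabeled list and chooses purely on fit.

    import random

    def recommend(seasoned, newcomers, slots=5, newcomer_share=0.4, seed=None):
        """Blend seasoned and new experts into one shuffled, unlabeled list."""
        rng = random.Random(seed)
        n_new = min(len(newcomers), round(slots * newcomer_share))
        n_seasoned = min(len(seasoned), slots - n_new)
        picks = rng.sample(seasoned, n_seasoned) + rng.sample(newcomers, n_new)
        rng.shuffle(picks)  # the client never sees who is new and who is seasoned
        return picks

    # Hypothetical names for illustration only.
    seasoned = ["Avery", "Blake", "Casey", "Drew"]
    newcomers = ["Emery", "Finley"]
    print(recommend(seasoned, newcomers, slots=5, seed=42))

Because the list is unlabeled, selection feedback reflects fit rather than tenure, and newcomers who are chosen build the track record that future recommendations reward.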

But trust in AI is not just about model outputs. Stakeholders, including employees, end users and your board of directors, will still have questions about how AI is used and what it could mean for their role in the business. How can you build transparency and buy-in while mitigating fears around AI adoption? Learn how to solve the next two challenges in the full article.

About the Author

Saum Mathur has 29 years of experience in driving business growth by leveraging leading-edge business concepts, advanced analytics and technologies. He is a progressive and results-oriented executive who thrives on challenges and opportunities. As the Chief Operating Officer at Paro, a platform that connects businesses with freelance finance and accounting experts, he leads the marketing, sales, product management, technology and revenue operations for the company, with the mandate to grow the company efficiently and sustainably.

#artificialintelligence #futureoffinance #ai #finance
