Fairness in AI — A Short Introduction for Everyone

Written by Hsin-Hao Yu, PhD.

The term “fairness” is typically understood as a complicated and subjective topic, discussed and debated in philosophy and social science classrooms, in newspaper editorials, and in the chambers of parliament. Most of the time, we don’t try to express the meaning of fairness in precise terms. Is it possible to explain it to a computer?

The advance of artificial intelligence (AI) has turned that rhetorical question into a practical and urgent one. Automated systems are now routinely used to make decisions that affect our lives. Most of these systems are trained on data, which means that the decisions they make (or recommend) are not determined by a set of pre-defined rules or principles; they are based on the statistics of the data. If some patterns in a dataset are the consequences of unfairness in the real world, an AI system can exploit them and perpetuate that unfairness. In fact, AI systems deployed at large scale are likely to amplify unfairness, because it is much harder for a non-specialist to understand where the unfairness comes from and how to guard against it.

The goal of this article is to explain, without mathematical formulas, some of the most basic concepts of AI fairness, in a simple scenario where an AI model has to make a yes or no decision. It doesn’t cover all the fairness issues that we would like to address in AI, but it should provide a good starting point.

Being blind to demography does not ensure fairness

Some demographic attributes are considered sensitive: gender, race, age, and so on. To avoid biases against groups identified by these attributes, it is not uncommon for designers of AI systems to remove them from the datasets used to train models. Some practitioners go so far as to believe that as long as the AI does not have access to the gender attribute, it cannot possibly be unfair to women.

If only fairness could be achieved with such a simple strategy! The most obvious problem is that in big datasets, even if the sensitive attributes are removed, non-sensitive attributes can still provide information about the censored sensitive ones. For example, a person’s last name says a lot about her cultural background. We can remove that as well, but many attributes correlated with the sensitive ones are harder to detect.
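
For readers who would like to see this in action, here is a minimal sketch in Python (it assumes scikit-learn is available, and the dataset, feature names, and numbers are all invented for illustration). It shows that even after a sensitive attribute is removed, a simple model can often recover it from the features that remain.

```python
# A minimal sketch (not from the article) showing how a "removed" sensitive
# attribute can still be recovered from the remaining features.
# All feature names and numbers below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical sensitive attribute (0 or 1), never shown to the loan model.
group = rng.integers(0, 2, size=n)

# Non-sensitive features that happen to correlate with the group,
# e.g. postcode- or surname-derived features in a real dataset.
postcode_income = rng.normal(50 + 15 * group, 10, size=n)
years_at_address = rng.normal(8 - 3 * group, 4, size=n)
X = np.column_stack([postcode_income, years_at_address])

X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

# Even without the sensitive column, a simple model can often guess it.
clf = LogisticRegression().fit(X_train, g_train)
print(f"Accuracy at guessing the 'removed' attribute: {clf.score(X_test, g_test):.2f}")
```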

The problem goes even deeper. Because real-world data are usually biased, being fair may actually require the sensitive attributes, precisely in order to account for those biases! This might sound like a paradox, so let’s consider a simple example in which a bank makes loan decisions based only on the applicant’s credit score. Being blind to the race of the applicant means that the decision must be based on a common threshold applied to everyone’s credit score. Researchers have shown that in this scenario, the optimal threshold can be so high that it is difficult for an underprivileged minority to receive loans [1].
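
The following is a minimal sketch of that single-threshold scenario. The score distributions and the threshold are invented for illustration (they are not the numbers studied in [1]); the point is only that one “group-blind” threshold produces very different approval rates when the underlying score distributions differ.

```python
# A minimal sketch of the single, group-blind threshold described above.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical credit scores: the privileged group skews higher.
scores_privileged = rng.normal(700, 50, size=10_000)
scores_minority = rng.normal(620, 50, size=10_000)

# One "group-blind" threshold applied to everyone.
threshold = 680

rate_privileged = np.mean(scores_privileged >= threshold)
rate_minority = np.mean(scores_minority >= threshold)

print(f"Approval rate, privileged group: {rate_privileged:.1%}")
print(f"Approval rate, minority group:   {rate_minority:.1%}")
# The same threshold yields very different approval rates, simply because
# the two groups' score distributions differ.
```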

Demographic Parity is not always fair either

Another strategy is to allow sensitive information to be used in model training, but to require the AI’s decisions to be statistically the same across all demographic groups. In our loan example, if the application success rate is the same across all demographic groups, that can only mean the approval process is fair, right?

This requirement (termed Demographic Parity by researchers) seems intuitively fair, but it is fairness at the level of demographic groups. What about fairness at the level of the individual?

In the loan application scenario discussed in the last section, consider the fact that different demographic groups are not equal: underprivileged minority groups tend to have lower credit scores, while a more privileged group might have credit scores concentrated at the high end. It has been shown that for a model to achieve a certain level of expected overall outcomes, demographic parity entails that the credit score threshold used to make the loan decision must be low for the underprivileged group and high for the more privileged groups. The difference between the two thresholds can be so large that it becomes difficult for a legitimate applicant in the more privileged group to get a loan. At the level of the individual, that seems unfair.
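
Here is a minimal sketch, again with invented numbers, of how Demographic Parity forces group-specific thresholds: each group’s threshold is chosen so that the same fraction of applicants is approved.

```python
# A minimal sketch of Demographic Parity enforced with group-specific
# thresholds. Distributions and the target approval rate are invented.
import numpy as np

rng = np.random.default_rng(2)
scores = {
    "privileged": rng.normal(700, 50, size=10_000),
    "minority": rng.normal(620, 50, size=10_000),
}

target_approval_rate = 0.30  # approve the top 30% of each group

# Demographic parity: pick each group's threshold so that the same
# fraction of applicants is approved in every group.
thresholds = {
    g: np.quantile(s, 1 - target_approval_rate) for g, s in scores.items()
}

for g, t in thresholds.items():
    print(f"{g}: threshold = {t:.0f}, "
          f"approval rate = {np.mean(scores[g] >= t):.1%}")
# Both groups end up with a 30% approval rate, but the thresholds differ,
# which is exactly the individual-level tension discussed above.
```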

Equal Opportunity is a way to balance the fairness of the individual and the group

In the loan application scenario, suppose that the training dataset has an attribute indicating whether an applicant has defaulted before. Requiring the AI’s decision to be predictive of the default attribute addresses fairness at the level of the individual, because it should be hard for defaulters to get loans, regardless of demography. However, this criterion by itself can lead to unfairness at the level of the group. Because minority groups are not well represented in the dataset, the AI’s predictive accuracy can be lower for them, making it harder for non-defaulters in a minority group to get their applications approved.

To address this issue, techniques have been developed to require predictive accuracy to be equally high in all demographic groups. In this formulation, the AI model is allowed to use sensitive attributes in its decision, but it is not allowed to exploit them to make less accurate predictions for certain groups.

“Predictive accuracy” can be measured in different ways. One example is to measure the probability of an application being approved by the AI, but only among those who have not defaulted. When accuracy is measured this way, we enforce a criterion known as Equal Opportunity [2]: applicants who have not defaulted have the same chance of receiving a loan, whichever group they belong to. When this requirement is applied to the loan example, the credit score threshold for making the decision will be lower for underprivileged groups, but not so low that defaulters can easily get loans. For the privileged groups, the threshold will be higher, but not to the extent that a non-defaulter can’t get a loan.
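
The sketch below illustrates the Equal Opportunity idea with invented data, an invented default process, and an invented target rate: each group’s threshold is chosen so that non-defaulters are approved at the same rate in every group.

```python
# A minimal sketch of the Equal Opportunity idea: choose each group's
# threshold so that non-defaulters are approved at the same rate.
# The data, the default model, and the target rate are all invented.
import numpy as np

rng = np.random.default_rng(3)

def make_group(mean_score, n=10_000):
    scores = rng.normal(mean_score, 50, size=n)
    # Hypothetical ground truth: higher scores default less often.
    p_default = 1 / (1 + np.exp((scores - 600) / 40))
    defaulted = rng.random(n) < p_default
    return scores, defaulted

groups = {"privileged": make_group(700), "minority": make_group(620)}

target_rate = 0.80  # approve 80% of non-defaulters in every group

for name, (scores, defaulted) in groups.items():
    non_defaulter_scores = scores[~defaulted]
    # Threshold chosen so 80% of this group's non-defaulters clear it.
    threshold = np.quantile(non_defaulter_scores, 1 - target_rate)
    tpr = np.mean(non_defaulter_scores >= threshold)
    overall = np.mean(scores >= threshold)
    print(f"{name}: threshold = {threshold:.0f}, "
          f"non-defaulter approval = {tpr:.1%}, overall = {overall:.1%}")
# The thresholds differ between groups, but non-defaulters are treated
# the same everywhere: that is the balance Equal Opportunity aims for.
```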

Systemic injustice might be addressed by emerging techniques such as Counterfactual Fairness

The Equal Opportunity criterion is a step toward fairer AI, but there are much subtler kinds of unfairness. In the loan example, if society is systematically prejudiced in such a way that members of a certain group are scrutinized more intensely than others, the credit score itself might be biased against that group. Any AI model trained on such data would inherit that bias, and it would appear that there is absolutely nothing we can do about it with AI, since the dataset is the “ground truth” as far as the AI can see.

And indeed, it is unlikely that AI is the answer to all the injustice in the world. But the power of big data means that we can hope to do better: we might be able to infer the factors that biased the credit score, and then go on to infer what would have happened if certain facts about the applicant had been different. Counterfactual Fairness is a criterion requiring that an applicant receive the same treatment in a counterfactual scenario where the same applicant belonged to a different demographic group [3]. The word “counterfactual” acknowledges that we are making inferences about what the data would have looked like under conditions different from the “facts” recorded in the dataset.

This description sounds like science fiction. After all, if the applicant had been born a member of a different race or a different sex, wouldn’t this individual be a completely different person? How can an AI make any reliable inference about this person living in a parallel universe? The answer is that if we are willing to assume certain causal relationships in the data, it is possible to answer such what-if questions in a mathematically rigorous way. The underlying framework, called Causal Inference, cannot be explained in this brief article, but in a world that is often unfair, statements about fairness are inevitably counterfactual. This is why research in counterfactual inference is highly relevant to AI fairness.
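
To make the idea a little more concrete, here is a toy sketch of a counterfactual query. The structural model and every number in it are invented purely to illustrate the mechanics; real counterfactual inference requires carefully justified causal assumptions and a way to infer the latent quantities from data.

```python
# A toy structural causal model: group -> credit score <- latent "merit".
# Every coefficient is invented; the point is only the mechanics of a
# counterfactual: keep the individual's latent factors, change the group.
import numpy as np

rng = np.random.default_rng(4)

def credit_score(group, merit, noise):
    # Structural equation: the score depends on latent merit, on the
    # group (the assumed bias we want to reason about), and on noise.
    return 600 + 80 * merit - 40 * group + noise

# One observed applicant.
group_factual = 1                 # belongs to the disadvantaged group
merit = rng.normal(0.5, 0.1)      # latent; inferred from data in a real system
noise = rng.normal(0, 5)          # individual-specific residual

score_factual = credit_score(group_factual, merit, noise)

# Counterfactual: the same individual (same merit, same noise), but we
# intervene on the group attribute and recompute the score.
score_counterfactual = credit_score(0, merit, noise)

print(f"Factual score:        {score_factual:.0f}")
print(f"Counterfactual score: {score_counterfactual:.0f}")
# Counterfactual Fairness asks the final decision to be the same in both
# cases; here the gap comes entirely from the group term in the model.
```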

Parting Thoughts

The goal of this article is not to show that fairness is an issue that can be solved by a particular AI technique. Rather, it is to show that there are different perspectives on fairness, and which one is appropriate depends on context. The discussion can only be advanced by continuous debate and reflection. In the past, fairness was addressed primarily within philosophy, but it has evolved into a cross-disciplinary field, with contributions from AI, mathematics, statistics, psychology, economics, and many branches of social science. It is a diverse forum, and it is important for everybody to participate in the discussion.

Notes

[1] Equality of Opportunity in Supervised Learning (2016) by Hardt et al.

[2] Other ways of measuring accuracy lead to different perspectives on fairness. Equalized Odds, for example, requires the AI’s decisions to be consistent across groups for both defaulters and non-defaulters. It is a stricter embodiment of fairness, but the requirement is harder to enforce.

[3] Counterfactual Fairness (2017) by Kusner et al.


