Expert Insight from our CEO on how our mental shortcuts affect our hiring decisions and how AI could influence diversity and inclusion.
Firstly, let’s go over some of the main biases that we have as humans:
- Confirmation bias: The tendency to seek, interpret, and favor information that confirms one's preexisting beliefs or hypotheses, and ignore or discount information that contradicts them. For example, a hiring manager may focus on positive aspects of a candidate that match their expectations and overlook negative aspects that do not. This can lead to hiring decisions based on subjective opinions rather than objective facts.
To mitigate this bias, AI can provide data-driven assessments of candidates' skills, competencies, and fit, and highlight any discrepancies or inconsistencies between the AI's evaluation and the human's perception. AI can also prompt the human to consider alternative perspectives or evidence that challenge their assumptions.
- Halo effect: The tendency to generalize a positive impression of a person or thing based on one favourable attribute or aspect, and overlook or minimize any negative or unfavourable ones. For example, a hiring manager may be impressed by a candidate's prestigious education or work experience, and assume that they are also competent, intelligent, and reliable in other areas. This can lead to overestimating the candidate's suitability or potential for the role, and ignoring any weaknesses or gaps.
To mitigate this bias, AI can provide a comprehensive and balanced overview of the candidate's strengths and areas for improvement, and compare them with the requirements and expectations of the role. AI can also use multiple sources and methods of data collection and analysis to reduce the influence of any single factor or indicator.
- Horn effect: The opposite of the halo effect, the tendency to generalize a negative impression of a person or thing based on one unfavourable attribute or aspect, and overlook or minimize any positive or favourable ones. For example, a hiring manager may be put off by a candidate's appearance, accent, or mannerism, and assume that they are also incompetent, unintelligent, or unreliable in other areas. This can lead to underestimating the candidate's suitability or potential for the role, and ignoring any strengths or achievements.
To mitigate this bias, AI can use multiple sources and methods of data collection and analysis to reduce the influence of any single factor or indicator.
- Similarity bias: The tendency to prefer, favour, or relate to people who are similar to oneself in terms of identity, background, or characteristics, such as gender, race, ethnicity, age, disability, sexual orientation, religion, or socio-economic status. For example, a hiring manager may be more inclined to hire a candidate who shares their values, beliefs, hobbies, or preferences, and perceive them as more competent, trustworthy, or likable. This can lead to homogeneity and lack of diversity in the workforce, and discrimination or exclusion of qualified and talented candidates from different groups or backgrounds.
To mitigate this bias, AI can provide objective and standardized assessments of candidates' skills, competencies, and fit, and ensure that they are evaluated based on relevant and job-related criteria. AI can also promote diversity and inclusion by highlighting the benefits and value of having a diverse and representative workforce, and providing suggestions or recommendations on how to attract, engage, and retain candidates from different groups or backgrounds.
- Contrast effect: The tendency to compare and contrast candidates with each other rather than with the established standards or criteria of the role. For example, a hiring manager may be influenced by the order, sequence, or quality of the candidates they interview, and rate them higher or lower depending on how they compare with the previous or next candidate. This can lead to inconsistent and unreliable evaluations of candidates' performance, and unfair or inaccurate hiring decisions.
To mitigate this bias, AI can provide consistent and reliable assessments of candidates' skills, competencies, and fit, and ensure that they are evaluated based on the same standards and criteria. AI can also help the human to avoid making hasty or impulsive decisions by providing reminders, feedback, or guidance on how to conduct fair and effective interviews.
- Anchoring bias: The tendency to rely too heavily on the first piece of information or impression that one receives about a person or thing, and adjust subsequent judgments or decisions based on that anchor. For example, a hiring manager may be influenced by the first impression they form of a candidate based on their resume, appearance, or introduction, and use that as a reference point for the rest of the interview or evaluation process. This can lead to confirmation bias, the halo effect, or the horn effect, and prevent the human from considering new or additional information or evidence.
To mitigate this bias, AI can provide multiple and varied sources and methods of data collection and analysis, and present the human with a comprehensive and holistic view of the candidate's profile, performance, and potential. AI can also help the human to update or revise their judgments or decisions based on new or additional information or evidence, and avoid being fixated or attached to their initial anchor.
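A common thread in the mitigations above is evaluating every candidate against the same fixed, job-related criteria rather than against first impressions or other candidates. As a minimal sketch of what that might look like, assuming an invented rubric with made-up criteria names and weights (not a description of any real system):

```python
# Hypothetical sketch: score every candidate against the same fixed,
# job-related rubric, so evaluations are anchored to the role's criteria
# rather than to first impressions or to the previous candidate.
# Criteria names and weights are illustrative assumptions only.

RUBRIC = {
    "technical_skills": 0.4,   # weight of each job-related criterion
    "communication": 0.3,
    "domain_knowledge": 0.3,
}

def score_candidate(ratings: dict) -> float:
    """Weighted score on a 0-5 scale; every candidate is rated on
    exactly the same criteria, with the same weights."""
    missing = set(RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return sum(RUBRIC[c] * ratings[c] for c in RUBRIC)

# Two candidates rated independently on the shared rubric.
alice = score_candidate({"technical_skills": 4, "communication": 3, "domain_knowledge": 5})
bob = score_candidate({"technical_skills": 5, "communication": 4, "domain_knowledge": 2})
print(round(alice, 2), round(bob, 2))
```

Because each candidate's score is derived only from the shared criteria, a strong first impression or a weak preceding interview cannot shift the standard being applied.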
To use AI to help mitigate these cognitive biases in hiring and selection, we need to take practical and actionable steps, such as:
- Define clear and objective criteria and standards for evaluating candidates or applicants, and ensure that they are aligned with the requirements and goals of the role or program. We can use AI to help us design and validate these criteria and standards, and check if they are fair, relevant, and consistent.
- Collect and analyze data from multiple and diverse sources and methods, such as resumes, tests, interviews, portfolios, or references. We can use AI to help us gather and process large and complex data sets, and extract meaningful and actionable insights from them. We can also use AI to help us monitor and audit our data collection and analysis processes, and identify and correct any errors, gaps, or biases in them.
- Use AI tools and systems that are transparent, explainable, and accountable, and that allow us to understand how they work, why they make certain decisions or recommendations, and how we can challenge or change them. We can use AI to help us evaluate and improve the quality, accuracy, and fairness of our AI tools and systems, and ensure that they comply with ethical principles and legal regulations.
- Collaborate and communicate with other human stakeholders, such as managers, colleagues, candidates, applicants, or customers, and involve them in the design, implementation, and evaluation of our AI tools and systems. We can use AI to help us facilitate and enhance our human interactions, and provide us with feedback, guidance, or support. We can also use AI to help us learn from and share our best practices and experiences with other human stakeholders, and foster a culture of trust, openness, and learning.
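One concrete form the "monitor and audit" step above can take is checking hiring outcomes for adverse impact. The sketch below uses the four-fifths rule, under which a group's selection rate below 80% of the highest group's rate is commonly treated as evidence of adverse impact; the group labels and numbers are invented for illustration:

```python
# Hypothetical audit sketch: flag groups whose selection rate falls below
# 80% of the best-performing group's rate (the "four-fifths rule").
# All data here is invented; a real audit would use actual outcomes.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (hired, applied); returns hire rate per group."""
    return {g: hired / applied for g, (hired, applied) in outcomes.items()}

def adverse_impact(outcomes: dict, threshold: float = 0.8) -> dict:
    """Return {group: impact_ratio} for groups below the threshold,
    where impact_ratio is the group's rate divided by the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Invented example: group A is hired at 50%, group B at 30%.
data = {"group_a": (25, 50), "group_b": (15, 50)}
print(adverse_impact(data))  # group_b's ratio of 0.6 is below 0.8
```

Running such a check regularly, on both human and AI-assisted decisions, turns the audit from a one-off review into a standing control.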
AI is not a silver bullet; it is a journey that leaders need to commit to, as it can offer significant benefits but also pose serious challenges. Leaders need to examine the use of AI closely and ensure that it is ethical, effective, and aligned with the vision and mission of the organization. AI is a tool that can augment and enhance human capabilities, but it is not a substitute for human judgment, values, and leadership.
AI can certainly help in improving a company’s commitment to diversity and inclusion (D&I) by reducing bias and discrimination in hiring, promotion, and performance evaluation processes. AI can help to analyze large amounts of data and identify patterns and trends that may reveal hidden biases or barriers for certain groups of employees. AI can also help to design and implement fair and objective assessments and feedback mechanisms that can enhance the potential and performance of all employees, regardless of their background, identity, or preferences.
However, AI is not a magic solution that can automatically eliminate bias and promote D&I. AI systems are only as good as the data and algorithms that they are based on, and they can also inherit or amplify the biases and prejudices of their human creators and users. Therefore, leaders need to ensure that AI is used in a responsible and ethical manner, and that it is constantly monitored and evaluated for its impact and outcomes on D&I. Leaders also need to foster a culture of trust and transparency around the use of AI, and engage with various stakeholders, such as employees, customers, regulators, and civil society, to address any concerns or issues that may arise from the use of AI.
It is important to remember that AI is not a substitute for human leadership, values, and actions. Leaders need to be proactive and intentional in creating and sustaining a diverse and inclusive work environment, where everyone feels respected, valued, and empowered to contribute and grow. AI can support and augment this effort, but it cannot replace it.