AI FOR D&I
Role of Artificial Intelligence in fostering Diversity and Inclusion
By Nina Alag Suri, Founder and CEO X0PA AI, February 2024
What is AI and why does it matter for D&I?
AI, or artificial intelligence, is the ability of machines or software to perform tasks that normally require human intelligence, such as perception, reasoning, learning, decision-making, and natural language processing.
AI is transforming every aspect of our lives, from how we communicate and work to how we shop, travel, learn, and entertain ourselves. It also has the potential to create new opportunities, improve efficiency, enhance productivity, and solve complex problems.
According to a report by McKinsey, AI could add up to $13 trillion to the global economy by 2030 and increase global GDP by about 1.2% annually. The World Economic Forum, for its part, projected that AI would create up to 58 million net new jobs by 2022.
However, AI also poses significant challenges and risks for D&I, with ethical, legal, social, and economic implications. AI can amplify existing biases, stereotypes, and inequalities, or create new ones, affecting how people are perceived, treated, and valued in society and in the workplace.
Therefore, as leaders in the D&I space and leaders of our organisations, we need to be aware of what AI is, how it works, and what impact it has on D&I. We also need to be proactive, responsible, and inclusive in how we design, develop, deploy, and use AI, to ensure that it serves the common good and respects human dignity and diversity.
AI can offer many benefits for hiring and selection processes, such as:
- Increasing efficiency and reducing costs by automating repetitive and administrative tasks, such as screening resumes, scheduling interviews, or conducting assessments. According to a study by Korn Ferry, AI can reduce the time to hire by 71% and the cost per hire by 81% (Korn Ferry, 2018).
- Enhancing accuracy and consistency by eliminating human errors, biases, or subjectivity, and applying objective and standardized criteria, metrics, and algorithms.
- Expanding diversity and inclusion by reaching out to a wider and more diverse pool of candidates, reducing barriers and discrimination, and fostering a culture of meritocracy and fairness.
- A case study by Google shows that AI can help foster a culture of belonging and inclusion, by using natural language understanding and sentiment analysis to monitor and address microaggressions, stereotypes, and biases in workplace communication and feedback. For example, the AI system can flag and suggest alternatives for language that is potentially harmful, offensive, or exclusionary, such as "you guys" or "man up". The system can also provide positive reinforcement for language that is respectful, supportive, and inclusive, such as "thank you" or "I appreciate your perspective". The study also demonstrates that AI can help improve employee engagement, performance, and well-being, and reduce turnover and attrition, by providing personalized and timely feedback, recognition, and coaching, based on the employees' communication style, preferences, and goals (Google, 2023).
- One of the ways that text analytics and AI can help write better job descriptions is by detecting and reducing gendered language, which can discourage applicants from underrepresented groups or create stereotypes and expectations based on gender. For example, some studies have shown that words such as "competitive", "dominant", or "leader" are perceived as more masculine, while words such as "collaborative", "supportive", or "nurturing" are perceived as more feminine (Gaucher et al., 2011; Leary & Sullivan, 2019). Text analytics and AI can help identify and suggest alternatives for gendered words, or balance them with words from the opposite gender category, to create more neutral and inclusive job descriptions. For example, instead of writing "We are looking for a competitive and dominant sales manager", one could write "We are looking for a driven and collaborative sales manager".
- Another way that text analytics and AI can help write better job descriptions is by enhancing readability, clarity, and relevance, which can attract more applicants and reduce confusion and frustration. Other tools can help check and correct grammar, spelling, punctuation, and syntax errors, which can affect the credibility and professionalism of the job description. Furthermore, some tools can help optimize the keywords and phrases in the job description, by analyzing the most common and relevant terms used by job seekers and employers in the same industry or domain, and suggesting ways to match them. This can help increase the visibility and ranking of the job description in search engines and platforms, and ensure that it reflects the actual skills and qualifications required for the job.
- X0PA AI: X0PA AI is an intelligent hiring platform that uses natural language processing and machine learning to analyze and improve job descriptions for diversity, inclusion, and fairness. It also provides predictive analytics and insights on how the job description will impact the quality and quantity of applicants, and how well they will fit the role and the organization (X0PA AI, 2021).
- Improving candidate experience and engagement by providing personalized and timely feedback, recommendations, and support, and creating a positive and transparent impression of the organization. A survey by CareerBuilder reveals that 58% of candidates are more likely to apply to a job if they receive a response from an AI chatbot, and 69% of candidates are more likely to accept a job offer if they receive regular updates from an AI system (CareerBuilder, 2018).
- Enabling data-driven decision making and learning by collecting, analyzing, and visualizing large amounts of data, and generating insights and predictions that can inform and improve hiring and selection strategies and outcomes. A report by Deloitte suggests that AI can help HR professionals make better and faster decisions, and enhance their capabilities and competencies (Deloitte, 2017).
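The gendered-language check described above can be sketched in a few lines. This is a minimal illustration, assuming small hand-picked word lists and made-up neutral substitutions; production tools use far larger lexicons derived from research such as Gaucher et al. (2011).

```python
# Illustrative word lists only -- real lexicons contain hundreds of entries.
MASCULINE_CODED = {"competitive", "dominant", "leader", "aggressive", "decisive"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal", "loyal"}

# Hypothetical neutral substitutions, chosen for this example.
NEUTRAL_ALTERNATIVES = {
    "competitive": "driven",
    "dominant": "results-oriented",
}

def audit_job_description(text: str) -> dict:
    """Flag gender-coded words in a job description and suggest alternatives."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    masculine = [w for w in words if w in MASCULINE_CODED]
    feminine = [w for w in words if w in FEMININE_CODED]
    suggestions = {w: NEUTRAL_ALTERNATIVES[w]
                   for w in masculine if w in NEUTRAL_ALTERNATIVES}
    return {"masculine_coded": masculine,
            "feminine_coded": feminine,
            "suggestions": suggestions}

report = audit_job_description(
    "We are looking for a competitive and dominant sales manager."
)
print(report["masculine_coded"])  # ['competitive', 'dominant']
print(report["suggestions"])      # {'competitive': 'driven', 'dominant': 'results-oriented'}
```

In a real product, the flagged words would feed an inline editor highlight rather than a printed report, but the core word-list-and-substitution idea is the same.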
How do our mental shortcuts affect our hiring decisions, and how can AI help reduce the potential harm of bias to D&I?
- Confirmation bias: The tendency to seek, interpret, and favor information that confirms one's preexisting beliefs or hypotheses, and ignore or discount information that contradicts them. For example, a hiring manager may focus on positive aspects of a candidate that match their expectations and overlook negative aspects that do not. This can lead to hiring decisions based on subjective opinions rather than objective facts. To mitigate this bias, AI can provide data-driven assessments of candidates' skills, competencies, and fit, and highlight any discrepancies or inconsistencies between the AI's evaluation and the human's perception. AI can also prompt the human to consider alternative perspectives or evidence that challenge their assumptions.
- Halo effect: The tendency to generalize a positive impression of a person or thing based on one favourable attribute or aspect, and overlook or minimize any negative or unfavourable ones. For example, a hiring manager may be impressed by a candidate's prestigious education or work experience, and assume that they are also competent, intelligent, and reliable in other areas. This can lead to overestimating the candidate's suitability or potential for the role, and ignoring any weaknesses or gaps.
To mitigate this bias, AI can provide a comprehensive and balanced overview of the candidate's strengths and areas for improvement, and compare them with the requirements and expectations of the role. AI can also use multiple sources and methods of data collection and analysis to reduce the influence of any single factor or indicator.
- Horn effect: The opposite of the halo effect, the tendency to generalize a negative impression of a person or thing based on one unfavourable attribute or aspect, and overlook or minimize any positive or favourable ones. For example, a hiring manager may be put off by a candidate's appearance, accent, or mannerism, and assume that they are also incompetent, unintelligent, or unreliable in other areas. This can lead to underestimating the candidate's suitability or potential for the role, and ignoring any strengths or achievements.
To mitigate this bias, AI can use multiple sources and methods of data collection and analysis to reduce the influence of any single factor or indicator.
- Similarity bias: The tendency to prefer, favor, or relate to people who are similar to oneself in terms of identity, background, or characteristics, such as gender, race, ethnicity, age, disability, sexual orientation, religion, or socio-economic status. For example, a hiring manager may be more inclined to hire a candidate who shares their values, beliefs, hobbies, or preferences, and perceive them as more competent, trustworthy, or likable. This can lead to homogeneity and lack of diversity in the workforce, and discrimination or exclusion of qualified and talented candidates from different groups or backgrounds.
To mitigate this bias, AI can provide objective and standardized assessments of candidates' skills, competencies, and fit, and ensure that they are evaluated based on relevant and job-related criteria. AI can also promote diversity and inclusion by highlighting the benefits and value of having a diverse and representative workforce, and providing suggestions or recommendations on how to attract, engage, and retain candidates from different groups or backgrounds.
- Contrast effect: The tendency to compare and contrast candidates with each other rather than with the established standards or criteria of the role. For example, a hiring manager may be influenced by the order, sequence, or quality of the candidates they interview, and rate them higher or lower depending on how they compare with the previous or next candidate. This can lead to inconsistent and unreliable evaluations of candidates' performance, and unfair or inaccurate hiring decisions.
To mitigate this bias, AI can provide consistent and reliable assessments of candidates' skills, competencies, and fit, and ensure that they are evaluated based on the same standards and criteria. AI can also help the human to avoid making hasty or impulsive decisions by providing reminders, feedback, or guidance on how to conduct fair and effective interviews.
- Anchoring bias: The tendency to rely too heavily on the first piece of information or impression that one receives about a person or thing, and adjust subsequent judgments or decisions based on that anchor. For example, a hiring manager may be influenced by the first impression they form of a candidate based on their resume, appearance, or introduction, and use that as a reference point for the rest of the interview or evaluation process. This can lead to confirmation bias, the halo effect, or the horn effect, and prevent the human from considering new or additional information or evidence.
To mitigate this bias, AI can provide multiple and varied sources and methods of data collection and analysis, and present the human with a comprehensive and holistic view of the candidate's profile, performance, and potential. AI can also help the human to update or revise their judgments or decisions based on new or additional information or evidence, and avoid being fixated or attached to their initial anchor.
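Several of the mitigations above come down to one idea: score every candidate against the same fixed rubric rather than against each other (contrast effect) or against a first impression (anchoring). A minimal sketch, with criterion names and weights that are illustrative assumptions rather than a real rubric:

```python
# Fixed, role-based criteria and weights, identical for every candidate.
# The criteria and weights here are made up for illustration.
CRITERIA_WEIGHTS = {"domain_skills": 0.4, "communication": 0.3, "problem_solving": 0.3}

def score_candidate(ratings: dict) -> float:
    """Weighted score on a fixed 1-5 rubric; refuses incomplete evaluations."""
    missing = set(CRITERIA_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return round(sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS), 2)

# Candidates are measured against the same standard, in any interview order.
alice = score_candidate({"domain_skills": 4, "communication": 5, "problem_solving": 3})
bob = score_candidate({"domain_skills": 3, "communication": 4, "problem_solving": 4})
print(alice, bob)  # 4.0 3.6
```

The point of the `ValueError` is that a candidate cannot be scored on a partial impression: every criterion must be rated before a total exists, which works directly against halo, horn, and anchoring effects.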
To use AI to help remove these cognitive biases in hiring and selection, we need to take practical and actionable steps, such as:
- Define clear and objective criteria and standards for evaluating candidates or applicants, and ensure that they are aligned with the requirements and goals of the role or program. We can use AI to help us design and validate these criteria and standards, and check if they are fair, relevant, and consistent.
- Collect and analyze data from multiple and diverse sources and methods, such as resumes, tests, interviews, portfolios, or references. We can use AI to help us gather and process large and complex data sets, and extract meaningful and actionable insights from them. We can also use AI to help us monitor and audit our data collection and analysis processes, and identify and correct any errors, gaps, or biases in them.
- Use AI tools and systems that are transparent, explainable, and accountable, and that allow us to understand how they work, why they make certain decisions or recommendations, and how we can challenge or change them. We can use AI to help us evaluate and improve the quality, accuracy, and fairness of our AI tools and systems, and ensure that they comply with ethical principles and legal regulations.
- Collaborate and communicate with other human stakeholders, such as managers, colleagues, candidates, applicants, or customers, and involve them in the design, implementation, and evaluation of our AI tools and systems. We can use AI to help us facilitate and enhance our human interactions, and provide us with feedback, guidance, or support. We can also use AI to help us learn from and share our best practices and experiences with other human stakeholders, and foster a culture of trust, openness, and learning.
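As one concrete example of the monitoring-and-auditing step above, selection rates can be compared across applicant groups using the "four-fifths rule" from US EEOC guidance: a group whose selection rate falls below 80% of the highest group's rate is a common red flag for adverse impact. The group labels and counts below are made-up illustrative data.

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applied); returns selection rate per group."""
    return {g: sel / applied for g, (sel, applied) in outcomes.items()}

def adverse_impact_flags(outcomes: dict, threshold: float = 0.8) -> dict:
    """Return groups whose rate falls below `threshold` of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical audit data: group -> (candidates selected, candidates applied).
outcomes = {"group_a": (30, 100), "group_b": (12, 60), "group_c": (28, 90)}
print(adverse_impact_flags(outcomes))  # {'group_b': 0.64}
```

A flag like this is a prompt for investigation, not proof of discrimination; the point is that the check is cheap to run continuously over hiring-funnel data.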
AI governance and auditing: keeping a check
- Before we can embrace AI in talent pipelining as advocates for D&I, we need to ensure that our AI tools and systems are trustworthy, reliable, and beneficial for all. One way to do this is to use the AI Verify framework developed by the Infocomm Media Development Authority (IMDA) of Singapore and endorsed by the World Economic Forum (WEF).
- The AI Verify framework is a voluntary self-assessment scheme that aims to promote the adoption of ethical and responsible AI practices among organizations that develop or deploy AI solutions. The framework consists of four components: the Model AI Governance Framework, the Assessment Guide for Trustworthy AI, the AI Maturity Model, and the AI Readiness Index.
- The Model AI Governance Framework provides a set of guiding principles and best practices for implementing ethical and responsible AI governance within an organization. It covers topics such as human oversight, fairness, transparency, safety, security, and accountability. The framework also provides a set of practical tools and templates, such as the PDPC's AI Ethics & Governance Self-Assessment Checklist, the AI Register, and the FEAT Fairness Self-Assessment Toolkit, to help organizations operationalize the framework.
- The OECD Principles on AI are a set of five principles that aim to foster trust in and promote the responsible stewardship of trustworthy AI: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability. The principles are supported by a set of practical recommendations for implementing them across different sectors and contexts.
- The EU High-Level Expert Group on Artificial Intelligence (AI HLEG) Ethics Guidelines for Trustworthy AI set out seven requirements that AI systems should meet in order to be considered trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. The guidelines also provide an assessment list that can help organizations evaluate and improve their AI systems against these requirements.
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) is a global network of experts and stakeholders that aims to ensure that A/IS are aligned with human values and ethical principles. The initiative has produced a series of standards and publications, such as Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, which provides a comprehensive set of recommendations and best practices for designing and deploying ethical A/IS. The initiative also offers a certification program, the IEEE Certified Ethical AI Practitioner, that validates the knowledge and skills of professionals who work with A/IS.
In a nutshell
AI is not a silver bullet but a journey that leaders need to sign up for, as it can offer significant benefits but also pose serious challenges. Leaders need to do a deep dive on the use of AI and ensure that it is ethical, effective, and aligned with the vision and mission of the organization. AI is a tool that can augment and enhance human capabilities, but it is not a substitute for human judgment, values, and leadership.
AI can certainly help in improving a company’s commitment to diversity and inclusion (D&I) by reducing bias and discrimination in hiring, promotion, and performance evaluation processes. AI can help to analyze large amounts of data and identify patterns and trends that may reveal hidden biases or barriers for certain groups of employees. AI can also help to design and implement fair and objective assessments and feedback mechanisms that can enhance the potential and performance of all employees, regardless of their background, identity, or preferences.
However, AI is not a magic solution that can automatically eliminate bias and promote D&I. AI systems are only as good as the data and algorithms that they are based on, and they can also inherit or amplify the biases and prejudices of their human creators and users. Therefore, leaders need to ensure that AI is used in a responsible and ethical manner and that it is constantly monitored and evaluated for its impact and outcomes on D&I. Leaders also need to foster a culture of trust and transparency around the use of AI, and engage with various stakeholders, such as employees, customers, regulators, and civil society, to address any concerns or issues that may arise from the use of AI.
It is important to remember that AI is not a substitute for human leadership, values, and actions. Leaders need to be proactive and intentional in creating and sustaining a diverse and inclusive work environment, where everyone feels respected, valued, and empowered to contribute and grow. AI can support and augment this effort, but it cannot replace it.