The AI Hiring Dilemma: Is Your Algorithm Killing Diversity?

Written by: Henry Yeomans, Executive Vice President, Stanton House

In recent years, Artificial Intelligence (AI) has transformed recruitment, offering quicker and more efficient processes. By automating tasks such as CV screening and video assessments, AI-driven platforms can handle an immense volume of applications.

However, beneath the surface lies a growing concern that these technologies may unintentionally hinder diversity, equity, and inclusion (DEI) efforts. Could your algorithm be undoing the progress you’ve worked so hard to achieve?

The Hidden Bias in AI Hiring

AI systems rely heavily on data to make decisions, and this is where the issue begins. If the data used to train these algorithms reflects historical biases - such as favouring specific educational backgrounds, gender, or ethnic groups - AI can inadvertently replicate and amplify these biases. Research from the National Bureau of Economic Research (NBER) shows that AI models trained on biased data can perpetuate existing inequalities, filtering out diverse candidates without the recruiter even knowing.
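To make this mechanism concrete, here is a minimal, deliberately naive sketch of how a screener trained on skewed historical data can end up penalising a group. The CV texts and scoring rule are entirely illustrative, not a real system:

```python
from collections import Counter

# Toy historical data: past hires are skewed towards one group.
# All CV texts below are illustrative, not real data.
hired_cvs = [
    "software engineer rugby club captain",
    "software engineer chess society",
    "software engineer rugby club",
]
rejected_cvs = [
    "software engineer women's soccer club",
    "software engineer women's college coding society",
]

hired_words = Counter(w for cv in hired_cvs for w in cv.split())
rejected_words = Counter(w for cv in rejected_cvs for w in cv.split())

def score(cv: str) -> int:
    """Naive screener: +1 for each word seen among past hires,
    -1 for each word seen only among past rejections."""
    total = 0
    for w in cv.split():
        if w in hired_words:
            total += 1
        elif w in rejected_words:
            total -= 1
    return total

# An otherwise identical candidate is scored down purely because
# "women's" never appeared in the (skewed) set of past hires.
print(score("software engineer rugby club"))
print(score("software engineer women's soccer club"))
```

The model never sees gender as a feature; it simply learns that certain words did not appear among past hires, which is exactly the failure mode reported in the Amazon case below.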

A widely discussed case is Amazon’s AI recruitment tool, designed to automate CV screening. The tool, however, was found to penalise CVs containing words like “women’s,” such as "women’s soccer club" or "women’s college." The issue arose because the AI had been trained on a dataset of predominantly male CVs, mirroring Amazon's historical hires. Ultimately, Amazon abandoned the tool, acknowledging that it was reinforcing biases.

ProPublica’s investigation into AI-based criminal justice tools like COMPAS highlighted similar concerns. These tools disproportionately flagged minority defendants as high-risk, even when profiles were identical to those of white defendants. While this example is outside recruitment, it demonstrates how biased data can skew AI's decision-making, with serious implications for equality.

Latest Research on AI Bias in Hiring

Recent studies have illuminated the persistence of AI bias in recruitment. A report from MIT Sloan Management Review found that AI models tend to favour candidates who resemble past successful hires, even in organisations striving for diversity. These systems often replicate hiring decisions based on historical preferences, which can hinder efforts to bring diverse talent into the fold.

In a separate study published by Harvard Business Review, AI tools used in video interviews were found to introduce biases against candidates based on names, postal codes, and even speech patterns. AI systems that analyse candidates’ voice inflection or facial expressions during interviews could disadvantage neurodiverse individuals or those with disabilities.

We discussed why some neurodivergent candidates might struggle with eye contact, for example, in one of our previous Outspoken articles - ‘Beyond the gaze: Why your best candidate might not look you in the eye’.

While AI can certainly expedite recruitment processes, it’s clear that without proper oversight, it can inadvertently perpetuate systemic biases.

Stanton House's Approach to AI in Recruitment

At Stanton House, we integrate technology and AI into our recruitment processes to improve efficiency, but with great care. Every tool we implement is rigorously assessed to ensure it enhances rather than compromises the experience of both clients and candidates. Our guiding principle is that AI should never come at the expense of personal engagement and service quality.

We believe that AI will never fully replace the need for human recruiters. While AI can process data rapidly, it lacks the nuance to understand individual circumstances or to evaluate the diverse, multifaceted needs of today’s workforce. Our recruiters ensure that AI-driven insights are balanced with human judgement, particularly when building diverse shortlists. Human oversight remains essential in maintaining fairness, ensuring cultural fit, and safeguarding DEI goals.

Real-World Examples and Lessons Learned

Several organisations have taken proactive steps to mitigate AI-related biases in recruitment. For example, Unilever uses AI to screen entry-level candidates but has developed mechanisms to remove bias. By anonymising applications (excluding names, gender, and educational background) and auditing AI decisions regularly, Unilever has boosted diversity within its candidate pool.
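The anonymisation step described above can be sketched in a few lines. The field names here are hypothetical, not taken from Unilever's or any specific applicant-tracking system:

```python
import copy

# Identity-revealing fields to strip before automated screening.
# These field names are illustrative assumptions, not a real ATS schema.
REDACTED_FIELDS = {"name", "gender", "date_of_birth", "university", "photo_url"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application with identity-revealing
    fields removed, so the screener sees only job-relevant data."""
    clean = copy.deepcopy(application)
    for field in REDACTED_FIELDS:
        clean.pop(field, None)
    return clean

candidate = {
    "name": "A. Example",
    "gender": "F",
    "university": "Example University",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(anonymise(candidate))
# {'skills': ['Python', 'SQL'], 'years_experience': 6}
```

Working on a redacted copy, rather than mutating the original record, also preserves the full application for the human review stage that follows screening.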

Similarly, LinkedIn has developed fairness algorithms to monitor and correct any disparities introduced by its AI-driven tools. Such continuous monitoring is essential for ensuring that AI systems do not unintentionally disadvantage certain candidate groups.

How to Ensure AI Doesn’t Undermine DEI Goals

For companies eager to harness AI while upholding their diversity commitments, there are several critical steps to follow:

  • Audit Your Data: It is essential to scrutinise the data being used to train AI tools. Does it fairly represent the diversity you aim to achieve? If the dataset is skewed, the outcomes will be too.
  • Use Diverse Datasets: Train AI models using inclusive datasets that reflect a variety of experiences and demographics. Partner with organisations that specialise in creating diverse datasets to ensure your AI assessments are fair and accurate.
  • Conduct Regular Bias Audits: Like LinkedIn and Unilever, organisations should perform regular audits of their AI systems. External evaluations of the AI’s decision-making processes can help identify and rectify any biased patterns.
  • Human Oversight: AI should complement - not replace - human decision-making. At Stanton House, human recruiters ensure the AI’s insights are balanced by human judgement, which is crucial for recognising diverse talents and maintaining fairness throughout the process.
  • Focus on Skills-Based Hiring: Shifting towards skills-based assessments rather than traditional credential-based hiring can reduce bias. Focusing on the competencies needed for the job, rather than on education or past employers, can level the playing field for underrepresented candidates.
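The bias-audit step above is often operationalised with the "four-fifths" rule of thumb: compare selection rates between groups and flag the process when the ratio falls below 0.8. A minimal sketch, with illustrative numbers only:

```python
def selection_rate(outcomes: list) -> float:
    """Fraction of candidates advanced to the next stage."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one.
    Under the common 'four-fifths' rule of thumb, a ratio
    below 0.8 flags the process for closer review."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1]

# 1 = shortlisted, 0 = filtered out (illustrative numbers only)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% selected
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% selected

ratio = adverse_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5, well below 0.8, so this screen would be flagged
```

The 0.8 threshold is a screening heuristic rather than a legal test; a flagged ratio should trigger the kind of human review and external evaluation described above, not an automatic conclusion.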

Conclusion

AI holds enormous potential to revolutionise hiring, making it more efficient and data-driven. However, without vigilant oversight, AI can reinforce existing biases, particularly those related to diversity and inclusion. As companies adopt more AI tools, it is crucial to remain proactive in ensuring these technologies align with DEI goals.

At Stanton House, we are committed to balancing the power of AI with human insight to ensure a fair, personalised, and inclusive recruitment process. By taking a balanced, thoughtful approach, organisations can leverage AI while upholding their commitment to diversity, equity, and inclusion.

About Outspoken: Stanton House is delighted to introduce Outspoken, our monthly newsletter where we raise the volume on crucial discussions shaping the landscape of talent acquisition and career advancement. With a firm commitment to diversity and inclusion, we delve into the challenges faced by both employers and candidates, providing practical solutions and thought-provoking commentary. Whether you're a forward-thinking organisation seeking top talent or an ambitious professional navigating your career path, Outspoken is your ally in fostering inclusive excellence.
