AI in recruitment: Levelling the playing field or entrenching bias?

AI in recruitment aims to reduce hiring biases and streamline candidate selection. But do these tools truly make hiring fairer, or are they reinforcing hidden biases? asks Paul Armstrong

The AI hiring wave has arrived, promising to tidy up recruitment’s inefficiencies with sleek algorithms and predictive analytics. Job descriptions are optimised for clicks, CVs sifted in seconds, and interview scheduling is a thing of the past. Companies market it as progress, claiming these tools can strip bias from the hiring process. But beneath the sheen of efficiency, one question looms: are these systems levelling the playing field or reinforcing age-old biases under a high-tech veneer?

Take Amazon’s famous cautionary tale. Everyone’s favourite cardboard abuser’s now-infamous AI recruitment tool systematically penalised CVs mentioning “women’s” activities. Why? The training data was drawn from years of male-dominated hiring patterns. Rubbish in, rubbish out. Despite these failures, adoption of such systems is surging, with platforms like HireVue promising “bias-free” candidate evaluations via facial analysis and tone detection — tools that have faced accusations of pseudoscience. For instance, critics argue that facial analysis perpetuates race and gender biases because the datasets these tools are trained on consist overwhelmingly of white faces (and hands), making it harder for marginalised candidates to succeed.

Several tools are “actively working” to tackle these biases. Pymetrics, for example, blends neuroscience games with machine learning to evaluate candidates’ cognitive and emotional traits. Crucially, it audits its algorithms to identify and eliminate bias. SeekOut enables companies to tap into diverse talent pools by identifying underrepresented groups. And Textio analyses job descriptions, providing insights to make language more inclusive and appealing to broader audiences. All fine, but are they getting to the crux of the issue? I would argue not.

Even with these advancements, the paradox persists. Recruitment AI optimises for “best-fit” candidates — but who defines “best”? These algorithms often rely on patterns found in historical data, leading organisations to inadvertently replicate the status quo. That is particularly troubling when innovation demands a diverse workforce. Hiring outliers — those who think differently or challenge assumptions — is critical for disrupting conventional wisdom. Yet algorithms trained on historical norms will usually reject such candidates outright.

The ethical dilemmas don’t end there. Behavioural economists have long known that bias isn’t an anomaly — it’s deeply embedded in human decision-making. When these biases feed AI systems, they don’t disappear; they scale, becoming harder to spot and challenge. Tools designed to root out bias risk becoming gatekeepers of inequality, their flaws buried in black box systems no one fully understands.

AI’s potential to drive equality isn’t theoretical — it’s achievable with the right practices. For instance, synthetic data is an emerging (somewhat controversial) method for training AI systems without replicating historical biases. By creating artificial datasets that mimic real-world scenarios while erasing discriminatory patterns, synthetic data could offer a path toward fairer algorithms.
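By way of illustration, here is a minimal Python sketch of the idea, assuming an entirely invented candidate dataset: the hiring label is generated only from job-relevant features, so it is independent of the protected attribute by construction. The column names and the scoring rule are hypothetical, not drawn from any real recruitment system.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n = 5_000

# Hypothetical candidate features; the names are illustrative only.
synthetic = pd.DataFrame({
    "years_experience": rng.integers(0, 20, size=n),
    "skills_score": rng.normal(loc=60, scale=15, size=n).clip(0, 100),
    "gender": rng.choice(["female", "male"], size=n),  # protected attribute
})

# The label depends only on job-relevant features, never on the protected
# attribute -- the discriminatory pattern is "erased" by construction.
synthetic["hired"] = (
    0.4 * synthetic["years_experience"]
    + 0.6 * synthetic["skills_score"] / 10
    + rng.normal(0, 1, size=n)
) > 5

# Sanity check: hire rates should be roughly equal across groups.
print(synthetic.groupby("gender")["hired"].mean())
```

Real synthetic-data pipelines are far more sophisticated than this, but the principle is the same: the generated records carry the structure of real data without its discriminatory history.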

Bias auditing tools are also gaining traction. Fairlearn and Aequitas, for example, provide frameworks for identifying, measuring, and mitigating bias in machine learning models. These tools allow organisations to scrutinise the decisions their AI systems make and adjust their processes accordingly.
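To make that concrete, the sketch below shows what a basic audit with Fairlearn might look like: comparing selection rates across groups and computing a demographic parity gap. The candidate data is a placeholder invented for the example; only the Fairlearn calls (MetricFrame, selection_rate, demographic_parity_difference) are the library’s own.

```python
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    selection_rate,
)

# Placeholder data: y_true are actual outcomes, y_pred are the model's
# hire/reject decisions, sensitive is each candidate's protected attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
sensitive = ["female", "female", "male", "female",
             "male", "male", "male", "female"]

# Break the selection rate down by group to spot disparities.
audit = MetricFrame(
    metrics=selection_rate,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(audit.by_group)

# A single summary number: 0 means parity, larger values mean bigger gaps.
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
print(f"Demographic parity difference: {gap:.2f}")
```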

Transparency is key

Explainable AI (XAI) is an essential development, offering insights into why an AI system made a specific decision. This could help organisations identify flaws and allow candidates to challenge decisions they perceive as unfair. For example, if an algorithm rejects a candidate based on certain keywords, XAI tools could highlight this and prompt recruiters to question whether those terms truly reflect job requirements.
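As a simplified stand-in for such tools, the sketch below uses scikit-learn’s permutation importance to surface which features a hypothetical screening model leans on most, the sort of signal a recruiter could use to question whether a given input truly reflects job requirements. The feature names and the model are invented for illustration; this is one generic explanation technique, not any vendor’s XAI product.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical screening features; in reality this would be the vendor's model.
feature_names = ["years_experience", "keyword_match",
                 "education_level", "postcode_band"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature hurt the model's predictions?
# Large drops mean the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```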

Regulation is catching up

The EU AI Act categorises recruitment algorithms as high-risk systems, subjecting them to stricter transparency and...

Read the rest of this article, along with the other columns, over at City AM.
