It’s time to decide: Should TA teams fight, freeze, or befriend AI in recruitment?

The AI-enabled candidate is here to stay. So what happens next?

Last month, we launched our new report –– the State of the AI-enabled candidate in 2024-2025. And the facts were pretty clear. Use of AI for both Early Careers candidates and seasoned professionals is prolific and here to stay. 

88% of students and recent graduates are now using AI tools regularly and 86% describe themselves as proficient. 

Among professionals, 61% now use AI regularly, and 68% describe themselves as proficient. 

In both groups, almost two-thirds have or plan to use AI in the selection and assessment process. 

The symptoms of the AI-enabled candidate are well recognised –– a huge influx in application volumes and a rise in traditional assessment pass rates, followed by a drop in candidate quality at later stages of the selection process.

For TA leaders, this creates a pressing dilemma… do we fight AI, ignore it, or embrace it?

 

Option 1: Fight

If your team sees AI in recruitment as “cheating,” then a Deter and Detect strategy might seem like a good approach. This means flagging and/or rejecting candidates who use AI in the process. 

But there’s a big catch: detection tools aren’t reliable. Current detection models not only struggle to keep up with AI advancements; around 2 in 10 detections are also false positives –– a risk that could mean falsely accusing up to 20% of candidates of cheating. Even OpenAI retired its detection model earlier this year because it couldn’t ensure fair results.

If the creators of ChatGPT say they can’t guarantee detecting its use without disadvantaging groups like neurodiverse candidates or non-native English speakers, can anyone truly detect AI usage without unintended consequences? Consequences that could result in an inequitable process at best and a lawsuit at worst. 



Option 2: Freeze

Despite application volumes soaring from hundreds to tens of thousands in a year, some TA teams are still choosing to stick with the status quo — keeping processes unchanged and hoping they hold up. 

This “wait-and-see” approach might feel easier, especially since AI does level the playing field for some groups. 68% of Black Early Careers candidates, for instance, believe they should be able to use AI in the recruitment process, compared to 59% of all candidates. And 65% of Black professionals and 64% of neurodiverse candidates have used or plan to use AI in the recruitment process, vs the average baseline of 54%. 

But freezing has its own risks. Without clear guidance on “good” vs “poor” AI usage, underrepresented groups (who use AI more often) might face higher rejection rates. On top of that, as AI adoption climbs, application volumes continue to swell. Unmanaged, this could lead to recruiter burnout, longer screening times, and an expensive increase in rerun interviews or assessment centres — all at a time when budgets are tight.

AI could even widen a new divide: the latest models, like OpenAI o1, are behind a paywall. With 60% of Early Career candidates and 58% of professionals unwilling to pay for premium access, wealthier candidates may gain an unfair advantage if TA teams opt to do nothing. 


Option 3: Friend

The third option? Embrace AI by redesigning the selection process with AI-enabled candidates in mind. Rather than relying on detection tech, leading TA teams are switching out text-based sifting methods for non-text-based, more interactive ones –– methods that naturally sidestep AI’s strengths while still helping TA teams capture a fuller, more authentic picture of candidate potential. 

What does a robust sifting process look like in the era of the AI-enabled candidate? 

It employs a sifting method that… 

  1. Avoids capturing a sea of sameness by replacing text-based applications or questions with an interactive, task-based psychometric assessment that evaluates candidates' core human strengths. 
  2. Captures nuanced candidate behaviours by recording their responses to dynamic activities, rather than evaluating them on binary right-or-wrong answers. 
  3. Offers varied formats that prevent candidates from giving predictable responses –– requiring real-time problem-solving that helps TA teams understand more about how a candidate would respond in a real work scenario without access to ChatGPT. 

The Friend approach helps TA teams differentiate candidates fairly while ensuring an equitable process that sidesteps the pitfalls of detection methods (or doing nothing). This approach gives TA teams an opportunity to support underrepresented groups, maintain a positive candidate experience, and avoid being seen as technophobic or overly rigid.

So, what’s the play?

In the age of AI, the choice is clear: to thrive, TA teams need to rethink their selection processes in a way that embraces and guides AI usage. 

This starts with non-text-based sifting methods, allowing your team to assess genuine candidate potential without getting overwhelmed by AI-driven responses. And it ends with guiding candidates' use of AI right across the recruitment process –– helping them understand what good looks like when it comes to preparing for interviews, and completing homework tasks too. 

Want to dive deeper into what the AI-enabled candidate era means for TA teams?

Download our latest report — featuring insights from 78,679 data points captured from Early Careers candidates and professionals alike: The state of the AI-enabled candidate.


3 nuggets to take you into next week 🚀

📰 PwC released their Workforce Radar 2024 Report, offering practical steps to build a reputation that pulls in top talent.

📈 As TA teams start to plan for 2025, Gartner shares 5 HR trends for next year –– including how 55% of HR leaders report their current technologies don’t meet evolving business needs, and 51% cannot measure the ROI of their technology investments.

🤖 The FT dives into the debate on whether using AI in the recruitment process is cheating: “You can warn people not to use AI, but they’ll use it anyway. You need to be prepared.”

How about CHANGING the recruitment process to reduce the impact of ChatGPT etc? Remove CV's and application forms. Do something DIFFERENT. 😉

Chris Webb

Career Development Professional (RCDP) / Careers Writer / Podcaster / AI x Careers Trainer, Presenter and Consultant

I'm interested in the survey data regarding student/graduate self-reported use/proficiency with AI, as it's quite a bit higher than results we've seen from focus groups I've been undertaking with students at the University where I work - will dig into the methodology in the report but keen to know whether there may have been a self-selection element at play here, in terms of the profile of respondents? (e.g. Those already confident with or interested in AI possibly more likely to respond to a survey gathering opinions on this topic)
