Fair hiring in the age of AI: Opportunity, risk and the candidate experience

ALEX LAMONT • 23 Apr 2026

AI is reshaping hiring at speed - but is it making recruitment fairer, or are we sleepwalking into a new set of risks? That was the central question at Jobtrain’s recent How Talent session, which brought together a panel of leading experts to cut through the hype and give HR and talent acquisition professionals a practical, honest view of where things stand.

Here’s what you need to know if you didn’t have time to watch.

The Problem We’re Actually Trying to Solve

Before diving into tools and technology, the panel - featuring Michael Blakley (Equitas/Screenloop), Martyn Redstone (Warden AI), Katie Noble (Omni RMS) and Jamie Betts (Neurosight) - made an important point: panic is not a strategy.

Yes, organisations are being inundated with AI-assisted applications. Yes, candidates are using ChatGPT, Claude and Gemini to complete application forms, psychometric tests and even video interviews. But as Jamie Betts noted, this is largely a structural problem of our own making. Candidates use AI because conventional hiring processes are deeply vulnerable to it - and because, from a candidate’s perspective, everyone else is doing it too.

The right response isn’t to introduce more friction or ask candidates to solemnly promise not to use AI. It’s to step back and ask a more fundamental question: what are you actually trying to measure, and why?

Infographic - The LLM Wrapper Problem

Start With Job Analysis - Every Time

Every robust hiring process should begin with job analysis: a clear, evidence-based understanding of what drives success in a given role. What skills, behaviours and traits predict high performance? Until you can answer that, you cannot select the right assessment tools - AI-powered or otherwise.

This doesn’t have to be a months-long project. Even a structured conversation with your high performers can surface common threads: accountability, collaboration, conscientiousness. Capture those, define them, and build your hiring process around measuring them.

“There is no one-size-fits-all solution. The question we always start with is what are you trying to assess, and why?” — Katie Noble, Omni RMS

Get this right and everything else follows. Get it wrong and no amount of technology will save you.

The Hidden Risk in Most AI Screening Tools

A large proportion of AI screening tools currently on the market — including conversational AI, automated video evaluation and AI-powered chatbots — are what the panel called “wrappers”: they don’t own their AI model at all. They’ve built a front end that sends candidate data to ChatGPT, Claude or Gemini, and the answer comes back.

Why does this matter? Because large language models (LLMs) cannot explain their reasoning. They hallucinate. Their outputs vary depending on the time of day and how busy the model is. And crucially, they will never be compliant with the EU AI Act because they cannot provide the explainability that regulation requires.

If a screening tool talks to an LLM to evaluate your candidates, you cannot defend those decisions to a rejected applicant, a hiring manager, or a regulator. Ask your vendors directly. If they can’t tell you how their tool works under the bonnet, that’s your answer.

The practical implication is clear: LLMs are excellent productivity tools, but they cannot be used for candidate evaluation in a legally defensible way. Treat any vendor who can't explain their model as a red flag.

What Good AI in Hiring Actually Looks Like

The good news is that not all AI is created equal — and AI is not inherently more biased than humans. Warden AI’s 2025 research found that 85% of audited AI hiring tools met internationally recognised fairness thresholds. Well-implemented AI can in some cases be up to 45% fairer for women and ethnic minorities than unaided human decision-making.

The tools that work use explainable, deterministic algorithms — systems where you can trace, audit and reverse-engineer every score. Bespoke psychometric assessments, structured scoring frameworks, competency-based sifting tools: these are not new ideas. Large organisations have been applying rigorous due diligence to automated screening tools for decades.
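To make "traceable, auditable scoring" concrete, here is a minimal sketch of what a deterministic, explainable competency-scoring framework looks like in principle. The competencies and weights below are invented for illustration (echoing the traits mentioned earlier), not any vendor's actual model:

```python
# Illustrative only: a deterministic, weighted competency-scoring sketch.
# The competencies and weights are invented examples, not a real framework.

COMPETENCIES = {
    "accountability": 0.40,
    "collaboration": 0.35,
    "conscientiousness": 0.25,
}

def score_candidate(ratings: dict) -> dict:
    """Weighted sum of structured interview ratings (1-5 scale).

    Returns the total plus a per-competency breakdown, so every score
    can be traced back to the individual rating that produced it.
    """
    breakdown = {c: ratings[c] * w for c, w in COMPETENCIES.items()}
    return {
        "total": round(sum(breakdown.values()), 2),
        "breakdown": breakdown,
    }

result = score_candidate(
    {"accountability": 4, "collaboration": 5, "conscientiousness": 3}
)
print(result["total"])      # identical inputs always produce identical scores
print(result["breakdown"])  # the audit trail: every component is visible
```

The point of the sketch is the contrast with an LLM wrapper: the same inputs always yield the same score, and each component of the result can be shown to a candidate, a hiring manager or a regulator.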

The difference now is that smaller organisations are deploying AI screening without the same level of scrutiny - and without the procurement infrastructure to ask the right questions.

The Candidate Fairness Gap You’re Probably Not Aware Of

There’s another dimension to the AI arms race that deserves more attention: socioeconomic bias in candidate AI use.

Neurosight’s research found that the use of AI to game psychometric tests is not evenly distributed. Premium AI tools — those costing £100–200 per month — significantly outperform free alternatives on tasks like critical reasoning tests. Men are more likely to pay for AI tools than women. Candidates who attended private schools are more likely to have access to premium tools than those who attended state schools.

The result? Hiring processes that are vulnerable to AI use are not just experiencing a crisis of authenticity - they’re experiencing a crisis of fairness, with existing gaps between demographic groups widening over time. This is the employer’s responsibility to address.

What HR Leaders Should Do Now

The panel closed with a clear set of practical actions:

  • Audit your hiring process as a candidate would. Go through every stage and use AI to test where the vulnerabilities are. Where can candidates shortcut, game or fake? Those are the gaps you need to close.
  • Ask harder questions of your vendors. Is the tool an LLM wrapper? Can every score be explained? Can the model be overridden or retrained? Is it tested for bias - and can they prove it?
  • Invest in your people, not just your tools. “Human in the loop” only means something if the human is trained. Under the UK’s Data Use and Access Act, organisations must now offer candidates a human review of AI decisions — and that human must be suitably qualified.
  • Keep reviewing. A hiring process that is valid today may not be valid in six months. Build in regular checkpoints, not just a one-off launch and walk away.
  • Don’t let the risks put you off. As Martyn Redstone put it: “AI is transformational. We just need to use it responsibly, sensibly and ethically.”

This session was hosted by Jobtrain as part of our How Talent series. To find out how Jobtrain supports fair, compliant and efficient hiring, visit jobtrain.co.uk to book a demo or explore our resources.

Download our latest Talent Insights report - it's free ⬇️