AI is reshaping hiring at speed - but is it making recruitment fairer, or are we sleepwalking into a new set of risks? That was the central question at Jobtrain’s recent How Talent session, which brought together a panel of leading experts to cut through the hype and give HR and talent acquisition professionals a practical, honest view of where things stand.
Here’s what you need to know if you didn’t have time to watch.
Before diving into tools and technology, the panel - featuring Michael Blakley (Equitas/Screenloop), Martyn Redstone (Warden AI), Katie Noble (Omni RMS) and Jamie Betts (Neurosight) - made an important point: panic is not a strategy.
Yes, organisations are being inundated with AI-assisted applications. Yes, candidates are using ChatGPT, Claude and Gemini to complete application forms, psychometric tests and even video interviews. But as Jamie Betts noted, this is largely a structural problem of our own making. Candidates use AI because conventional hiring processes are deeply vulnerable to it - and because, from a candidate’s perspective, everyone else is doing it too.
The right response isn’t to introduce more friction or ask candidates to solemnly promise not to use AI. It’s to step back and ask a more fundamental question: what are you actually trying to measure, and why?
Every robust hiring process should begin with job analysis: a clear, evidence-based understanding of what drives success in a given role. What skills, behaviours and traits predict high performance? Until you can answer that, you cannot select the right assessment tools - AI-powered or otherwise.
This doesn’t have to be a months-long project. Even a structured conversation with your high performers can surface common threads: accountability, collaboration, conscientiousness. Capture those, define them, and build your hiring process around measuring them.
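To make that concrete, here is a minimal sketch in Python - with entirely illustrative competency names and indicators, not a prescribed framework - of how the traits surfaced by job analysis can be captured as defined, documented criteria rather than left as vague labels:

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    """One success factor surfaced by job analysis: a working definition
    plus the observable indicators interviewers will score against."""
    name: str
    definition: str
    indicators: list[str] = field(default_factory=list)

# Illustrative entries only - your own job analysis should supply these.
rubric = [
    Competency(
        name="Accountability",
        definition="Takes ownership of outcomes, including failures.",
        indicators=[
            "Describes a past mistake and what they changed afterwards",
            "Distinguishes their own decisions from the team's",
        ],
    ),
    Competency(
        name="Collaboration",
        definition="Works productively across team boundaries.",
        indicators=[
            "Gives a concrete example of resolving disagreement",
            "Credits specific contributions from colleagues",
        ],
    ),
]
```

Once the criteria are written down in this form, every later decision - which assessments to buy, what interviewers score, what candidates are told - can be traced back to them.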
“There is no one-size-fits-all solution. The question we always start with is what are you trying to assess, and why?” — Katie Noble, Omni RMS
Get this right and everything else follows. Get it wrong and no amount of technology will save you.
A large proportion of AI screening tools currently on the market — including conversational AI, automated video evaluation and AI-powered chatbots — are what the panel called “wrappers”: they don’t own their AI model at all. They’ve built a front end that sends candidate data to ChatGPT, Claude or Gemini, and the answer comes back.
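To see how thin that front end can be, here is a hypothetical wrapper reduced to its essentials. The sketch uses the OpenAI Python client purely for illustration; the function, prompt and scoring scale are invented, but the shape is representative of the pattern the panel described:

```python
# A hypothetical "wrapper" screening tool, reduced to its essentials.
from openai import OpenAI

client = OpenAI()  # the vendor's polished UI sits in front of this call

def score_candidate(cv_text: str, job_description: str) -> str:
    """Forwards candidate data to a general-purpose LLM and returns
    whatever comes back: no model the vendor owns, no fixed scoring
    logic, no guarantee the same CV gets the same score twice."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a recruitment screener."},
            {"role": "user", "content": (
                "Rate this CV against the job description from 1 to 10.\n\n"
                f"Job: {job_description}\n\nCV: {cv_text}"
            )},
        ],
    )
    return response.choices[0].message.content
```

The entire “assessment” lives in a prompt: there is nothing to version, audit or reverse-engineer.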
Why does this matter? Because large language models (LLMs) cannot explain their reasoning. They hallucinate. Their outputs are non-deterministic: the same candidate data can produce a different answer from one run to the next. And crucially, the panel argued, they will never be compliant with the EU AI Act, because they cannot provide the explainability the regulation requires.
If a screening tool talks to an LLM to evaluate your candidates, you cannot defend those decisions to a rejected applicant, a hiring manager, or a regulator. Ask your vendors directly. If they can’t tell you how their tool works under the bonnet, that’s your answer.
The practical implication is clear: LLMs are excellent productivity tools, but they cannot be used for candidate evaluation in a legally defensible way. A vendor who can't explain their own model is a red flag.
The good news is that not all AI is created equal - and AI is not inherently more biased than humans. Warden AI's 2025 research found that 85% of audited AI hiring tools met internationally recognised fairness thresholds, and well-implemented AI can be up to 45% fairer for women and ethnic minorities than unaided human decision-making.
The tools that work use explainable, deterministic algorithms — systems where you can trace, audit and reverse-engineer every score. Bespoke psychometric assessments, structured scoring frameworks, competency-based sifting tools: these are not new ideas. Large organisations have been applying rigorous due diligence to automated screening tools for decades.
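By contrast, here is a minimal sketch of what “explainable and deterministic” means in practice - a hypothetical competency-based sift with invented weights, where the same ratings always produce the same score and every point is traceable to a published rule:

```python
# Hypothetical weights - in practice these come from your job analysis.
WEIGHTS = {"accountability": 0.4, "collaboration": 0.35, "conscientiousness": 0.25}

def sift_score(ratings: dict[str, int]) -> tuple[float, list[str]]:
    """Combine structured ratings (0-5 per competency) into a weighted
    score, returning an audit trail alongside the number."""
    total, audit = 0.0, []
    for competency, weight in WEIGHTS.items():
        contribution = ratings[competency] * weight
        total += contribution
        audit.append(
            f"{competency}: rating {ratings[competency]} x weight {weight}"
            f" = {contribution:.2f}"
        )
    return total, audit

score, trail = sift_score(
    {"accountability": 4, "collaboration": 3, "conscientiousness": 5}
)
# score == 3.9, and trail shows exactly where each point came from -
# a decision you can defend to a candidate, a manager or a regulator.
```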
The difference now is that smaller organisations are deploying AI screening without the same level of scrutiny - and without the procurement infrastructure to ask the right questions.
There’s another dimension to the AI arms race that deserves more attention: socioeconomic bias in candidate AI use.
Neurosight’s research found that the use of AI to game psychometric tests is not evenly distributed. Premium AI tools — those costing £100–200 per month — significantly outperform free alternatives on tasks like critical reasoning tests. Men are more likely to pay for AI tools than women. Candidates who attended private schools are more likely to have access to premium tools than those who attended state schools.
The result? Hiring processes that are vulnerable to AI use are not just experiencing a crisis of authenticity - they’re experiencing a crisis of fairness, with existing gaps between demographic groups widening over time. This is the employer’s responsibility to address.
The panel closed with a clear set of practical actions: