AI is now embedded across UK recruitment: drafting adverts, parsing CVs, scheduling interviews, even scoring candidates. But not every AI feature is built the same, and the gap between safe AI and risky AI is becoming a material legal and reputational question for HR leaders.
Our new guide, How to responsibly harness AI in Recruitment, sets out the framework. Here are six things every Chief People Officer, HR Director and Head of Resourcing should know before their next vendor conversation.
Some AI tools score candidates by reading free-text responses - CVs, personal statements, open-ended answers - using a large language model. The scoring logic is generated dynamically each time it runs. It cannot be reproduced, audited or explained. The same candidate submitting the same application twice can receive a different score - and nobody can fully explain why.
This is "black box" AI. It fails the UK Government's safety and robustness principle by design, and it's where most of the legal exposure sits.
Article 22 of the UK GDPR gives candidates the right not to be subject to decisions based solely on automated processing with significant effects - and recruitment outcomes qualify. If an AI scores and ranks candidates in a way that materially affects whether they progress, and that scoring can't be meaningfully explained or overridden by a human with genuine understanding, the organisation is exposed.
You don't need to wait for the EU AI Act to feel this. The obligations are live in the UK right now.
University of Washington researchers found AI models preferred CVs with white-associated names in 85% of cases and disadvantaged Black male candidates in up to 100% of cases. Bias in LLM scoring is structural, not incidental.
The legal record is catching up. In the US, iTutorGroup settled for $365,000 after its AI rejected women over 55 and men over 60. In Mobley v. Workday, a federal court ruled AI vendors can be held liable as agents of employers. The cases are American, but the principles apply under the UK Equality Act 2010 and UK GDPR - and indirect discrimination requires no intent.
65% of job applicants use AI tools during the application process (Career Group Companies, 2025). For teams still relying on open-ended personal statement boxes, this produces a wave of polished, similarly worded responses that reveal very little about the candidate behind them.
Detecting AI-written responses doesn't work - it's unreliable and discriminates against non-native English speakers. The answer is to redesign the assessment: structured questions with forced choices, time limits and randomised question banks are far harder to game and far easier to score objectively.
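By way of contrast, here is a hedged sketch of what that structured alternative can look like. The question bank, options and marking weights below are invented for illustration; the point is that scoring is a fixed, inspectable rubric rather than a model's free-form judgement:

```python
# Hedged sketch of a structured assessment: questions drawn at random
# from a bank, scored against a fixed rubric. All content is invented
# for illustration.
import random

QUESTION_BANK = [
    {
        "id": "q1",
        "text": "A stakeholder escalates a missed deadline. What do you do first?",
        "options": {"A": "Acknowledge it and investigate the cause",
                    "B": "Wait until you have the full picture",
                    "C": "Escalate to your own manager"},
        "marks": {"A": 2, "B": 1, "C": 0},  # same rubric for every candidate
    },
    {
        "id": "q2",
        "text": "You spot an error in a report already sent to a client.",
        "options": {"A": "Correct it and notify the client immediately",
                    "B": "Correct it quietly in the next version",
                    "C": "Flag it only if the client notices"},
        "marks": {"A": 2, "B": 1, "C": 0},
    },
]

def build_assessment(bank, n=2, seed=None):
    """Randomised selection makes pre-written AI answers harder to reuse."""
    return random.Random(seed).sample(bank, min(n, len(bank)))

def score_assessment(questions, answers):
    """Deterministic and auditable: the same answers always produce the
    same score, and every mark traces to a published rubric entry."""
    return sum(q["marks"].get(answers.get(q["id"]), 0) for q in questions)

qs = build_assessment(QUESTION_BANK)
print(score_assessment(qs, {"q1": "A", "q2": "B"}))  # always 3 for these answers
```

Time limits and question randomisation sit on top of this; the key property is that two identical answer sets can never receive different scores.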
The UK has opted for principles over EU-style legislation. The five cross-sector principles map directly to recruitment:

1. Safety, security and robustness
2. Appropriate transparency and explainability
3. Fairness
4. Accountability and governance
5. Contestability and redress
Any AI feature in your recruitment process should be measurable against all five. Black box LLM scoring fails on the first two by design.
Before deploying any AI scoring tool, ask:

- Can you show us, in writing, exactly how a candidate's score is produced?
- Will the same application receive the same score every time?
- How has the tool been tested for bias, and can we see the results?
- Can a human with genuine understanding review and override the output?
- Who is accountable if the tool produces a discriminatory outcome?
If a vendor can't answer clearly and in writing, that's your answer. Both employer and vendor can be held responsible for discriminatory outcomes - vendor assurances alone aren't enough.
AI in recruitment isn't a technology question - it's a leadership question. Used responsibly, AI can cut time-to-hire, improve candidate experience and produce fairer, more defensible decisions than the manual processes it replaces. Used carelessly, it creates legal exposure that vendor PR cannot insulate you from.
Jobtrain's approach is built on a single principle: AI should make recruitment faster, fairer and more transparent - never replace human judgement. We don't use AI to read candidate responses and produce opaque scores. We use it to build better questions, assessed by transparent criteria explainable to every candidate.
Download the full guide, How to responsibly harness AI in Recruitment, or book a demo to see our responsible AI features in action.