How to use assistive AI to keep hiring fair and human

LAURA CHAMBERS • 22 Sep 2025

Q: Is AI making recruitment worse for candidates?
Yes, it can. When black-box tools replace judgement, candidates are left in the dark. But used transparently, with human oversight throughout and alongside structured assessments, AI can improve fairness and speed. At Jobtrain we focus on assistive AI that helps teams write clearer adverts, design stronger assessments and create supporting visuals, with structured forms and audit trails that keep people in control.

Important: This article shares general ideas on AI use in recruitment in a UK context. It is not legal advice or a statement of UK law.

Why getting AI wrong is risky

  • Workday lawsuit sets a precedent
    A US federal judge allowed claims to proceed alleging that Workday’s AI-powered screening could perpetuate bias, and found that Workday may be liable as an 'agent' where screening functions are delegated to it.

  • More claims are emerging
    Sirius XM faces a class action alleging its AI hiring tool downgraded Black applicants, underlining how historic data and opaque scoring can give rise to disparate-impact claims.

  • UK regulator focus
    The ICO’s audit of AI recruitment tools found “considerable areas for improvement” around fairness, transparency, data minimisation, lawful basis and DPIAs. It warns that being merely “better than random” is not enough to demonstrate fair processing and stresses clear explanations of logic, data use and roles across controller/processor boundaries.

What goes wrong for candidates

  • Hidden criteria - candidates see “application rejected” with no explanation. Lack of transparency undermines trust and can mask errors or bias; the ICO’s call for clear explanations of logic and data use is aimed squarely at this.
  • Weak free-text prompts - long personal statements invite copy-paste AI answers that look polished but reveal little about judgement, skill or the person.
  • Biased prompts - loaded phrases like “top-tier university”, “native speaker” or “cultural fit” can skew outcomes. The ICO flagged tools enabling filtering on protected characteristics or inferring attributes (e.g. gender or ethnicity) as high-risk.
  • Access barriers - timed forms, CAPTCHAs and chatbot-only help can exclude people and make it harder to request adjustments. Regulators highlight the risk of digital exclusion if design choices are not scrutinised. 

Free resources to go deeper

  • AI applications guide: When candidates use AI to apply - practical ways to redesign assessments so you get better signal from applicants who use AI tools. Download the AI applications guide.

  • IA vs AI guide: What’s best for recruitment - when to use deterministic automation vs. predictive AI in hiring workflows. Download the IA vs AI guide.

  • Jobtrain and responsible AI in recruitment - we've built our own AI features carefully, with regulation and compliance front of mind. Read more here.


Fixing it (ethically)

Start with structure, not free text

Open personal statements are notoriously difficult to benchmark, time-consuming to assess and easy to game with generative tools like ChatGPT. Our recommendation is to switch to skills-based assessments with explicit scoring, branching logic, ranked responses and anchored scales that surface judgement and role-relevant capability. In the Jobtrain ATS you can also randomise questions and add timers where appropriate to encourage authentic answers.
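As an illustration only, here is a minimal sketch of what a structured question definition might look like as data, assuming a simple hand-rolled schema with anchored scores, optional branching and an optional timer. The field names and helper are hypothetical and are not Jobtrain's own format.

```typescript
// Hypothetical shapes for a structured, skills-based assessment.
// Field names are illustrative assumptions, not a real Jobtrain schema.

type AnchoredOption = {
  score: number;   // explicit score attached to this answer
  anchor: string;  // behavioural description the score is anchored to
};

type Question = {
  id: string;
  skill: string;                      // role-relevant capability being assessed
  prompt: string;
  options: AnchoredOption[];          // anchored scale instead of free text
  branchTo?: Record<number, string>;  // optional branching: chosen score -> next question id
  timeLimitSeconds?: number;          // optional timer to encourage authentic answers
};

// A role-specific bank from which each candidate's questions can be randomised.
const questionBank: Question[] = [
  {
    id: "q1",
    skill: "Stakeholder management",
    prompt: "A hiring manager asks you to skip an agreed screening step to hire faster. What do you do first?",
    options: [
      { score: 0, anchor: "Skip the step as asked" },
      { score: 2, anchor: "Explain the risk and agree a compliant way to save time" },
      { score: 1, anchor: "Escalate immediately without discussing it" },
    ],
    timeLimitSeconds: 180,
  },
];

// Draw n questions at random for one candidate.
function drawQuestions(bank: Question[], n: number): Question[] {
  return [...bank].sort(() => Math.random() - 0.5).slice(0, n);
}
```

Because every answer carries an explicit score and a behavioural anchor, shortlisting can be benchmarked consistently and reviewed later, which is far harder with free-text statements.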

Keep people firmly in the loop

Treat automation as decision support. Use clear scoring to shortlist, then sense-check the results and give candidates accessible routes to ask questions or request adjustments. Accountability for fairness and transparency stays with people, not the tool.

Prefer transparent automation first, then add AI selectively

Lead with intelligent automation (IA) for deterministic tasks like parsing, scheduling and acknowledgements, then layer in AI selectively where it adds value and can be managed and governed by people. Our IA vs AI guide includes a framework for deciding what to automate and where to apply AI.
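To make the distinction concrete, here is a small, hypothetical sketch contrasting a deterministic IA task with an assistive AI step that only produces a suggestion for a person to review. None of the names below refer to real Jobtrain functionality.

```typescript
// Hypothetical sketch: deterministic automation (IA) vs assistive AI.
// Every name here is a placeholder, not a real Jobtrain API.

type Application = { candidateId: string; cvText: string; receivedAt: Date };

// IA: deterministic and auditable - the same input always produces the same output.
function acknowledgementEmail(app: Application): string {
  return `Thank you - we received your application on ${app.receivedAt.toDateString()}.`;
}

// Assistive AI: returns a suggestion and rationale for a recruiter to review.
// It never rejects anyone on its own.
type ShortlistSuggestion = { score: number; rationale: string; decidedBy: "human" };

function suggestShortlistScore(app: Application): ShortlistSuggestion {
  // Stand-in heuristic where a governed model call would sit in practice.
  const score = Math.min(5, Math.round(app.cvText.split(/\s+/).length / 100));
  return {
    score,
    rationale: "Illustrative heuristic only - a recruiter reviews and makes the decision.",
    decidedBy: "human",
  };
}
```

The useful habit is the separation itself: deterministic steps can be tested and audited like any other rule, while anything predictive stays advisory and visibly attributable to a human decision-maker.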

Contact us - learn more about our AI features in Jobtrain

How AI in the Jobtrain ATS supports fair, human-led screening

AI job advert generator

Inclusive, well-structured job adverts can be created in seconds from vacancy information. You stay in control: edit, review and approve before publishing. Prompts are designed to support fairness and reduce biased wording.

AI question recommendations

From a job description, in Jobtrain you can generate role-relevant assessment and shortlisting questions to speed up form design. This helps teams move away from long text boxes and towards consistent, skills-based evaluation.

AI image generation for adverts and social

AI image generation produces on-brand visuals in preset sizes for job adverts and campaigns without you having to leave the ATS. Great for standing out on social media and for image-based assessment prompts where applicable.

Sample ethical workflow with Jobtrain

  1. Design the assessment with structured questions (rating scales, mandatory multiple choice, ranked lists, branching). Randomise from a role-specific bank and add timers if appropriate.

  2. Draft the advert using the AI generator, then edit to reflect real-world expectations and reasonable adjustments.

  3. Create visuals with AI image generation to support reach on social and clarity in the advert. 

  4. Publish and monitor. Use Advanced Insights tagging to review outcomes, then refine question banks and scoring based on evidence (a simple sketch of this kind of outcome review follows this list).

  5. Keep humans in the loop throughout and direct candidates to a point of contact for questions or adjustments.
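To illustrate step 4, here is a minimal, hypothetical sketch of the kind of outcome review a team could run on exported application data: shortlisting rates compared across groups so a person can investigate any large gaps. It is a generic illustration, not the Advanced Insights feature itself.

```typescript
// Hypothetical outcome review: compare shortlisting rates across groups so a
// person can spot and investigate large gaps. Illustrative only.

type Outcome = { group: string; shortlisted: boolean };

function shortlistRatesByGroup(outcomes: Outcome[]): Record<string, number> {
  const tally: Record<string, { total: number; shortlisted: number }> = {};
  for (const o of outcomes) {
    tally[o.group] ??= { total: 0, shortlisted: 0 };
    tally[o.group].total += 1;
    if (o.shortlisted) tally[o.group].shortlisted += 1;
  }
  return Object.fromEntries(
    Object.entries(tally).map(([group, t]) => [group, t.shortlisted / t.total])
  );
}

// Flag any group whose rate falls well below the highest group's rate.
function flagForHumanReview(rates: Record<string, number>, threshold = 0.8): string[] {
  const best = Math.max(...Object.values(rates));
  return Object.entries(rates)
    .filter(([, rate]) => rate < best * threshold)
    .map(([group]) => group);
}
```

The 0.8 threshold loosely mirrors the "four-fifths" rule of thumb sometimes used as a first screen for adverse impact; treat it as a prompt for human review, not a pass/fail test.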

FAQs

Does Jobtrain detect if a candidate used AI?
Rather than policing the tools candidates might use, we design assessments that reveal genuine judgement and skills, reducing the impact of generic AI text.

Will AI replace shortlisting decisions?
No. In Jobtrain, AI is assistive. It helps you create better adverts, questions and visuals, while people remain accountable for decisions.

How does Jobtrain address bias?
Prompts are optimised to support fairness, outputs are always editable and access is role-controlled. Teams should still review the language and results routinely.

Understand how to safely use AI in recruitment
Get in touch