If you're a hiring manager or involved in managing recruitment, it probably feels like every personal statement now sounds the same - and you're not imagining it. AI has entered the application process and it's here to stay. A recent survey indicates roughly two thirds of candidates use AI at some point when applying. That isn't cheating; it's the new baseline for digital literacy. The real risk isn't AI-written CVs or AI-completed applications - it's clinging to outdated assessment methods that were fragile long before AI and ChatGPT arrived.
The winning move is simpler and more strategic: change what you measure so polished copy stops being the problem.
Don't police AI-written CVs. Redesign your application steps to capture judgement, evidence and job-related capabilities.
We've written a detailed guide on how best to manage the use of AI in applications - download your free copy below:
The problem isn't AI, it's weak evidence
Long text boxes invite generic prose and make comparisons between applications murky. Even before AI, sifting subjective paragraphs was slow, inconsistent and unfair. Now it's noisier still. What hiring teams need is clearer evidence: the decisions candidates make, the trade-offs they choose and proof they can apply skills to realistic situations.
What to change first
1. Remove the blank text box
Swap open-ended prose and essays for targeted prompts that map to role outcomes. Ask about decisions, constraints and results, not adjectives and ambition. Keep inputs short, specific and scorable.
2. Use formats that show thinking
Instead of “Tell us about yourself”, try:
- A scenario with 3–5 concrete options where the candidate must pick and justify one
- A prioritisation task that forces ordering under time pressure
- A brief work sample aligned to the job's day-to-day, capped at a sensible time limit
3. Agree scoring before you go live
Define what separates a strong response from an acceptable one and a weak one. Agree worked examples with hiring managers so everyone assesses against the same criteria.
4. Keep some variation
Serve a rotating set of prompts so answers can't be templated or shared. Rotation also reduces unintentional coaching via forums and social posts.
5. Measure the outcome, not the essay
Score on clarity of reasoning, relevance to the scenario and likely impact. This shifts attention from the quality of writing to the quality of judgement.
Our Jobtrain applicant tracking system makes this straightforward. It lets you configure targeted prompts, add time controls where appropriate, rotate question sets from a list you've pre-approved and apply clear scoring so your best candidates surface quickly. It also offers AI assistance to suggest prompt ideas from your job description, which speeds up build time without diluting rigour.
What this looks like in practice
- Decision scenarios: “A critical stakeholder requests a last-minute change that will delay delivery by a week. Choose the best response and explain your reasoning in 120 words or fewer.”
- Prioritisation: “Rank these five tasks for your first week in the role and tell us why you put your top two first.”
- Small work sample: “Here's a short brief. Draft three bullet points you would share with the team to move this forward.”
Each of these is short, role-relevant and testable. They reveal how someone thinks, not how well they prompt an AI.
Download our guide for more best practice ideas and information about candidates using AI to apply.
Why this beats detection
- More predictive: Decisions and trade-offs mirror real work, so your scores mean something in the role.
- Consistent and fair: Pre-defined rubrics reduce drift between reviewers and create an audit trail.
- Faster to shortlist: Scoring happens as applications land, so talent teams can focus conversations on the strongest matches first.
- Better candidate experience: Short, purposeful tasks feel fair and reduce drop-off compared with long essays.
Replace long personal statements with short, scorable prompts that test judgement and practical skill.
Implementation checklist for talent acquisition teams
- Identify the 5-7 skills or behaviours that predict success in the role family
- Write one decision scenario, one prioritisation task and one micro-task sample for each role type
- Define scoring anchors for strong, acceptable and weak answers
- Add sensible time limits for judgement-based prompts
- Rotate prompts from the bank and review pass rates after two hiring cycles
- Track conversion, time to shortlist and quality at interview to refine
Our ATS guides you to suitable prompt types for each stage, validates set-up to reduce abandonment and automates ranking so your team moves quickly without sacrificing rigour.
Sensible AI, used where it helps
We're pragmatic about AI. We use it to accelerate admin, but it should not replace judgement. Within our platform, you can generate suggested prompts from your job description and create on-brand visuals for image-based tasks. Recruiters save time, candidates face fairer tests and hiring managers get cleaner signals.
Get the full guide
Want the detailed playbook with examples, templates and scoring rubrics? Download our practical guide to handling AI-assisted applications and put this into your next campaign.
The bottom line
You don't need to outsmart AI-written CVs. Change the questions, tighten scoring and move faster on real capability. Our ATS gives you the levers to do it today.