No, AI interviewing isn't the future

ALEX LAMONT • 06 May 2024

Editor's Note: A couple of weeks ago, Alex came to a content planning session determined to write an opinion piece on artificial intelligence and interviewing.

After a lengthy discussion, we gave in. Whether that was a good idea or not is for you to decide, but here's what he had to say about gpt-vetting and AI interviewing 👇

Hey, Hiring Manager! Do you hate meeting new people? Tired of interviewing potential future colleagues? Sick of helping impressive but nervous candidates show the best side of themselves? Well, have no fear, gpt-vetting and AI interviewing are here to shine a neon beam into the sky spelling THE FUTURE in glitzy letters.


What is gpt-vetting? 

Gpt-vetting is when a configurable AI, rather than a human interviewer, conducts the interview. 

On April 12th of this year, Ali Ansari – founder of Micro1AI – tweeted: 

“Excited to introduce the world’s first AI interviewer, gpt-vetting.” 

Accompanying the tweet was an eerie video of an animated avatar called Alex “interviewing” Ali by asking a set of competency questions. 

“Hey! I’m your AI interviewer. How are you?” it says in the kind of monotone voice anyone who’s spent an hour using a screen reader will recognise. 

“I’m good. How are you?” Ali responds patiently. 

“GREAT LET’S JUMP INTO IT,” Alex responds, leaping into its opening question. Alex is able to scan the information a candidate provides – like your LinkedIn profile – and ask questions like, “Can you tell me a bit about your experience at WBD at Berkeley?”

What Micro1AI promises is a way to streamline the interview process – getting rid of the human first touch and replacing it with code. It claims you can interview up to 100 times more candidates in a shorter period, apparently providing candidates with a “more enjoyable, gamified, and less biased interview experience.” 

During its beta phase, the tool completed 13,000 AI-powered interviews. 

Their website shows a variety of testimonials from Managing Directors, CEOs, and Founders praising the tech. One CEO – Laith Masarwehs – claims gpt-vetting has allowed his company to “cut down processes by 50-60%... we attract the top 1% of talent in scale, without having to do all the manual work.” 

Candidates, however, seem to have a different opinion. 

“All kinds of hell-no. Never will I ever accept an AI interview. Well, ok. I might, just to see the extent that I can mess with it,” User myrobotbrain says, replying to Ali’s tweet. 

“This absolutely sucks. I would encourage any Dev who went to interview for a company and found they were using LLM screening software to boycott them, walk away and be clear to the company the reason why. This is not the future we want for the software industry,” User Jaypop states. 

But I think it’s @shellscape's reply that sums up my personal feelings about gpt-vetting and other tech like it: 

“Why are you actively trying to make interviewing worse?” 

Leaving the interview to the robots won’t solve your problems 

Interviews aren’t just for the interviewer – something AI advocates seem to fundamentally misunderstand – they’re for the interviewee as well. This is something we at Jobtrain constantly talk about – whether it’s in our Talent Intelligence reports, our webinars, or our ultimate guides to onboarding and the candidate experience. The job market hasn’t pivoted back to being in favour of employers yet. Jobseekers have all the power and 81% of them consider company culture important when choosing where to look. 

You’re not just assessing them, they’re assessing you too. An interview is an opportunity for a candidate to ask questions about the organisation, and a bad interview will turn them off. 

If I psyched myself up for an interview just to be met with a bit of machine learning, I’d bin the process immediately. I would be fascinated to see what the interview drop-out rates are like for organisations using this sort of technology. 

Candidates are looking for an organisation that fits with their values. The initial interview – whether it’s an assessment interview or a full-blown question and answer session – is one of the key ways a new hire is exposed to who you are, what working with you will feel like, and what their working future looks like. 

What does a cartoon robot say? 

Alex’s cold eyes tell me “I don’t care about you. I don’t think your time is valuable.” 

Not-so-critical video intelligence 

Micro1AI isn’t the only company trying to erase people from the interviewing process. Companies like Interviewer.AI are gambling on it too: 

“Would you rather spend hours doing pre-screen interviews with a huge number of applicants or focus your energies on the top candidates? Let Interviewer.AI do the heavy lifting so you can spend your time evaluating the best candidates.” 

Just like Micro1AI, Interviewer.AI assumes the best candidates – the most qualified, the most passionate people job-hunting in your sector, the best of the best who likely have offers coming in from your competitors – will stick around for you to spend time evaluating them after you shunt them through an AI-led interview process. 

Automated shortlisting and screening questions in an application form are one thing, because a candidate doesn’t know for sure when their information is just being scanned. They can suspend their disbelief that every punctuation mark of their cover letter is pored over by another person, but with an interview, any complex automation is exposed. 

And that is especially true if the AI looks and sounds like a person. 

Across the globe, governments are exploring – or have even passed – AI regulations that make sure the technology is transparent. Any artificial intelligence that’s mimicking a human will have to open the conversation by explaining as much – “by the way, I’m a robot. Sorry for the disappointment!” In some countries, it’s illegal for it to suggest otherwise. 

AI decision-making is disrespectful 

Staying with Interviewer.AI: it scores candidates after an interview to help with your selection process. On its site, it declares: 

“Our AI uses Computer Vision (expression analysis), Voice Analysis, and Natural Language Processing (NLP) to assess and score candidates’ video responses. Candidates are scored on four traits—Professionalism, Energy levels, Communication, and Sociability.” 

Of course, a hiring manager can ignore this grading system if they wish – although if it’s switched on, they’ll inevitably be influenced by the scores in some way. But there’s something intensely dehumanising about ‘energy levels’ or ‘sociability’ being documented mathematically for someone to peer over after refusing to take the time to interview someone themselves. 

Artificial Intelligence – especially in the smoke-and-mirrors state it currently exists in – is incapable of respecting a person in the way we would understand it. By its very definition, it’s disrespectful, doubly so if a Hiring Manager uses the grading system Interviewer.AI recommends. Imagine referring a friend to a job role with the heads-up that “you have to get past the AI first.” 

Relying on an AI’s score to judge a candidate’s suitability for a role is like seeing a 4/10 review of Citizen Kane, never watching it, and then telling your friends that it sucks. 

At least the review is written by a person!

If you're looking at how artificial intelligence can benefit your recruitment process - while keeping people at the heart of it - Jobtrain's own Giles Heckstall-Smith put together a free guide that you can download below!
