Giles Heckstall-Smith – the Director of Strategic Development at Jobtrain – was joined by Matt Burney – Senior Strategic Advisor at Indeed – and Ryan Blaker, also at Indeed. Together, they discussed artificial intelligence in recruitment and what to watch out for.
Is Artificial Intelligence a threat or an opportunity for recruiters? How do we keep the human element while embracing this next step for recruitment technology?
Matt: We’re largely using a GPT model as an example of AI, but at the risk of bursting everybody’s bubble – ChatGPT isn’t actually AI! To qualify as artificial intelligence, a system needs to exhibit learning behaviours, among other things.
There are two broad categories:
Artificial General Intelligence – true AI. This technically doesn’t exist yet.
Large Language Models – which are programmes like ChatGPT. They’re like predictive text on steroids. What they do is fantastic, but they’re quite limited by the number of inputs you can give them – though that’s changing and growing every day. ChatGPT has also been limited to data from 2021, but it will soon upgrade to data from April 2023.
Giles: For recruiters, it’s shifted the dial massively, particularly when it comes to data. There’s an awful lot of data collected throughout the recruitment process. To use a practical example, our ATS will automatically sift through all of the CVs you currently have and score them on their potential relevance to the job description.
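Jobtrain hasn’t published how its scoring works, but a common baseline for this kind of CV-to-job-description relevance ranking is TF-IDF weighting with cosine similarity. A minimal sketch of that general approach (all function names are illustrative, not Jobtrain’s actual implementation):

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on any non-alphanumeric character
    return ''.join(c.lower() if c.isalnum() else ' ' for c in text).split()

def tfidf_vectors(docs):
    # docs: list of token lists; returns one TF-IDF-weighted dict per document
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            # smoothed IDF keeps weights non-zero in tiny corpora
            t: (count / max(len(doc), 1)) * math.log(1 + n / df[t])
            for t, count in tf.items()
        })
    return vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_cvs(job_description, cvs):
    # Rank CVs (by original index) on similarity to the job description
    docs = [tokenize(job_description)] + [tokenize(cv) for cv in cvs]
    vecs = tfidf_vectors(docs)
    job_vec, cv_vecs = vecs[0], vecs[1:]
    return sorted(enumerate(cosine(job_vec, v) for v in cv_vecs),
                  key=lambda pair: pair[1], reverse=True)
```

In practice a production ATS would use far richer signals (skills extraction, experience parsing, embeddings), but a keyword-overlap ranker like this illustrates why such scores are only a starting point for a human reviewer.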
Matt: If you look at the burgeoning world of Talent Intelligence and Talent Acquisition, we’re moving into a world where language models can look at datasets quickly and sharpen your understanding of them.
Very simply, rewriting job adverts or job descriptions is a great way to use GPT, Bard, Bing, or any model you want, and there are lots of other things you can do. However, there is some existential risk with that!
Matt: If you look at any newspaper from the past 12 months, you’ll be able to find an article that says the world’s going to hell in a handcart and it’s all because of AI! I remember the days of e-mail and the internet, when similar fears were voiced. In the short term some roles do get displaced, but generally when new technology comes along, the displacement isn’t as widespread as people anticipate.
Realistically what we need to look at is how does automation of any variety – be it artificial intelligence or just general automation – augment what we do. We’re in a very inefficient world. Processes are overly complicated with manual tasks. Indeed did a survey that found around 35% of the working week is given over to doing administrative tasks that a person wasn’t hired to do. If we can use tools like AI to free up that time, we’re giving workers back time to do their job.
The risk there is that a manager looks at that statistic and says “well you’ve got 35% time on your hands! Let’s replace it with 35% more tasks!” or “we can get rid of 35% of the workforce!” but that’s the wrong mindset for those managers to have.
Giles: I remember the days of the first applicant tracking systems, and there were similar fears across the industry. As we saw, ATSs didn’t get rid of the need for hiring managers or recruitment directors. Just like with an ATS, the only recruiters threatened by AI are the ones who are unable to adapt to the oncoming change and learn about AI.
AI can’t sell a job or opportunity to a potential hire in the same way that a recruiter could. It can’t influence the hiring process in the sense of engaging hiring managers, leadership, or challenging job specs. It can’t truly nurture people the same way that the human element can. What it can do is take away a lot of the administrative tasks that get in the way of that work. To my mind, there’s not an existential threat to recruiters out there, but there is a need to embrace and understand this technology closely.
Giles: On my soapbox, I still can’t believe that in 2023, 70% of people who apply for jobs still don’t receive an acknowledgement email, which is going to tank your candidate satisfaction rate. From a candidate perspective, we could employ this technology to create more convincing automated responses that make a candidate feel like there’s a more personal touch. WhatsApp’s a great way to do that because it feels more real. But to be honest, the technology is already there in your applicant tracking system; recruiters across the board need to start harnessing it.
One thing to note with chatbots is that if you are going down this route, make sure the bot is open and transparent about being a bot, rather than pretending to be a person. That will – for sure – increase candidate satisfaction because it means the candidate has a touchpoint.
Matt: If you’ve got a recruitment process, I guarantee I’ll be able to find it on Reddit, Telegram, or a WhatsApp group. Yes, people are using new technology to go and find buzzword answers to questions they know nothing about, but you should be able to sift those answers easily by using online tools that identify AI responses. If you want to video interview someone remotely, make sure the camera shows where their hands are so they can’t be typing on ChatGPT the whole time. Obviously text-to-speech then becomes a problem, but with all kinds of technologies there are loopholes that a candidate could find. Interview fraud like that is very rare but will happen regardless of what technology is available.
Giles: You don’t need to use advanced technology to speed up and streamline your process, because it all goes back to your application process on the front-end. Time how long your application takes; we’ve seen previously that for non-specialist roles the application process should take less than 15 minutes. That will set a precedent. Things like DBS and right-to-work checks can all be integrated with your ATS to shorten your overall time – a quick win.
Giles: Let’s start with equality, diversity and inclusion. If you have an algorithm that can remove unconscious bias when sifting through CVs and the like, that should – on paper – be better than a human being. But of course, the thing to be wary of is that algorithms are designed by people in the first place!
Matt: There are a couple of ways to think about ethics in AI. Transparency is the first. Giving people an explanation of how your AI tool works, who designed it, where it comes from, is key. You need to explain to candidates what your AI tool is, what it does, and why you’re using it.
We saw in a survey that 78% of millennial candidates believe AI has influence over whether or not someone gets hired. While AI tools can help with CV sifting and such, there is currently no AI tool that can handle the full end-to-end process effectively.
Control of our data is key. Data collection is a slippery slope, so if you are using an AI to go and look at data, you need to understand the limitations of that and tell people if the tool has access to their data.
It’s a really deep rabbit hole and a really important one. There’s automation for good, there’s automation for bad. Just doing automation for the sake of it isn’t enough.
We’ve worked closely with NHS organisations to understand and deliver against their requirements – resulting in an intuitive, functionality-rich applicant tracking system that integrates effectively with a range of complementary NHS technical partners. We're proud to work with NHS and healthcare clients across a wide variety of boards and trusts up and down the United Kingdom. If you're interested in learning more, get in touch today!