How should artificial intelligence be used in recruitment?
It’s a debate that’s as complex as it is urgent, and one we believe deserves proper scrutiny. AI has been hailed as a game-changer for hiring, promising speed, objectivity and innovation. But is it really the fair and efficient solution it claims to be? Or are we at risk of handing too much power to systems that don’t truly understand people?
AI efficiency
There’s no denying the appeal of AI in recruitment. Faced with hundreds, sometimes thousands, of applications for a single role, many employers are turning to AI to make sense of the chaos. And the numbers back up its potential. According to LinkedIn’s Future of Recruiting Report, 67% of recruiters say AI has significantly improved efficiency in the hiring process.
AI doesn’t sleep. It doesn’t forget to call someone back. It doesn’t skim-read a CV because it’s been reviewing applications for six straight hours. And when used well, it can help reduce administrative bottlenecks, flag strong candidates, and even reduce some unconscious bias by ignoring names, photos or personal details that might sway a human recruiter. For high-volume hiring, especially in tech or customer service roles, AI has already become a go-to tool for managing the early stages of recruitment. Tools that pre-screen CVs, automatically rank candidates, or schedule interviews are no longer novel; they’re becoming standard. When efficiency matters, AI is proving to be a powerful assistant.
But faster isn’t always fairer
Despite its advantages, AI is far from a silver bullet, and when used as the sole decider, it may be making things worse, not better.
One of the biggest misconceptions about AI is that it’s neutral. In reality, most AI systems are trained on historical data, and that data is often riddled with human bias. If a company has consistently hired mostly white, male candidates for the past decade, and AI is trained on that data, guess what it learns to prefer? The same kinds of applicants.
According to a 2024 Business Insider report, when researchers used ChatGPT to score fictional candidates for a job, it consistently gave higher ratings to white candidates than Black ones, despite identical qualifications. A similar study reported in the Guardian found that AI video interview tools misinterpreted non-native speech patterns and penalised candidates with strong accents or visible disabilities. In some cases, the technology struggled to understand them at all.
There’s also the issue of what AI can’t see. It doesn’t recognise grit. It can’t truly detect the quiet determination of someone who’s learned a new skill on their own, or the resilience behind a career break. These are things humans pick up on in conversations, in stories, and in subtle cues that machines simply don’t grasp. The fear isn’t just that AI might miss great candidates, it’s that it could actively exclude them.
Recruitment is about relationships
At its core, the debate isn’t about whether AI should be used, it’s about how and when we use it.
Some hiring managers see AI as a way to bring logic and structure to a messy, emotional process. Others worry we’re outsourcing deeply human decisions to systems that lack empathy, nuance and accountability.
Interestingly, even companies investing heavily in AI are approaching it with caution. A report from the World Economic Forum found that 85% of HR leaders say human oversight is essential when using AI in hiring. They recognise that algorithms can amplify bias just as easily as they can suppress it, and that the cost of getting it wrong is high.
Candidates, too, are wary. Surveys show that over 60% of applicants are uncomfortable with AI making decisions about their job prospects. Many worry about the lack of transparency. If a machine says “no,” who do you appeal to? How do you learn, improve, or get feedback?
In a world increasingly obsessed with data, we risk forgetting that recruitment is about relationships, not just results.
Our Perspective: Augment, Don’t Replace
At ImpactMatch, we believe AI absolutely has a place in recruitment, but only at the right stage, and never at the cost of human insight. We don’t rely on AI to filter CVs or run interviews. Every candidate we work with is handpicked and interviewed by a real person. That way, we get a full picture, not just of what’s on paper, but of who they are, how they think, and what they could bring to a team.
Once we’ve done the groundwork, spoken to candidates, built rapport, and put together a balanced shortlist of those who stood out, we bring in AI to support the next step. Specifically, we use our Value Matcher tool, which, alongside our 1:1 interviews, looks at publicly available information to help explore how a person’s working style might line up with a company’s mission and culture. It’s not there to make decisions. It doesn’t rank people or knock anyone out of the running. Instead, it gives hiring managers more insight, a conversation starter, not a scorecard.
This isn’t automation taking over. It’s humans staying in control, with a bit of help from smart technology. Our system is designed to guide, not decide.
Used well, AI can make the hiring process smoother. It can save time, cut down on admin, and help keep things consistent. But the decision about someone’s future? That’s not a job for a machine. That’s something people should own. Because hiring isn’t about checking boxes. It’s about seeing potential, recognising character, and giving people a chance to show who they are.
Machines can’t hold conversations. They can’t feel motivation or notice the spark in someone’s story. The best hiring decisions are thoughtful. They take time, care, and a bit of gut instinct. We think that’s worth protecting.
Do you want to see what smarter, more human search looks like? Drop us a message; we’d love to show you how we find people who truly match.
– Author: Cara Mcahill, Intern, ImpactMatch