When The Times published its recent article, Interview with a Bot, it gave many HR professionals a glimpse into the future of hiring: CVs screened by algorithms, interviews conducted by bots, and candidates selected or rejected without a human ever getting involved.
For entrepreneurial businesses growing fast and hiring even faster, the appeal is obvious. AI can process applications at scale, free up overstretched HR teams, and introduce consistency into decision-making. But AI-led recruitment carries legal and reputational risks that are easy to overlook. AI in recruitment isn't new, but its use is accelerating rapidly, and the legal framework has yet to catch up. While the technology promises efficiency, businesses that deploy it without proper oversight could face significant legal exposure.
The illusion of neutrality
One of AI’s most seductive promises is that it can reduce human bias. But in practice, AI often replicates the very biases it was supposed to remove.
That’s because machine learning models rely on historical data, which often reflects systemic inequality. If your recruitment tool has been trained on past hiring decisions that favoured certain demographics (say, white male graduates), the AI is likely to reproduce those patterns. The result? A high risk of indirect discrimination under the Equality Act 2010.
Employers remain liable for discriminatory outcomes, even if those outcomes were delivered by a third-party algorithm. The bot might reject the candidate, but it's your business that ends up in the tribunal.
The first AI bias claim and why it matters
That legal risk is no longer theoretical. In a recent case, a Chinese job applicant brought a claim for racial discrimination, arguing that he had been unfairly rejected by an AI-powered recruitment system. The tribunal dismissed the claim, finding no evidence of bias in the tool’s decision-making.
The employer was cleared in this case, but the significance of the claim shouldn't be underestimated. This is believed to be the first UK employment tribunal case involving allegations of discrimination by an AI recruitment tool, and it won't be the last. The fact that tribunals are now being asked to assess whether bots are behaving lawfully signals a shift in how employment law is likely to evolve in the AI age.
Just because an AI tool hasn't yet been found to be discriminatory doesn't mean future claims won't succeed, especially if the technology remains unregulated and untested.
Automated decisions and data protection
Aside from discrimination risks, employers also need to consider UK GDPR. If AI is being used to make solely automated decisions, such as filtering out CVs or rejecting candidates at the video interview stage, this could breach Article 22 of the UK GDPR. Under this rule, individuals have the right not to be subject to decisions made without human involvement, unless very specific criteria are met.
In most cases, employers would need explicit consent to use AI in this way, and very few are obtaining it. Transparency is another requirement. If a candidate asks why they were rejected, your business must be able to explain how the decision was made. Many off-the-shelf AI systems lack this ‘explainability’, which puts employers in dangerous territory.
The commercial risk of impersonal hiring
Even where no legal breach occurs, there are reputational and cultural risks. AI can’t detect personality, growth potential or cultural fit. It can’t interpret nuance or appreciate a non-traditional career path. Candidates who feel they’ve been dismissed by a faceless system are unlikely to reapply or speak positively about your brand.
In a tight labour market, rejecting a great candidate because their CV didn't hit the right keyword count is more than just a missed opportunity; it's a business risk.
What should HR and business owners do next?
Our advice is simple: use AI to support, not replace, your recruitment process.
Human oversight is key. If you’re using AI tools to assist in shortlisting, scoring interviews, or analysing candidate responses, there must be a person involved in the final decision. You should audit the tools you use: what data are they trained on? Are outcomes regularly reviewed for bias? Can you explain how a rejection was made if challenged?
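To make the idea of "regularly reviewing outcomes for bias" concrete, here is a minimal sketch of one common approach: comparing selection rates across demographic groups and flagging large disparities using the "four-fifths rule" heuristic. Note the caveats: the four-fifths rule is a US-origin rule of thumb, not a UK legal test under the Equality Act 2010, and all data, group labels, and function names below are hypothetical illustrations, not a compliance tool.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) tuples from a
    screening tool. Returns {group: selection_rate}."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    (80% by default) of the best-performing group's rate - a
    heuristic warning sign, not a legal finding."""
    benchmark = max(rates.values())
    return {g: (rate / benchmark) < threshold for g, rate in rates.items()}

# Hypothetical outcomes from an AI shortlisting tool:
# group_a: 40 of 100 shortlisted; group_b: 20 of 100 shortlisted.
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)

rates = selection_rates(outcomes)   # {"group_a": 0.4, "group_b": 0.2}
flags = four_fifths_flags(rates)    # group_b flagged: 0.2/0.4 = 0.5 < 0.8
```

A real audit would go much further, looking at intersectional groups, statistical significance, and the features driving the tool's scores, but even a simple check like this can surface patterns worth escalating before a candidate or tribunal does.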
Privacy notices must be updated to reflect AI use. And HR teams need training to understand the strengths and limits of this fast-moving technology.
Whether it’s a recruiter or a robot making the call, it’s still your business that’s accountable.
Need help navigating AI and employment law?
Our employment law solicitors work with fast-growing businesses to implement hiring processes that are not just efficient, but also fair, ethical and legally sound. Get in touch for an AI compliance health check.