Navigating the legal and ethical risks of AI in HR and recruitment

The rapid expansion of artificial intelligence across professional and personal spheres has inevitably led to its integration into human resources and recruitment functions. Research commissioned by ACAS indicates growing acceptance among employers, with approximately one-third of UK businesses surveyed believing AI deployment will enhance productivity.

AI’s capability to enhance efficiency and streamline administratively intensive tasks is undeniable, but the operational, ethical and legal hazards accompanying its widespread implementation are becoming increasingly apparent.

This is particularly evident in HR and recruitment contexts, where the technology risks becoming a significant liability for organisations tempted to over-rely on automation at the expense of sound management procedures and human judgement.

The business rationale for AI in recruitment

Regarding recruitment specifically, numerous organisations find themselves attracted to AI’s ability to expedite candidate screening, automate repetitive administrative functions and derive insights from substantial volumes of applications and related information.

In certain respects, this represents nothing particularly novel, as many enterprises have utilised AI or AI-adjacent algorithms for years, perhaps unknowingly, through their engagement with online recruitment platforms.

However, both the scale and scope of available technology have advanced significantly in recent years, accompanied by intensified scrutiny of AI’s control, applications and ethical implications.

Contemporary AI-powered platforms are frequently deployed to screen CVs, align suitable candidates with AI-generated job descriptions, coordinate interview scheduling and produce candidate assessments. That these tasks can be accomplished in a fraction of the time, and at lower cost, than conventional methods explains the attraction for most users.

A growing number of organisations employ AI screening technologies during online interviews, enabling preliminary candidate assessments to be conducted virtually and entirely by AI chatbots. Once employment has commenced, AI can also be used to routinely monitor and evaluate employee activity.

Theoretically, this liberates HR professionals from administrative burdens, enhancing efficiency and enabling them to dedicate more time to complex strategic tasks. In reality, when inadequately managed, it may equally generate significant problems.

When automation exceeds appropriate boundaries

Despite its apparent advantages, AI’s deployment in recruitment carries potential risk. Alongside efficiency improvements come legitimate concerns surrounding fairness, data protection and transparency.

From an employment law perspective, the most significant risk involves discriminatory outcomes. Algorithms trained on biased datasets, often reflecting unrepresentative cohorts or historical inequalities, risk perpetuating or exacerbating those patterns. The frequently cited illustration is Amazon’s discontinued AI recruitment tool which, having been trained predominantly on male CVs, went on to discriminate against female applicants.

The risks extend beyond discrimination. Samsung, for example, prohibited all employees from using generative AI platforms after sensitive proprietary code was inadvertently uploaded to ChatGPT.

Legislative and regulatory context

Under the Equality Act 2010, employers must ensure recruitment practices avoid producing discriminatory outcomes, whether directly or indirectly. AI tools disadvantaging protected groups, whether intentionally or otherwise, may give rise to claims under sections 13 or 19 of the Act.

The UK GDPR and the Data Protection Act 2018 impose additional obligations, with Article 22 specifically protecting data subjects, including employees and job applicants, from decisions with legal or similarly significant effects made solely on the basis of automated processing.

The Employment Rights Act 1996 also remains relevant. Any dismissal resulting from a flawed or unreviewed decision, including one influenced by AI, exposes the employer to the risk of being unable to demonstrate that the decision was reasonable, procedurally fair or adequately investigated. This could generate unfair dismissal claims.

Where accountability resides

Organisations, HR departments and legal advisers alike bear responsibility for the material they produce and the decisions they reach, irrespective of whether AI tools were involved in that process.

Courts have recently adopted a decidedly unfavourable view of inaccurate and, in certain instances, entirely fabricated evidence presented as factual. This was demonstrated by the recent High Court judgment in the consolidated cases of Ayinde and Al-Haroun [2025] EWHC 1383 (Admin), which brought together two entirely distinct instances in which legal practitioners were suspected of using AI during proceedings and citing fictitious, fabricated case law.

The referrals emerged from “the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.”

In one instance, it remained disputed whether AI was the source of the fabricated citations presented in proceedings, whilst in the second the solicitor acknowledged that AI had been among the research tools used. The lawyers involved were consequently referred (or referred themselves) to the Bar Standards Board and the Solicitors Regulation Authority, and were thoroughly admonished whilst avoiding contempt proceedings.

In a comparable well-publicised case before the First-tier Tax Tribunal (Harber v HMRC [2023] UKFTT 1007 (TC)), non-existent case law was cited. During a hearing, HMRC contended that the opposing party had produced a response document citing cases that could not be identified as genuine. The FtT concluded that the cited cases had been generated by AI and disregarded them.

Practical guidance

To manage legal and reputational risk, employers incorporating AI tools into operational practices should develop and implement clear AI use policies governing both organisational responsibilities and employee conduct.

Policies should identify approved tools and tasks for which these may be employed, prohibited uses (such as uploading confidential information to public platforms) and governance structures for oversight. Implementing these measures will not eliminate risk entirely, but will significantly diminish exposure to discrimination and data breach claims.
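By way of illustration, parts of such a policy can even be encoded as machine-checkable rules. The following Python sketch is illustrative only, with invented tool names, tasks and data classes; it shows how an approved-uses list and a prohibition on sending confidential material to public platforms might be checked before a tool is used, not how any particular product works.

    # Illustrative sketch: tool names, tasks and data classes are invented.
    APPROVED_USES = {
        # tool -> tasks the policy permits it to perform
        "internal-cv-screener": {"cv_screening", "interview_scheduling"},
        "public-gen-ai": {"drafting_job_adverts"},
    }
    PUBLIC_TOOLS = {"public-gen-ai"}  # tools where data leaves the organisation
    PROHIBITED_ON_PUBLIC = {"confidential", "special_category_data"}

    def is_use_permitted(tool: str, task: str, data_class: str) -> bool:
        """Check a proposed AI use against the policy before it happens."""
        if tool in PUBLIC_TOOLS and data_class in PROHIBITED_ON_PUBLIC:
            return False  # e.g. no confidential material on public platforms
        return task in APPROVED_USES.get(tool, set())

    print(is_use_permitted("internal-cv-screener", "cv_screening", "personal_data"))  # True
    print(is_use_permitted("public-gen-ai", "drafting_job_adverts", "confidential"))  # False

Even where the policy itself lives in a handbook, expressing the rules this precisely forces the organisation to decide, in advance, which combinations of tool, task and data are acceptable.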

Systems require auditing to identify hidden bias and to ensure their security, robustness and compliance with equality, data protection and employment legislation. In particular, the training data used by AI systems should be scrutinised to ensure it reflects a diverse and representative candidate pool across races, ethnicities, genders and educational backgrounds. Outputs should then be monitored against that data to confirm results align with the intended policy, as sketched below.
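As a simple illustration of output monitoring, the sketch below compares shortlisting rates across groups using the "four-fifths" adverse impact ratio, a screening heuristic drawn from US practice rather than a legal test under the Equality Act 2010. The data and the 0.8 threshold are assumptions for demonstration only.

    from collections import defaultdict

    def selection_rates(outcomes):
        """Shortlisting rate per group from (group, shortlisted) pairs."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, shortlisted in outcomes:
            totals[group] += 1
            selected[group] += shortlisted
        return {g: selected[g] / totals[g] for g in totals}

    def adverse_impact_ratios(rates):
        """Each group's rate relative to the best-performing group.

        A ratio below roughly 0.8 (the 'four-fifths' rule of thumb) is a
        common trigger for further investigation, not a finding of
        discrimination in itself.
        """
        benchmark = max(rates.values())
        return {g: r / benchmark for g, r in rates.items()}

    # Hypothetical outcomes from an AI shortlisting tool.
    outcomes = [("female", True), ("female", False), ("female", False),
                ("male", True), ("male", True), ("male", False)]
    rates = selection_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        print(f"{group}: rate={rates[group]:.2f}, ratio={ratio:.2f},"
              f" {'review' if ratio < 0.8 else 'ok'}")

A flagged ratio should prompt human investigation of the tool and its training data, not an automatic conclusion either way.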

Critically, human oversight needs to be integrated into any process, including regular reviews, clear accountability for automated systems and training for HR and line managers who utilise them.

The law is explicit about individuals’ rights in relation to significant decisions that may affect them, and about AI’s role in those decisions, so teams need a clear understanding of, and a defined process for, the points at which decision-making must remain firmly in human hands.
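A minimal sketch of what keeping decisions in human hands might look like in practice, assuming a hypothetical screening workflow rather than any real platform: an AI recommendation is treated as advisory, and no outcome is finalised without a named reviewer’s recorded decision (cf. Article 22 of the UK GDPR).

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ScreeningResult:
        candidate_id: str
        ai_recommendation: str  # e.g. "reject" or "progress"
        ai_rationale: str       # logged so the reasoning can be audited

    @dataclass
    class HumanReview:
        reviewer: str           # a named individual, for accountability
        decision: str
        notes: str

    def finalise_decision(result: ScreeningResult,
                          review: Optional[HumanReview]) -> str:
        """Enforce human sign-off: the AI output alone never decides."""
        if review is None:
            # Without a recorded human review, route to a manual queue
            # rather than act on the automated recommendation.
            return "pending_human_review"
        return review.decision  # the human decision governs

    result = ScreeningResult("cand-042", "reject", "low keyword-match score")
    print(finalise_decision(result, None))  # pending_human_review
    print(finalise_decision(result, HumanReview(
        "J. Smith", "progress", "relevant experience the filter missed")))

The design point is that the automated recommendation and its rationale are recorded for audit, but carry no effect until a person accountable by name has made the decision.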

Conclusion

Artificial intelligence offers genuine efficiency gains for HR and recruitment functions; however, these benefits must be balanced against substantial legal and ethical risks. The cases of algorithmic discrimination, data breaches through careless use of generative AI platforms, and fabricated legal evidence in court proceedings all illustrate how readily AI can create serious liabilities when deployed without adequate safeguards.

For legal practitioners advising clients on AI adoption, the message is unambiguous: robust governance frameworks, regular algorithmic audits, comprehensive training programmes and, most fundamentally, meaningful human oversight are non-negotiable requirements. The Equality Act 2010, the UK GDPR, the Data Protection Act 2018 and the Employment Rights Act 1996 establish clear parameters within which AI must operate.

As courts demonstrate increasing willingness to sanction inadequate AI governance, whether through discrimination claims, data protection enforcement or professional misconduct proceedings, organisations that treat AI as a compliance-free efficiency tool do so at considerable peril. Those that establish rigorous safeguards from the outset will be far better positioned to harness AI’s benefits whilst minimising its very real legal hazards.

Steph Marsh is an Employment Law specialist and Head of the Employment team at Coodes Solicitors. She acts for both employers and employees in contentious and non-contentious matters, with extensive experience advising clients on discrimination issues, redundancy situations and staff management.

Coodes Solicitors is a leading regional law firm with offices in Falmouth, Holsworthy, Launceston, Liskeard, Newquay, Penzance, St Austell and Truro. It offers a full range of legal services, including corporate law, commercial law, litigation and dispute resolution, employment law, commercial property law, family law, contentious probate, medical negligence and private client matters. 
