Algorithms in the hiring room: ICO spotlight is on automated recruitment
The Information Commissioner's Office has made automated decision-making (ADM) in recruitment a top regulatory priority.
Following its AI and biometrics strategy published in June 2025, the ICO gathered evidence from over 30 employers between March 2025 and January 2026, producing a detailed picture of where organisations are falling short of their legal obligations.
Employers should review their recruitment processes (as well as their supplier and agency engagement terms) to ensure that talent pipeline, candidate review, selection and shortlisting practices comply with the legal requirements governing the deployment of artificial intelligence (AI) and ADM.
What the ICO found
Most employers believe their automated tools merely provide decision support. The evidence, however, shows they are making solely automated decisions with legal or similarly significant effects on candidates, without the required safeguards. The key exposure is the "solely automated decision-making" trap: many employers will have assumed that, because a recruiter or HR manager reviews a shortlist at some point, they are covered. The ICO's findings suggest that assumption is frequently wrong.
The test for meaningful human involvement is clear. A human must be able to genuinely influence the outcome before it is applied, with the authority and competence to change it. A recruiter who rubber-stamps or ‘validates’ an algorithmic ranking, or who never accesses a video file before a rejection fires (without manually reviewing the supporting application and CV alongside the role criteria and required competencies and skillsets, for instance), may not meet that standard. In the absence of meaningful human scrutiny, an organisation could unwittingly be opening itself up to the risk of a successful discrimination claim.
The human cost of algorithmic rejection
Some candidates are experiencing a recruitment process that is increasingly impersonal and, in many cases, unlawful. A third-year university student described applying for over 100 jobs and receiving a rejection less than two minutes after submission, before, presumably, any human could have read her application. The typical automated journey involves AI CV screening, testing, and AI video interviews in which candidates record responses with no human on the other end.
The scale is significant: 70% of employers plan to increase their use of AI and automation in recruitment over the next five years, and a LinkedIn survey found 89% of recruiters agree AI reduces the time it takes to fill a vacancy. However, a Totaljobs survey of 2,002 UK workers found 62% feel uncomfortable applying where the entire process is AI-driven, and research by Omni RMS found 29% of jobseekers would drop out of a recruitment process due to perceived AI overuse.
What is happening: Automated decision-making without meaningful human involvement
A ‘cost-efficiency first’, AI (or machine learning) driven practice has emerged: candidates are algorithmically rejected at the testing stage and then invited to complete a video interview anyway. This is likely unlawful on multiple grounds. It involves collecting personal data, including facial expressions and vocal tone, that is excessive where the rejection decision has already been made.
Organisations that are primarily fishing for candidate applications and video content, predominantly to train AI models or build talent insight databases, may be at risk of regulatory investigation as well as data subject complaints. Such practices may also breach the fairness and transparency principle under Article 5(1)(a) of the UK GDPR, as the video stage is presented as genuine when the outcome has already been determined at the testing stage. The immediate rejection that follows without human review further violates Article 22 of the UK GDPR.
The broader regulatory landscape
The ICO is consulting on updated draft ADM guidance following the introduction of the Data (Use and Access) Act 2025, with the consultation closing on 29 May 2026. The ICO has committed to scrutinising major employers and recruitment platforms and has made clear it will use its enforcement powers where organisations fail to act.
Why this matters and what you can do now
When human review is cursory or inconsistent, the safeguards required by the UK GDPR must apply. Those safeguards include informing candidates that ADM is being used (and considering whether its use is appropriate at all in the absence of a data privacy or AI risk or conformity assessment, e.g. one that identifies and eliminates any bias or discrimination and weighs the reputational risk associated with such tools), offering the right to request human review, and ensuring candidates can contest the decision. Few organisations appear to be meeting all three.
The bias risk is a separate but compounding concern. Algorithmic tools trained on historical data can replicate and entrench existing patterns of discrimination. Without active monitoring, employers may not know their tools are producing discriminatory outcomes until a complaint or claim arrives, by which point they may struggle to defend it successfully.
The timing is also significant. With the Data (Use and Access) Act 2025 now in force and the ICO consulting on updated ADM guidance, the legal framework is shifting. Clients who act now will be better placed to respond to final guidance and avoid enforcement.
Practical takeaways for employers
We recommend that employers and affected organisations focus on these seven areas:
- Audit your use of automation in hiring: Map every stage at which automated tools influence hiring decisions and assess whether human involvement is genuinely meaningful.
- Stop collecting data from already-rejected candidates: Inviting them to further stages constitutes a data minimisation breach and deceptive processing.
- Update your candidate-facing privacy notices: Candidates must be clearly informed that ADM is in use, what it decides and what their rights are, including the right to request human review and to challenge decisions.
- Apply human review consistently across all candidates at each stage: If human review is part of your process, apply it uniformly across all candidates at the relevant stage. Selective or ad hoc human involvement does not provide the protection you may think it does.
- Implement bias monitoring and risk assessments: Regularly test automated tools for discriminatory outcomes and ensure vendor contracts require the same. Where applicable, keep a written record of data privacy and/or AI risk or conformity assessments; the OECD responsible AI principles and the EU AI Act are, for starters, a useful benchmark. An illustrative sketch of outcome monitoring follows this list.
- Respond to the ICO consultation before 29 May 2026: For those operating recruitment platforms or making significant use of ADM, take this opportunity to shape the final guidance.
- Review and update vendor contracts: Responsibility for UK GDPR compliance sits with the employer as data controller, not the technology provider. Review your agreements with AI recruitment tool suppliers to ensure they reflect your compliance obligations and provide adequate contractual protection.
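As an illustration of the kind of outcome monitoring referred to above, the short Python sketch below compares shortlisting rates between candidate groups and flags large disparities. It is a minimal, hypothetical example only: the group labels, data structure and 0.8 threshold are assumptions for illustration, not a legal test, an ICO-endorsed methodology or a substitute for proper assessment under the Equality Act 2010 and the UK GDPR.

```python
# Minimal, illustrative sketch of outcome monitoring for an automated
# screening tool. Group labels, the 0.8 threshold and the sample data are
# hypothetical assumptions -- this is not a legal test or ICO methodology.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group_label, was_shortlisted) tuples."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a commonly cited rule of thumb, not a UK legal
    standard)."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best and r / best < threshold}

# Example usage with hypothetical screening outcomes
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(outcomes)
print(rates)                    # e.g. {'group_a': 0.67, 'group_b': 0.33}
print(flag_disparities(rates))  # groups whose rate is notably lower
```

In practice, monitoring of this kind would need to cover the protected characteristics relevant under the Equality Act 2010 and be repeated whenever the tool, its training data or the candidate pool changes materially.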
Our expertise
This article was prepared with the assistance and support of Kathryn Balogun, a trainee in our commercial team. For more information about how AI and automation in the recruitment process will affect your organisation, or to discuss your circumstances further, please reach out to Philip James (data, privacy and cybersecurity) or Emma Capper (employment).
Authors
Emma Capper
Partner
Kathryn Balogun
Trainee Solicitor
Contact
Philip James
Partner
philip.james@brownejacobson.com
+44 (0)330 045 1022