A recent study examined how Artificial Intelligence (AI) is reshaping recruitment. The research highlights a critical yet underexplored issue: the impact of AI on disabled job seekers. AI hiring tools, from résumé screeners to video-interview algorithms, are now widely adopted by organisations seeking efficiency and objectivity. However, these systems are not inherently neutral: they frequently reflect biases embedded in their training data and design frameworks.
Disability bias in AI recruitment remains comparatively underexamined, despite the persistent employment gap endured by disabled people. Approximately 53% of disabled people are employed, compared with 82% of non-disabled individuals. If AI recruitment systems penalise disability-related career paths, communication styles, or employment gaps, they risk "disabling by design", inadvertently reinforcing structural exclusion rather than reducing it.
Key Findings: AI Bias and Disabled Candidates
The research found that AI-driven hiring tools can misjudge or filter out disabled candidates in subtle but significant ways. Automated résumé screeners often penalised disability-related CV gaps or non-linear career paths. Video interview algorithms sometimes misinterpreted atypical eye contact, speech patterns, or communication styles as indicators of poor “fit.”
Notably, perceptions of fairness differed sharply between groups. Non-disabled participants were more likely to view AI tools as neutral or even beneficial. In contrast, disabled candidates frequently described the systems as opaque and stressful, referring to them as “black boxes” that lacked transparency or opportunity for contextual explanation.
Accessibility concerns were also prominent. Many disabled applicants encountered platforms incompatible with assistive technologies: for example, tests that were not screen-reader friendly or that imposed rigid time constraints. These barriers extended beyond technical usability; they carried emotional consequences. Disabled participants often reported feeling anxious, invisible, or disadvantaged, while non-disabled peers generally experienced AI hiring processes as routine.
Why These Insights Matter
AI-driven recruitment stands at a pivotal moment. Without careful governance, it risks entrenching existing inequalities. With inclusive design and oversight, however, it has the potential to broaden access to opportunity.
The research identifies several practical steps for organisations seeking to align AI recruitment with inclusive and legally compliant HR practice:
· Audit AI systems for bias. Algorithms should not be assumed fair; regular testing and monitoring are essential.
· Provide alternative assessment routes. Non-video interviews, extended time allowances, or alternative formats should be available where needed.
· Ensure transparency from vendors. Organisations must demand clarity on how AI systems evaluate candidates and how fairness is tested.
· Maintain meaningful human oversight. Recruiters should review AI-driven decisions and retain the authority to override automated outcomes.
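One concrete form the bias-audit step above can take is an adverse-impact check: compare the rate at which a screening tool advances candidates from different groups, and flag any group whose rate falls below four-fifths of the best-performing group's (the "four-fifths rule" from US selection-procedure guidance). The sketch below is illustrative only: the group labels, screening outcomes, and 0.8 threshold are assumptions for the example, not data or methods from the study.

```python
# Minimal sketch of a disparate-impact audit over screening outcomes.
# Group labels, outcome data, and the 0.8 ("four-fifths") threshold
# are illustrative assumptions, not taken from the study.

def selection_rate(outcomes):
    """Fraction of candidates the screener advanced (1 = advanced)."""
    return sum(outcomes) / len(outcomes)

def impact_ratios(group_outcomes, reference_group):
    """Each group's selection rate divided by the reference group's rate."""
    ref_rate = selection_rate(group_outcomes[reference_group])
    return {g: selection_rate(o) / ref_rate for g, o in group_outcomes.items()}

# Hypothetical screening results: 1 = advanced to interview, 0 = rejected.
results = {
    "disclosed_disability": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],  # 3 of 10 advanced
    "no_disclosure":        [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],  # 7 of 10 advanced
}

ratios = impact_ratios(results, reference_group="no_disclosure")
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, candidates who disclosed a disability were advanced at 30% versus 70% for the reference group, an impact ratio of roughly 0.43, well below the 0.8 threshold, so the tool would be flagged for human review. A real audit would also need adequate sample sizes and significance testing before drawing conclusions.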
Adopting these measures not only supports emerging regulatory expectations, such as the EU AI Act's classification of recruitment AI as "high-risk" and the UK Equality Act's accessibility obligations, but also strengthens organisational credibility. Inclusive AI recruitment is not solely a compliance issue; it is a strategic imperative. Failing to address bias risks excluding qualified talent and undermining diversity objectives.
Acknowledgements:
Researcher: Neha Rajendar Verma
Supervisor: Jim Simpson
University of Sussex, UK
The full study report can be accessed here: AI in Recruitment study, full report, Neha Rajendar Verma