There has been an upsurge in the use of technology at work during the pandemic. Virtual meetings and the use of audio-visual technologies have become second nature. These technologies are reportedly being used increasingly in recruitment too, and many employers will be inclined to digitalise their recruitment and selection processes. However, the use of artificial intelligence (AI) and other automated decision-making (ADM) technology is not without risk. Employers should tread carefully when purchasing and implementing such systems.

In this article, Audrey Williams examines the key risks and safeguards which organisations must consider and outlines the likely developments that may arise in the future.

Recruitment assessment systems

These can range from the automatic processing and filtering of CVs and applications (particularly useful where high volumes are received) to virtual interviews and augmented- or virtual-reality testing and assessments. As more is done via virtual meetings, it might be assumed that candidates will be comfortable and confident using automated or asynchronous video interviews, which appear to be the next key development. The advantages include freeing up resources and interview panels where the technology is used to “interview” a long list of candidates and produce the shortlist.

Legal risks

With all such technology, employers must take steps to minimise the risk of bias and discrimination that may arise, as well as addressing data protection obligations.

To a certain extent, bias can be addressed when selecting the provider of the technology by ensuring the provider’s data and algorithms have been stress-tested for bias and discrimination. An employer will want evidence from the provider demonstrating that the risk of unfair bias, for example against candidates because of their gender, race or age, has been assessed and minimised.
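
By way of illustration, the sketch below shows one question an employer might put to a provider: do the tool’s pass rates differ between groups by more than chance would explain? This is a minimal sketch in Python using a chi-squared test of independence; the group labels and counts are hypothetical, and a real audit would draw on the provider’s actual test data and more than one fairness measure.

```python
# A minimal sketch of a statistical disparity check on a screening tool's
# outcomes. The groups and pass/fail counts below are hypothetical.
from scipy.stats import chi2_contingency

# Hypothetical pass/fail counts per demographic group from vendor test data.
contingency = [
    [85, 35],  # group_a: passed, failed
    [70, 50],  # group_b
    [40, 60],  # group_c
]

# Chi-squared test of independence: are outcomes independent of group?
chi2, p_value, dof, _expected = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.4f}")

if p_value < 0.05:
    print("Pass rates differ by group more than chance suggests; investigate.")
else:
    print("No statistically significant disparity detected in this sample.")
```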

In addition to the general risk of discrimination claims, particular concerns arise with disability discrimination. Given the obligation under the Equality Act to make reasonable adjustments to remove or reduce disadvantage, mechanisms must exist to enable disabled candidates to flag barriers they might face in undertaking an automated video interview or other automated process, and to open a discussion about any exemptions or adjustments needed. Examples include candidates with visual or hearing impairments, those on the autism spectrum, and those with a facial disfigurement or facial paralysis resulting from, say, a stroke or Bell’s palsy. Because such systems are designed to read and assess a candidate’s facial expressions, level of eye contact, tone of voice and language, these candidates can be placed at a disadvantage which must be addressed. Language and tone of voice can also be more difficult for candidates whose first language is not English, raising the risk of racial bias and discrimination, and for those with speech impediments, where disability risks again arise.

Finally, as with all such technology, employers must meet their data protection obligations: processing must have a lawful basis, privacy rights must be protected and, increasingly, the results must be capable of being explained and justified.

What practical steps should employers take?

  • Employers should challenge the validity of the assessment process and ask about the data used to build and design the technology. What analysis has been done to eliminate or reduce the risk of bias and discrimination?
  • Employers should also ensure that the test results themselves have been assessed for bias and question how current those impact assessments are.
  • When adopting the technology, employers should ensure the process allows for human intervention: if a candidate needs adjustments because of a disability, make it clear who they should contact, and how, to discuss what might be required.
  • If your organisation introduces new technology, ensure its use is subject to regular impact assessments, as part of your wider diversity and inclusion analysis, so that any change in recruitment outcomes is identified early (a simple monitoring sketch follows this list).
  • Implement measures to minimise the risks of bias and discrimination so that, in the event of a legal challenge, a reasonable-steps defence might be available. For such a defence to succeed, those measures must be maintained, kept under review and updated.
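
To illustrate the kind of ongoing monitoring described above, the following sketch applies the “four-fifths rule”, a rule of thumb drawn from US employment guidance rather than a UK legal test, to shortlisting outcomes. The group names and figures are invented for the example; in practice the data would come from the employer’s own diversity monitoring records.

```python
# A minimal sketch of an adverse-impact ("four-fifths rule") check on
# recruitment outcomes. All group labels and counts are hypothetical.

# Shortlisting outcomes by group: (shortlisted, total applicants)
outcomes = {
    "group_a": (48, 120),
    "group_b": (30, 100),
    "group_c": (12, 60),
}

# Selection rate for each group.
rates = {g: shortlisted / total for g, (shortlisted, total) in outcomes.items()}

# Compare each group's rate with the highest rate; a ratio below 0.8
# is a common flag for potential adverse impact and should prompt review.
best_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best_rate
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```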

This is an area where equality impact assessments and data protection impact assessments should go hand in hand in the sourcing, adoption and use of these technologies, and in the monitoring of outcomes.

Keeping up to date and monitoring legal developments in this area

Keeping up to date is particularly important now that several regulators and institutions, including the Information Commissioner’s Office in the UK, the European Commission and the Financial Conduct Authority, are becoming increasingly vocal about the use of AI and ADM. New legislation, guidance and best-practice recommendations are likely in the near future.

In November 2021, the UK All-Party Parliamentary Group on the Future of Work issued a report calling for action, including new legislation. Its recommendations include an Accountability for Algorithms Act, improved digital protection and statutory guidance.

Those operating in a global environment who may be using this technology across the jurisdictions in which they operate should keep an eye on the European Commission’s proposal to introduce new legislation (draft regulations are currently under consideration). There are also proposals coming from the United States and individual European countries. One recent example is Germany, which has introduced a new obligation to consult the works council (a consultative body representing workers) when introducing AI in the workplace.

At a practical level, the Trades Union Congress (TUC) has issued a manifesto containing recommendations and is also seeking changes, including new legal protections to safeguard employees in relation to the use of AI and ADM in the workplace. These include a focus on job applicants.

The TUC’s recommendations include:

  • Reversing the burden of proof in discrimination claims challenging AI or ADM systems in the workplace, so that the employer would have to disprove discrimination.
  • Extending liability to all entities in the ‘value chain’, meaning that providers and developers could be held liable for discrimination alongside the employer.
  • Echoing the new legislation in Germany, a new statutory duty to consult recognised trade unions about the deployment of what the TUC describes as high-risk AI and ADM systems in the workplace.
  • Requiring employers to maintain a register with information about how AI and ADM are being used in their workplace. The manifesto also recommends making this register accessible to existing employees, workers and job applicants, and requiring the information to be included in the section 1 statement of employment particulars or the contract of employment.

The use of AI is likely to become increasingly popular. Before integrating it into existing recruitment processes, employers should take precautions to ensure that the technology does not embed discriminatory practices. As this is an area that is expanding rapidly, staying informed about the implementation and regulation of the technology is vital.

If you have any questions on the use of AI/ADM in recruitment, please contact Audrey Williams.

This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.