Automated decision-making (ADM) refers to the process by which decisions are made by machines, algorithms, or other automated systems, with minimal or no human intervention. It is often closely linked to profiling, which analyses aspects of an individual’s personality, behaviour, interests and habits in order to make predictions or decisions about them. Those who see ADM and profiling, taken together, as a force for good point to savings in time and resources, as well as greater consistency of decisions. Sceptics worry that errors in input data can lead to the wrong decision, or that automated decisions can be harsh because they fail to take into account factors that a human decision-maker would.
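To make the idea concrete, the following is a minimal, purely hypothetical sketch (in Python) of a decision based solely on automated processing: a profile is reduced to a score and the outcome follows automatically, with no human in the loop. Every field name, weight, and threshold here is invented for illustration and does not reflect any real system.

```python
# A toy illustration (not any real system) of a decision made "solely by
# automated processing": a profile is scored and the outcome is applied
# with no human intervention. All fields and thresholds are invented.

def automated_loan_decision(profile: dict) -> str:
    """Return 'approve' or 'decline' based only on profiled attributes."""
    score = 0
    score += 2 if profile["years_at_address"] >= 3 else 0
    score += 3 if profile["income"] >= 30_000 else 0
    score -= 4 if profile["missed_payments"] > 0 else 0
    # No human ever reviews the score; the decision takes effect as-is.
    return "approve" if score >= 4 else "decline"

applicant = {"years_at_address": 1, "income": 45_000, "missed_payments": 1}
print(automated_loan_decision(applicant))  # -> 'decline'
```

A single data-entry error (for example, a wrongly recorded missed payment) would flip the outcome with no one to catch it, which is precisely the sceptics’ concern.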
The GDPR contains a very important safeguard in relation to automated decision-making. Article 22 provides that a person has a right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them. ADM is therefore not precluded in all circumstances, as is sometimes suggested. Importantly, a person can ask for human intervention, enabling them to have what has happened confirmed and explained, and to object to ADM. This, though, may change, as many have been lobbying the government to ease restrictions on ADM.
Use of such technology has had mixed results. In the UK, for example, ADM was used to assign grades in national examinations disrupted by COVID-19 restrictions, and it did not end well: in August 2020 the episode was summarised as students being subjected to an unfair, unaccountable debacle.
There remains great hope for the use of ADM and AI in the public sector. In 2018, the Data Justice Lab believed that 53 of 96 UK local government authorities were using such tools. Further, it is believed that at least a quarter of police authorities are using AI for predictions, risk assessments, and decision-making. Sadly, most news coverage has been of the problems; for example, in 2021, Durham Police were found to be using a tool to predict reoffending, but Big Brother Watch observed that it had serious flaws and introduced bias and discrimination. To address such concerns, in 2021 the UK Government introduced the Algorithmic Transparency Recording Standard, updated in 2023; however, it is still not fully utilised across the whole of Government. It has recently been reported that the Ministry of Justice has been testing algorithmic tools to predict the likelihood of an individual – who may not be someone with a past criminal conviction – committing a violent crime such as murder. In contrast, other nations, such as Canada, have embedded transparency of use via a Directive, as have some cities, such as Amsterdam and Helsinki.
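To illustrate one way such predictive tools can introduce bias of the kind Big Brother Watch described, the toy sketch below (entirely hypothetical, with invented features and weights, and not modelled on any police system) shows how a risk score can discriminate even when no protected characteristic is used directly, because an input such as postcode acts as a proxy for one.

```python
# A deliberately simplified, hypothetical sketch of proxy bias in a
# predictive risk tool. No protected characteristic appears as an input,
# but 'postcode' stands in for one. All features and weights are invented.

RISK_WEIGHTS = {
    "prior_contacts_with_police": 0.5,  # may reflect over-policing, not offending
    "lives_in_flagged_postcode": 0.3,   # proxy for the demographics of an area
    "unemployed": 0.2,
}

def risk_score(person: dict) -> float:
    """Sum the weights of whichever features are present; higher = 'riskier'."""
    return sum(w for feature, w in RISK_WEIGHTS.items() if person.get(feature))

# Two people with identical behaviour receive different scores
# purely because of where they live.
a = {"prior_contacts_with_police": True, "lives_in_flagged_postcode": True}
b = {"prior_contacts_with_police": True, "lives_in_flagged_postcode": False}
print(risk_score(a), risk_score(b))  # 0.8 vs 0.5
```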
What are the different types of facial recognition?
Facial recognition is being used ever more widely. The UK Home Office (which covers immigration and law enforcement) confirmed in 2023 that the police use three types of facial recognition:
- Retrospective Facial Recognition (RFR)
- Live Facial Recognition (LFR)
- Operator Initiated Facial Recognition (OIFR)
Retrospective Facial Recognition – RFR is used after an event or incident as part of a criminal investigation. Images are typically supplied from CCTV, mobile phone footage, dashcam or doorbell footage, or social media, and are then compared against images of people taken on arrest in order to identify a suspect. A South Wales Police study found that, without RFR, identifications take around 14 days, whereas with it they typically take minutes.
Live Facial Recognition – LFR enables police to identify people in real time; its deployments are targeted, intelligence-led, time-bound, and geographically limited. Before a deployment, the police will inform the public where they intend to use the technology and where more information on its use can be obtained. The technology takes live video footage of crowds and compares the faces in it against a specific watchlist of people wanted by the police. Following a possible LFR alert, a police officer decides what action to take.
It is an operational decision for individual police forces whether, how, and when to use the technology, in line with the College of Policing’s Authorised Professional Practice. It is understood that all police forces have the technology, and South Wales Police (SWP), the Metropolitan Police Service (MPS – covering London) and Northamptonshire Police have all been public about their use of LFR.
Operator Initiated Facial Recognition – OIFR is a mobile app that allows officers, after engaging with a person of interest, to photograph them and check their identity.
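All three modes rely on the same underlying one-to-many comparison: a probe image is converted into a numerical representation and compared against a watchlist, with a sufficiently close match raising an alert for a human officer to assess. The sketch below is intended only to illustrate that mechanism; it uses random stand-in “embeddings” rather than a real face-recognition model, and the threshold is invented, not one used by any police force.

```python
# A minimal sketch of the one-to-many matching common to RFR, LFR and OIFR:
# a probe face is reduced to a numeric "embedding" and compared against a
# watchlist; a close match raises an alert for a human officer to review.
# Embeddings here are random stand-ins; real systems use trained models.

import numpy as np

rng = np.random.default_rng(0)

def embed(image_id: str) -> np.ndarray:
    """Stand-in for a face-embedding model; returns a unit vector."""
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

watchlist = {name: embed(name) for name in ["person_A", "person_B", "person_C"]}

def match(probe: np.ndarray, threshold: float = 0.6):
    """Return the best watchlist match above the threshold, else None."""
    best_name, best_sim = None, threshold
    for name, ref in watchlist.items():
        sim = float(probe @ ref)  # cosine similarity (both are unit vectors)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name  # an alert, not a decision: an officer then assesses it

# A noisy capture of a watchlisted face still matches above the threshold.
probe = watchlist["person_B"] + rng.normal(scale=0.05, size=128)
probe = probe / np.linalg.norm(probe)
print(match(probe) or "no alert")  # -> 'person_B'
```

Note that the final step in the sketch deliberately returns only an alert: as the Home Office description above makes clear, it is the officer, not the system, who decides what action to take.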
There are concerns about the use of facial recognition. In 2020, the Court of Appeal agreed with Liberty’s submissions on behalf of Cardiff resident Ed Bridges and found that South Wales Police’s use of facial recognition technology breached privacy rights, data protection laws, and equality laws. The Court was concerned about the use of biometric data, and Parliament has expressed the view that it needs further regulation, but so far none has been put in place. Concerns are in focus again because, in March 2025, the UK’s Metropolitan Police confirmed that its first permanent installation of LFR cameras will be in the South London suburb of Croydon.
If you have questions or concerns about automated decision-making and facial recognition, please contact technology lawyers James Tumbridge and Robert Peake.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.