The use of artificial intelligence (AI) is now widespread. You may not even be aware that you rely on it. The problems associated with AI, particularly generative AI, make it important to understand how these technologies work and how they affect both individuals and businesses. In this article, our fintech partner Simon Deane-Johns considers the use and regulation of AI.

What is AI used for?

AI has a number of uses, including:

  • Clustering: putting items of data into new groups (discovering patterns).
  • Classifying: putting a new observation into pre-defined categories based on a set of ‘training data’.
  • Predicting: assessing relationships among many factors to gauge risk or potential under particular conditions, e.g. creditworthiness.
  • Generating new content.
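To make the distinction between the first two uses concrete, here is a minimal Python sketch (using the scikit-learn library, with data and labels invented purely for illustration): a clustering algorithm discovers groups without being given any labels, while a classifier learns from labelled ‘training data’ and assigns a new observation to one of the pre-defined categories.

    # Illustrative only: 'clustering' (discovering new groups) versus
    # 'classifying' (assigning a new observation to pre-defined categories).
    # The data, features and labels are invented for this sketch.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    # Invented 'training data': two numeric features per record.
    X = np.array([[1.0, 2.0], [1.2, 1.8], [8.0, 9.0], [8.3, 8.7]])

    # Clustering: groups are discovered with no labels supplied.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Classifying: a model learns from labelled examples and assigns a new
    # observation to one of the pre-defined categories ('good'/'bad' risk here).
    y = ["good", "good", "bad", "bad"]
    classifier = DecisionTreeClassifier().fit(X, y)
    prediction = classifier.predict([[1.1, 1.9]])

    print(clusters)    # e.g. [0 0 1 1] - the discovered groupings
    print(prediction)  # e.g. ['good'] - the category assigned to the new observation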

Unacceptable uses for AI

There is a long list of problems or challenges associated with the development, deployment and use of AI. These include issues with training data, the inability to remove bias, ‘hallucination’, explainability and the fact that AI involves only pattern-detection, not ‘truth’. The default position among many AI technologists is that AI development should free-ride on human creativity and personal data, which has implications for copyright, trade marks and privacy. As a result, an AI might only be allowed to run in a fully automated way where, for example, commercial parties are able knowingly to accept a certain level of inaccuracy and bias, and losses of a quantifiable scale. Yet even there disasters have arisen through algorithmic trading, and markets for some instruments have suddenly ground to a halt through human distrust of the outputs.

But an AI should not be used to fully automate decisions that affect an individual’s fundamental rights and freedoms, grant benefits claims, approve loan applications, invest a person’s pension pot, set individual prices or predict, say, criminal conduct.

Human intervention in automated decision-making is no panacea. AI might therefore be used in the steps leading to a decision, but a human should be able to explain why and how the decision was reached, including the parameters used, and should be able to re-take the decision if necessary.
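As a purely illustrative sketch (the field names, model details and threshold below are assumptions for this example, not a prescribed standard), a system built along these lines would record the inputs, parameters and provisional output behind each automated step, so that a human reviewer can explain the decision and re-take it where necessary:

    # Illustrative sketch only: recording the inputs and parameters behind an
    # AI-assisted decision so a human can explain it and re-take it if necessary.
    # All field names and the threshold are assumptions made for this example.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class DecisionRecord:
        applicant_id: str
        inputs: dict                  # the data the model actually saw
        model_version: str            # which model produced the score
        score: float                  # the raw model output
        threshold: float              # the cut-off applied to that score
        provisional_outcome: str      # outcome suggested before human review
        reviewed_by: Optional[str] = None
        final_outcome: Optional[str] = None
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def review(record: DecisionRecord, reviewer: str, final_outcome: str) -> DecisionRecord:
        """A human reviewer confirms or overturns the provisional outcome."""
        record.reviewed_by = reviewer
        record.final_outcome = final_outcome
        return record

    # Example: a loan application scored by a model, then reviewed by a person
    # who can see exactly what the model was given and how the cut-off was applied.
    record = DecisionRecord(
        applicant_id="A-123",
        inputs={"income": 32000, "existing_debt": 4500},
        model_version="credit-model-v2",
        score=0.62,
        threshold=0.70,
        provisional_outcome="decline",
    )
    record = review(record, reviewer="underwriter-07", final_outcome="approve")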

AI and data protection

The Information Commissioner’s Office (ICO) has identified AI as a priority area and is focusing in particular on the following aspects: (i) fairness in AI; (ii) dark patterns; (iii) AI as a Service (AIaaS); (iv) AI and recommender systems; (v) biometric data and biometric technologies; and (vi) privacy and confidentiality in explainable AI.

In addition to the basic principles of UK GDPR and EU GDPR compliance at Articles 5 and 6 (lawfulness through consent, contract performance or legitimate interests; fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; and integrity and confidentiality), AI raises a number of further issues. These include:

  • The AI provider’s role as data processor or data controller.
  • Anonymisation, pseudonymisation and other AI compliance tools (a simple pseudonymisation and data-minimisation sketch follows this list).
  • Taking a risk-based approach when developing and deploying AI.
  • Explaining decisions made by AI systems to affected individuals.
  • Only collecting the data needed to develop the AI system and no more.
  • Addressing the risk of bias and discrimination at an early stage.
  • Investing time and resource to prepare data appropriately.
  • Ensuring AI systems are secure.
  • Ensuring any human review of AI decisions is meaningful.
  • Working with external suppliers to ensure AI use will be appropriate.
  • Profiling and automated decision-making – important to consider that human physiology is ‘normally’ distributed but human behaviour is not.
  • The right to object to solely automated decision-making, except in certain situations where there must at least be a right to human intervention, with further restrictions for special categories of personal data.
  • The lawful basis for web-scraping (also being considered by the IPO in terms of copyright protection).
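To illustrate two of the items above in the abstract, the following minimal Python sketch pseudonymises a direct identifier with a keyed hash and keeps only the fields needed for the stated purpose. The field names, the salt handling and the choice of hash are assumptions made for this example; it is not a complete compliance solution.

    # Illustrative sketch only: pseudonymising an identifier and minimising the
    # fields retained for model development. Field names and salt handling are
    # assumptions for this example, not a complete compliance solution.
    import hashlib
    import hmac

    SECRET_SALT = b"replace-with-a-secret-kept-separately"  # held apart from the dataset

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier with a keyed hash."""
        return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

    def minimise(record: dict, needed_fields: set) -> dict:
        """Keep only the fields needed for the stated purpose; drop everything else."""
        return {k: v for k, v in record.items() if k in needed_fields}

    raw_record = {
        "name": "Jane Example",
        "email": "jane@example.com",
        "postcode_area": "SW1",
        "income_band": "30-40k",
        "favourite_colour": "blue",   # collected, but not needed for the model
    }

    training_record = minimise(raw_record, {"postcode_area", "income_band"})
    training_record["subject_key"] = pseudonymise(raw_record["email"])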

Can the use of AI really be governed?

Given the scale of the players involved in creating AI systems, and the challenges around competition and lack of explainability, there’s a very real risk of regulatory capture by Big Tech.

The incentives to achieve scale over rivals, or for start-ups to get rich quick, have favoured the early release of AI systems over concerns about the other challenges. AI should be regulated in the way that aviation, health and safety and medicines are. All stakeholders need to be included in the development, deployment and use of AI, not just the tech team. As a result, it would be unwise to allow the next generation of AI to launch into unregulated territory.

In particular, there are key liability issues to be solved, and a mechanism is needed for attributing and apportioning causation and liability upstream and downstream among developers, deployers and end-users.

To address concentration risk and barriers to entry, there needs to be easier portability and the ability to switch among cloud providers.

In the absence of regulation, participants (and victims) will look to contract and tort law (negligence, nuisance, and actions for breaches of any existing statutory duties). Better to have certainty from the outset than to wait for litigation to restore a sense of balance.

Regulatory measures

Outside the EU, the UK is a rule-taker when it comes to regulating issues that have any global scale. China, the EU and the US will all drive regulation, but geography and trade links mean the trade bloc on the UK’s doorstep is the most important.

Regulatory measures in the EU, US and China seek to draw red lines in areas affected by AI, at least forcing the industry to engage with legislators and regulators if the law is not to overly restrict the development and deployment of AI.

The UK government has so far avoided directly regulating AI. Some of the UK’s 90 regulatory bodies are using their current powers to address the risks of AI (such as the ICO’s focus on the implications for privacy). But the UK’s Intellectual Property Office has shelved a long-awaited code setting out rules on the training of AI models using copyrighted material, dealing a blow to the creative industry.

How is the EU regulating AI?

The Artificial Intelligence Act is expected to enter into force in 2024 with a two-year transition period (but only six months for prohibited AI practices and 12 months for general-purpose AI). The Act proposes a risk-based framework for AI systems, with AI systems presenting unacceptable levels of risk being prohibited. The Act identifies, defines and creates detailed obligations and responsibilities for several new actors involved in the placing on the market, putting into service and use of AI systems. Perhaps the most significant of these are the definitions of “providers” and “deployers” of AI systems.

The Act covers any AI output which is available within the EU, so would cover UK companies providing AI services in the EU.

The AI Act defines an AI system as:

“…a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The AI Act prohibits the ‘placing on the market’ of AI systems that:

  • use subliminal techniques;
  • exploit vulnerabilities of specific groups of people;
  • create a social score for a person that leads to certain types of detrimental or unfavourable treatment;
  • categorise a person based on classification of their biometric data; or
  • assess a person’s likelihood of committing a criminal offence based on an assessment of their personality traits.

It also prohibits the use of real-time, remote biometric identification systems in publicly accessible spaces by or on behalf of law enforcement authorities (except to preserve life). There are also compliance requirements for high-risk AI systems.

In addition, the draft AI Liability Directive and revised Product Liability Directive will clarify the rules on making claims for damage caused by an AI system and impose a rebuttable presumption of causality on an AI system, subject to certain conditions. The two directives are intended to operate together in a complementary manner. They will likely be formally approved in early 2024, and will apply to products placed on the market 24 months later.

The Digital Services Act (DSA) protects EU-based users of online communication, e-commerce, hosting and search services by exempting intermediary service providers (“ISPs”) from certain liability in return for performing certain duties. An ISP is covered if it is either based in the EU or has a substantial connection with the EU (i.e. a significant number of users as a proportion of the EU population, or targeting of its activities at one or more EU countries). Services such as Bing, Google Search, Facebook, Instagram, Snapchat, TikTok, YouTube, X/Twitter, AliExpress and LinkedIn already face investigations under the DSA, including into how they mitigate the risks of creating and spreading information using generative AI and the related risks to electoral processes.

The Digital Markets Act (DMA) builds on existing competition law by rooting out unfair practices of very large digital platform operators (“gatekeepers”) when providing services that other businesses use to reach their own customers online. Under the DMA, six companies have been designated as gatekeepers, since they effectively act as private rule-makers who are in a position to create ‘bottlenecks’ and ‘choke points’ that could potentially limit access, unfairly exploit personal and business data for their own purposes and/or impose unfair conditions on market participants.

The Machinery Products Regulation addresses the safety of machinery incorporating emerging technologies, taking into account interactions between components such as AI. In-scope machinery and products imported into the EU from third countries (such as the UK) will need to comply with the Regulation.

The Data Governance Act established mechanisms to enable the reuse of some public sector data that will be of benefit to the development of AI solutions.

From September 2025, the Data Act will require providers of products and related services to make the data generated by their products easily accessible to business and consumer users, who will then be able to provide the data to third parties or use it for their own purposes, including AI.

If you have questions about the use or regulation of AI, please contact Simon Deane-Johns.

This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.