Businesses (and their marketing agencies) are increasingly reliant on artificial intelligence (AI) technology for content creation; AI is capable of producing articles, social media posts, marketing materials, and more with remarkable speed and efficiency. Its ability to analyse vast amounts of data and generate coherent text has made AI an attractive option for those looking to streamline content production processes.

However, AI-generated content can sometimes include inaccuracies, misrepresentations, or offensive material that can lead to serious legal repercussions. Over-reliance on AI without any human review before publication carries significant legal risk for any organisation.

Defamation risks

Defamation involves the publication of false statements that cause, or have the potential to cause, serious harm to the reputation of an individual or organisation. In the context of AI-generated content, the risk of defamation arises principally when the AI produces statements that are untrue or misleading about a person or company. Since AI lacks the nuanced understanding and ethical considerations of human writers, it may inadvertently generate defamatory content.

By way of example, as part of a media campaign, an AI system might produce a news article that falsely accuses a competitor of criminal activity or unethical behaviour. If published without human review, such content could expose the publisher to liability; defamation claims are not only very costly in terms of legal fees and damages, but also extremely embarrassing for the defendant.

Passing-off and misrepresentation

Passing-off occurs when a business misleads consumers into thinking its products or services are associated with another brand or person, without permission. In the context of AI-generated content, passing-off can occur if the AI creates content that falsely implies endorsement or affiliation with a well-known individual or brand.

By way of example, if an AI-generated advertisement implies that a celebrity endorses a product without their consent, this could lead to an expensive and reputation-harming passing-off claim.

There have been examples of celebrities and sporting personalities taking action in the past (albeit the most well-known pre-date AI technology). In 2013, the singer Rihanna successfully sued fashion retailer Topshop over the sale of a T-shirt bearing her image, because the design was likely to lead people to buy it in the false belief that she had approved or authorised it.

Similarly, if the AI uses a company’s trademark or likeness in a way that confuses consumers about the source or sponsorship of the product, the affected company could bring a claim.

Liability for deepfakes

A particularly troubling aspect of AI-generated content is the creation of deepfakes. Deepfakes are hyper-realistic digital forgeries created using AI technology, where the likeness of an individual is superimposed onto another’s body or voice, making it appear as though they are saying or doing things they never did. These can be used maliciously to damage reputations, spread misinformation, or commit fraud.

There is increasing concern that deepfakes will play a significant role in political campaigns for this year's US election and a potential UK general election, as they can influence voter perceptions and outcomes. The misuse of AI technology can have dire consequences, including the erosion of public trust and legal actions against the creators and disseminators of such content.

The potential liability for businesses involved in creating or distributing deepfakes, whether in a political or purely commercial context, is significant, exposing them to the risk of defamation, privacy and passing-off claims.

Advertising rules

The Advertising Standards Authority (ASA) has confirmed that marketers remain responsible for their advertising: “… even if marketing campaigns are entirely generated or distributed using automated methods, they still have the primary responsibility to ensure that their ads are compliant.”

The ASA has stringent guidelines to ensure that advertisements are truthful, not misleading, and do not cause harm or serious widespread offence. Breaching these rules can lead to severe penalties, including fines, mandatory corrections, and damage to a business’s reputation.

Examples of false and misleading information include:

  • incorrect efficacy claims relating to a product;
  • images that are photoshopped or subjected to social media filters; and
  • socially irresponsible adverts based on biases that are amplified by the AI system (such as generative AI tools tending to portray men, individuals with lighter skin tones, or those conforming to idealised body standards in higher-paying occupations).

The importance of human oversight

To mitigate these risks, businesses can implement robust human verification processes before publishing AI-generated content:

  1. Legal consultations: regularly consult with legal professionals to ensure that the content complies with relevant laws and regulations.
  2. Comprehensive review procedures: establish a team of human reviewers to scrutinise AI-generated content before publication. This team should be well-versed in the legal and ethical implications of content creation.
  3. Reliable AI systems: invest in high-quality AI systems that have built-in safeguards to reduce the likelihood of generating harmful or misleading content.
  4. Regular training and updates: continuously train AI systems with accurate and up-to-date data to enhance their understanding and reduce errors.
  5. Transparency and accountability: maintain transparency about the use of AI in content creation and establish accountability mechanisms for any errors or issues that arise.
  6. Adherence to ASA and other regulatory guidelines: ensure that all AI-generated advertising content adheres to the latest ASA rules, avoiding misleading claims and unauthorised endorsements.

While AI technology offers significant advantages for content creation, using it without human verification carries serious legal risks. Defamation, passing-off and privacy claims, together with regulatory compliance issues, can lead to costly and embarrassing legal proceedings. However, by implementing thorough review processes and adhering to best practices, businesses can harness the benefits of AI while mitigating potential legal pitfalls. Ensuring that AI-generated content is accurate, respectful, and legally compliant is not only a legal necessity but also a crucial aspect of maintaining trust and credibility with consumers.

If you have questions about the use of AI and deepfakes, please contact Will Charlesworth.


This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.