LinkedIn reverses course on plan to use UK user data for AI training
On September 18th 2024, some LinkedIn users in the UK noticed a new setting in their account interface which allowed them to ‘opt out’ of having their data used for training LinkedIn’s generative AI tool. LinkedIn had not given notice to UK users of the new data processing, nor updated its terms of service or privacy policy to reflect the changes.
The surprise change led a number of high-profile LinkedIn users to alert their followers to the new approach and to explain how to enable the ‘opt out’ within the platform’s settings.
Within 48 hours of the new setting being noted and publicised, LinkedIn’s head of privacy for the EMEA region posted an update stating that the use of user data for training generative AI was not at present being rolled out in the EU or Switzerland, and that the UK was being added to the list of regions excluded from the new data use. By September 20th, LinkedIn had removed the ‘opt out’ setting for UK users.
The use of data for training AI is an increasingly common consideration for both consumers and businesses when engaging with service providers. For businesses in particular, the use of data supplied to service providers for AI training requires careful consideration, as it carries risks for the lawfulness of personal data processing and for the protection of confidential business information. The recent LinkedIn episode is a useful reminder of the importance of transparency about the purposes for which client data will be used, and of the risks of seeking to change those purposes with little or no notice.
Advocate General opines on the scope of disclosure required for automated decisions
An Advocate General (AG) of the Court of Justice of the EU (CJEU) has issued an Opinion on the obligation of data controllers to provide a data subject with an explanation of the logic underpinning an automated decision, within the meaning of Art. 22 of the GDPR, affecting them. The facts of the matter are straightforward: the data subject applied to a mobile phone provider for a contract which would have attracted a monthly payment of €10; as part of the application process, a credit check was requested from Dun & Bradstreet (D & B), and on the basis of the result of that check the provider refused to enter into a contract with the individual, as she was deemed not to be creditworthy.
The individual exercised her right under Art. 15(1)(h) of the GDPR to obtain information about the automated decision, but was dissatisfied with the response. She then complained to the Austrian data protection authority, which concluded that D & B had failed to supply the individual with meaningful information about the logic involved in the automated decision-making which resulted in the refusal to enter into the mobile phone contract.
On appeal, the Austrian court agreed. Expert evidence in that hearing was that D & B ought to be required to disclose to the data subject technical information, including the algorithm relied upon, even though (i) such information was unlikely to be understood due to its complexity, and (ii) it was proprietary to D & B.
Thankfully, the AG disagreed with the expert, finding that the algorithms underlying automated decisions are unlikely to meet the requirement to provide meaningful information to a data subject. The AG concluded that ‘as a general rule, the controller should provide the data subject with general information, in particular on the factors taken into account in the decision-making process and their respective relevance at the aggregate level’ in order for the data subject to be able to challenge an automated decision. According to the AG, the data controller should provide ‘information that is concise, easily accessible and easy to understand, and formulated in clear and plain language.’
The AG, however, noted that prior case law had held that under some circumstances a party may be required to provide information which it considered to be a trade secret (e.g. an algorithm) to the court in order to allow a balancing test to be carried out, the result of which might be disclosure of that information to an opposing party. The AG then concluded that in cases where an automated decision within Art. 22 of the GDPR was in issue, a respondent might be required to submit proprietary information such as an algorithm to a court or a data protection authority in order for a balancing test to be carried out, following which some disclosure to the affected individual could be ordered.
The possibility that a data protection authority could have access to a proprietary algorithm will raise considerable concern for businesses. The CJEU will consider the referral in due course, and it is hoped that its decision will provide clarity and certainty that trade secrets will be afforded proper protection.
German court finds that telecoms provider had a legitimate interest in disclosing customer data to credit rating agency
A customer discovered that his telecommunications provider had supplied his personal data to a credit rating agency following the conclusion of his contract, without his consent and, he argued, without a legal basis. He then applied to the court seeking damages and an injunction preventing any further such processing of his data.
The court rejected the claim, finding that the transmission of the claimant’s personal data was justified under Art. 6(1)(f) of the GDPR, as it was necessary to protect the legitimate interests of the business and those of third parties, and was not outweighed by the claimant’s interests or fundamental rights.
In particular, the processing was necessary for the prevention and detection of fraud: the credit rating agency is able to detect cases where multiple telecoms contracts are concluded using the same personal details, which may have been used without authorisation or otherwise fraudulently. The court found that the personal data in question was so minimal that the claimant’s interests did not outweigh the legitimate interests in question.
On the question of damages, the court found that, in any event, there was no evidence that the claimant had suffered emotional distress or other negative feelings, noting in particular that his credit rating was in fact positive and that he was classed as ‘very low risk.’
UK regulators put big tech on notice of enforcement on the horizon
The head of the UK’s communications regulator, Ofcom, has warned that social media companies have not, to date, taken the safety of their users sufficiently seriously.
Speaking at a technology conference this week, Melanie Dawes remarked that the UK’s Online Safety Act – expected to enter into force in 2025 – will require companies offering social media and gaming services, amongst others, to act quickly in conducting risk assessments in order to mitigate the risk of illegal harms to their users. Those that do not have adequate measures in place could face regulatory action, including fines of up to 10% of global annual turnover (even higher than the maximum of 4% for breaching the UK GDPR).
The head of the UK’s Competition and Markets Authority (CMA), Sarah Cardell, also had warnings for tech companies. Citing the recent litigation in the US and the EU against Google alleging abuse of its dominant position, she posited that UK law allows a more agile regulatory response than that seen in those jurisdictions, where enforcement action can drag on for years.
For more information, contact James Tumbridge and Robert Peake.