The collection and use of children’s personal data have increasingly been a focus for legislators and regulators in the UK and elsewhere. There has been significant concern about the use of social media by teens and by children under the age of 13.
Most recently, the rise of ‘agentic AI’, including chatbots, has raised concerns for children’s safety. It was reported this month that parents in the United States who manage Google Family Link accounts, which allow them to set up and oversee email and YouTube accounts for their children, received notification that Google’s Gemini AI chatbot would soon be made available for use by children, including those under the age of 13.
In this article, our data protection lawyers James Tumbridge and Robert Peake consider a number of recent developments in the UK and elsewhere as examples of the approaches to protecting children online, and the challenges being faced by regulators and businesses.
Rules for the use of children’s personal data
In the UK, a child can consent to the processing of their personal data at the age of 13, the lowest age permissible under the UK GDPR; many EU Member States have maintained the default age of 16 in the EU GDPR.
The UK’s Data Use and Access Bill, which continues to be debated, has seen some proposed amendments fall away, including provisions that would have prevented children under the age of 18 from consenting to the use of their sensitive personal data for automated decision-making, and that would have raised the age at which children can consent to the use of their personal data by social networking services from 13 to 16.
Children’s broader online safety
The UK Online Safety Act (the Act) was enacted to provide additional protections in the digital space, in particular for children. It is of broader application than the rules focussing on personal data use; under the Act, a child is anyone under the age of 18. For an overview of the Act, see our previous note here.
The Act is overseen by the regulator Ofcom (not by the ICO), which has issued codes of practice setting out the core obligations for regulated services.
Ofcom has recently published two draft Protection of Children statutory codes focussed on (i) user-to-user services, and (ii) search services, introducing new obligations including:
- Preventing children from encountering pornography, self-harm, suicide, and eating disorder content (‘Primary Priority Content’).
- Protecting children in vulnerable age groups from content which promotes dangerous stunts, content which is abusive or incites hatred, bullying content, and other content presenting a material risk of significant harm.
- Carrying out a risk assessment to identify aspects of the particular service which present risks of harm for children, and adopting appropriate mitigation strategies.
- Documenting the risks and explaining in their terms of service how they are mitigated.
- Allowing individuals to report with ease any content that is harmful to children, and operating a complaints procedure.
Preventing children from accessing certain types of content necessarily requires service providers to check the age of their users. The approach to age verification has been a subject of much debate, particularly as it impacts all online users, not only children; adult users will also be faced with proving their age to access services which they are not lawfully restricted from accessing. The regulator has been clear that merely restricting use of a service by way of terms of service will be insufficient.
Age verification is also proving challenging elsewhere, including in Canada, where the Office of the Privacy Commissioner of Canada has announced that it intends to issue guidance on age verification for public consultation.
US measures to protect children online – a mixed bag
In the United States, there have been various initiatives seeking to protect children online, and in particular to limit social media access and use by younger children. In Virginia, a law has recently been passed which requires social media platforms to seek to identify whether a user is a minor under the age of 16, and to limit social media use for minors to one hour per day. The limitation on use can be overridden with parental consent.
However, the future of the Virginia law was cast in doubt by US court rulings on similar measures introduced elsewhere. For example, a US federal court permanently barred the attorney general of Ohio from enforcing a law adopted there which would have required social media platforms to verify that a user was at least 16 years old in order to create an account, and to obtain parental consent before an account could be created by a child under 16.
The court held that the law violated the free speech guarantees of the First Amendment by restricting children’s ability to engage in self-expression, and because it applied to some websites but not others, effectively curtailing free speech only for those within the scope of the law. An earlier, similar law in Arkansas was also prevented from taking effect by the US federal courts.
Notwithstanding those challenges at state level, the US federal law, the Children’s Online Privacy Protection Act of 1998 (COPPA), sets rules around the collection of personal information of children under the age of 13, as well as marketing to those under 13. The Federal Trade Commission (‘FTC’) oversees COPPA, and has recently issued a Rule that will impose further obligations on businesses within scope, including: enhanced notifications to parents; prior ‘verifiable’ parental consent for the collection, use, or disclosure of personal information for children under 13; and increased parental ability to review and seek deletion of their children’s data.
If you have questions or concerns about protecting the data of children online, please contact James Tumbridge and Robert Peake.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.