On 25 November 2022, the UK Information Commissioner’s Office (ICO) and the Office of Communications (OFCOM) (together, the Regulators) released a joint statement setting out their shared views on the interactions between online safety and data protection (the Statement). The Statement, which is primarily intended for online service providers in scope of the Online Safety Bill, follows the recent announcement that the Online Safety Bill will return to Parliament and be provisionally debated on 5 December.

Given that the Online Safety Bill is still going through the parliamentary process, the Regulators have approached the Statement based on the current drafting of the Online Safety Bill and the broad principles it sets out; we can expect that the Statement may evolve to reflect any future drafting amendments to the Online Safety Bill.

In essence, the Statement picks out areas where online service providers might breach data protection rules in their efforts to comply with their Online Safety Bill obligations and outlines how the Regulators will work together to guide online service providers to comply with both. It is a good example of the operation of the new Digital Regulation Cooperation Forum, through which different UK regulators will coordinate policy and enforcement in the digital sphere.

Background

The Online Safety Bill sets out rules to protect UK users from online harm by assessing and responding to risks of harmful and illegal content. These will apply to online service providers and apps such as social media and messaging platforms, as well as other services that people use to share content online, and search engines. OFCOM will take on new regulatory duties under the Online Safety Bill and the ICO will monitor the data protection compliance of online service providers as they implement the Online Safety Bill.

The Regulators have committed themselves under the Statement to work together to provide a clear and coherent regulatory landscape for online services that is proportionate, transparent and outcome-focused. You can read more about the Online Safety Bill here.

The Statement is framed by the Regulators’ shared ambitions:

  1. They want users of online services to have confidence that their safety and privacy will be upheld and that the Regulators will take prompt and effective action when providers fail in their obligations.
  2. They want providers of online services of all sizes to comply with their obligations and to continue to innovate and grow, supported by regulatory clarity without undue burden.

Balancing online safety and data protection risks

Under the Online Safety Bill, online service providers will be required to mitigate the spread of harmful content and ensure appropriate measures are in place to limit access to age-restricted content. The Statement notes that, while the design of online services can enhance users’ privacy and online safety, some features that enhance privacy could have a negative impact on online safety, or vice versa. The Statement provides the following examples of risks that online service providers will need to consider:

  • Signing in to online services

An online service provider might offer users tools to verify their identity, giving assurance that users are who they say they are; alternatively, age assurance measures might be used to stop children below a certain age from accessing services or age-inappropriate content.

The collection, use and storage of this personal information is regulated by the ICO. The Statement advises that, where an online service needs to verify that its users are over a qualifying age, data collection should be limited to confirming that the user is above that age, rather than collecting and processing their exact age. Any further use of that personal data is permitted only where the new purpose is compatible with the verification purpose (i.e. generally not without separate further consent).

Further, in verifying age, online service providers are highly likely to process children’s personal data, so the Statement makes clear that, in complying with obligations under the Online Safety Bill, the ICO’s Children’s Code (a statutory code of practice under the Data Protection Act 2018) must still be complied with.

The Regulators plan to commission joint research into the accuracy of a range of age assurance technologies, the findings of which will underpin the guidance they will issue.

  • User participation in online services

Online service providers may present content by using content recommendation algorithms, or systems which aim to rank or return content that matches a user’s perceived interest. Where personal data is used to determine the content served to users, online service providers will need to comply with their obligations under data protection law. For example, services need to be transparent about the way in which they process personal data in order to make content recommendations.

As noted above, online services that process children’s data will have to conform to the standards of the ICO’s Children’s Code, including standard 5 on the detrimental use of data. Online service providers are advised to make sure their use of content recommendation algorithms does not involve processing children’s data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or government advice.

  • Analysing content and identifying harmful activity

The Online Safety Bill is, at its core, focused on limiting the access to and spread of harmful and illegal content online. The current draft of the Online Safety Bill is not prescriptive as to how this can be achieved: there is a range of different measures which can be used by online service providers to promote online safety and minimise the risks of users encountering content which is illegal or harmful and contrary to their terms of service. This includes automated content classification systems (i.e. systems that use algorithms designed to identify and classify whether content is illegal or harmful). Platforms might also choose to use techniques which seek to analyse patterns of potentially harmful user behaviour.

Online service providers are advised, where they use automated processes to link identifiable individual users to potentially illegal or harmful activity, to ensure that information linked to individuals or to their accounts is processed responsibly and fairly. Online service providers must be transparent about how they use personal data to make decisions, use no more personal data than is necessary to do so, and ensure that the processes that link users to illegal activity do so with an acceptable level of accuracy (i.e. without too many false positives).

Regulatory cooperation and next steps

Given the overlap between OFCOM’s responsibility for regulating online safety and the ICO’s responsibility for data protection, the Regulators have committed to ensuring consistency in each other’s approach to regulatory requirements and guidance, and to seeking solutions that enhance users’ safety while preserving their privacy. Where there are tensions between privacy and safety objectives, the Regulators will provide clarity on how compliance can be achieved under both regimes.

In terms of next steps, OFCOM will prepare codes of practice and guidance for online service providers on compliance with the online safety regime and will consult the ICO, among others, in their preparation. The ICO, meanwhile, will prepare guidance on data protection expectations for online services deploying safety technologies (e.g. age assurance, content moderation) and will consult with OFCOM, among others, in its preparation. We can anticipate that such guidance will be published on, or shortly before, enactment of the Online Safety Bill. The Bill is currently at report stage in the House of Commons (beginning 5 December 2022) and is due for its third reading before progressing to the House of Lords.

As regards enforcement, the Regulators have stated that they will work together as necessary to take action against online service providers that do not meet their obligations, sharing information and intelligence as appropriate and coordinating approaches to compliance and enforcement.