The authors acknowledge the assistance of Salma Khatab, paralegal, in researching and preparing some aspects of this blog.

The UK Department for Science, Innovation and Technology (DSIT) has published its response to the consultation on its white paper, ‘A pro-innovation approach to AI regulation’ (the Response). The Response outlines key investment initiatives and regulatory steps. It confirms that, for the present, the UK will follow its proposed approach of setting cross-sectoral principles to be enforced by existing regulators rather than passing new legislation to regulate AI. Alongside this, the government has published guidance setting out considerations that regulators may wish to take into account when developing tools and guidance to implement the principles.

Proposed regulatory framework

The government confirms its plan to introduce five cross-sectoral principles for existing regulators to interpret and apply within their remits, guiding responsible AI innovation. The principles are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

Development of a central function, to support effective risk monitoring, regulatory coordination, and knowledge exchange, has already begun.

No new statutory obligations – for now

The UK’s approach contrasts with the EU’s approach of introducing statutory obligations for supply chain participants for certain AI use cases.

Regulators will not initially be subject to a new statutory duty to have due regard to these principles, though the government anticipates introducing such a duty after reviewing an initial period of non-statutory implementation.

The Response also considers “highly capable general-purpose AI”, defined as models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. Some of the risks of this type of technology were outlined in the papers on the state of frontier AI prepared for the AI Safety Summit in November 2023. The government has now confirmed that it believes binding measures are likely to be required in the future, but it will not “rush to regulate”, because “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation”.

The Response acknowledges that it is common for the law to allocate liability to the last actor in the chain, generally those using the technology. This means that the actors most able to address risks and harms, those creating the technology, are not necessarily incentivised to develop responsible AI or held accountable when they do not. This could undermine the development of responsible AI and dampen innovation, a risk the European Commission looked to tackle with the AI Act.

In the meantime, the AI Safety Institute, established at the AI Safety Summit, will continue to evaluate the risks around these systems.  It recently published its approach to evaluations with two case studies presented at the summit.

Guidance for regulators on implementing the principles

The government will take a phased approach to issuing guidance for regulators. In the first phase, published to coincide with the Response, the government “supports regulators by enabling them to properly consider the principles and to start considering developing tools and guidance…if they have not done so already”. In phase two, to be released by summer 2024, it will expand the guidance and provide further detail, following feedback from regulators and other stakeholders. Phase three will involve collaborative working with regulators to identify areas for potential joint tools and guidance across regulatory remits.

The guidance provides suggestions on what regulators “could” do, rather than what they should or must do. Regulators may take a technology-agnostic approach to regulation, and may decide the guidance is not relevant to them, provided they are satisfied that their regulatory framework adequately covers the issues around AI adoption. It also suggests that regulators develop tools and guidance that promote knowledge and understanding as relevant in the context of their remit, rather than setting out step-by-step processes. Regulators are encouraged to collaborate and share knowledge through existing mechanisms, like the Digital Regulation Cooperation Forum, as well as new ones.

The guidance suggests that regulators may wish to cite horizontal standards produced by organisations like BSI, ISO, and IEC, and references specific standards relevant to each of the principles.

In relation to accountability, the guidance acknowledges the issues around where liability falls in the supply chain. It suggests that regulators consider whether their regulatory powers or remits allow them to place legal responsibility on the actors in the supply chain that are best placed to mitigate the risks. Where legal responsibility cannot be assigned to an actor in the supply chain that operates within a regulatory remit, the guidance suggests regulators encourage the AI actors within their remit to ensure good governance in who they outsource to. In practice, this is likely to translate into regulators considering whether they could provide more guidance on the due diligence and contractual safeguards required to use third-party AI suppliers.

AI regulation roadmap

The government has written to regulators, including the Office of Communications (Ofcom), the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA), asking them to publish an update outlining their strategic approach to AI by 30 April. Regulators are tasked with summarising the steps they are taking in line with expectations set out in the white paper, assessing sector-specific AI risks, identifying skills gaps and actions to address them, and outlining anticipated activities over the coming 12 months.

DSIT has also committed to a roadmap for 2024 to:

  • continue to develop the UK’s domestic policy position on AI regulation;
  • progress action to promote AI opportunities and tackle AI risks;
  • build out the central function and support regulators;
  • encourage effective AI adoption and provide support for industry, innovators, and employees; and
  • support international collaboration on AI governance.

Investment in skills and technology

The government has pledged over £100 million to develop regulators’ technical capabilities and nurture AI innovation. This includes £10 million to jump-start regulators’ AI capabilities and a £9 million partnership with the US as part of the International Science Partnerships Fund.

Promoting AI governance and AI assurance

The Response emphasises promoting transparency and accountability in AI deployment across sectors. The Algorithmic Transparency Recording Standard (ATRS) established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making. Following a successful pilot, use of the ATRS will become a requirement for all government departments.

Since the publication of the Response, the government has also published an ‘Introduction to AI assurance’. This includes an overview of AI assurance and the tools that can be used (e.g. audits, performance testing), notes on AI assurance in practice, and key actions for organisations.

From CDEI to RTA

The government also announced the rebranding of the Centre for Data Ethics and Innovation (CDEI) as the Responsible Technology Adoption Unit (RTA). The change is intended to reflect more accurately the unit’s role within DSIT: developing tools and techniques that enable responsible adoption of AI in the private and public sectors.

Further engagement on intellectual property, but no code of practice

DSIT and the Department for Culture, Media and Sport will “lead a period of engagement with the AI and rights holder sectors, seeking to ensure the workability and effectiveness of an approach that allows the AI and creative sectors to grow together in partnership”. The government looks to resolve the tension between, on the one hand, AI developers’ need to access large and diverse datasets to train their models and, on the other hand, creative industries’ and rights holders’ concerns around the use of copyright-protected content. The Intellectual Property Office previously convened a working group made up of rights holders and AI developers, but the group was unable to agree an effective voluntary code.

Our take

A key part of the government’s rationale for this approach was a desire to utilise and build on the existing regulatory framework. The current framework imposes both horizontal and vertical binding requirements that govern the use of AI. Horizontal requirements include those set out under data protection law, while vertical requirements include the FCA’s Consumer Duty. These existing duties already require organisations using AI to address security, transparency and explainability, fairness, accountability, and rights of redress.

The Response confirms that, for the present, organisations will have no new UK statutory obligations to comply with when developing or using AI. However, going forward, regulators will have regard to the principles in future enforcement action. The Response presents an opportunity for organisations to develop an AI governance programme that addresses existing horizontal and, where applicable, vertical requirements alongside the principles.

Many UK organisations will also find themselves within the scope of the binding requirements set out in the EU AI Act. A robust AI governance framework will be vital to ensure those requirements are identified and addressed alongside existing UK legal obligations and the AI principles.