In the early hours of Saturday 9 December 2023, the European Parliament (the Parliament) and the Council of the EU (the Council) reached an agreement on the outstanding points of the EU’s Regulation on artificial intelligence (AI Act).  Talks had previously stalled over how to regulate AI trained on large amounts of data and able to perform a wide range of functions, referred to as ‘foundation models’ or ‘general purpose AI’.  The lack of progress had led to questions over whether the AI Act would be finalised before the end of Spain’s presidency of the Council at the end of the year, and whether agreement would be possible before the next Parliament elections in June 2024.  By extending this week’s talks by a further day, however, the Parliament and the Council were able to resolve those outstanding points.

While some details are still to be finalised, we now have a view of what the final agreement will look like from the Commission, the Council, and the Parliament.

Definitions and scope

The final text will define AI with reference to the OECD definition. The OECD defines an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.  According to the Council, this approach looks to distinguish AI from “simpler software systems”.

The AI Act will apply to providers placing an AI system on the market or putting it into service in the EU, regardless of the provider’s location.  It will also apply to users of AI systems located in the EU, and to both providers and users located in a third country where the output produced by the system is used in the EU.

The AI Act does not apply to areas outside the scope of EU law and should not affect member states’ powers in relation to national security.  It does not apply to systems used for purely military or defence purposes.  Similarly, it does not apply to AI systems used solely for research and innovation, or to individuals using AI outside the professional sphere.

The Council has also confirmed that the AI Act will clarify the roles and responsibilities of providers and users of AI systems, and clarify the relationship between responsibilities under the AI Act and other legislation, such as sectoral legislation or data protection legislation.

A tiered approach to regulating AI

The final agreement retains a tiered approach to regulating AI, striking what Spain’s secretary of state for digitisation and artificial intelligence described as “an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens”.

Minimal risk applications

The Commission has emphasised that minimal risk applications will benefit from “a free pass and absence of obligations”.  Minimal risk applications will include AI-enabled recommender systems and spam filters.

High-risk AI systems

Some AI systems will be categorised as high risk due to their “significant potential to harm health, safety, fundamental rights, environment, democracy and the rule of law”.  A mandatory fundamental rights impact assessment will be applicable to these systems, among other requirements.  

General purpose AI / ‘foundation models’

Specific rules have been agreed for AI systems that can be used for a range of different purposes, referred to as general purpose AI systems.  These rules cover the models on which these systems are based, referred to as general purpose AI models or ‘foundation models’.

These models will be subject to transparency requirements, including drawing up technical documentation, putting in place policies to comply with EU copyright law, and disseminating detailed summaries about the content used for training. 

For general purpose AI models that meet a set of criteria indicating that they pose a ‘systemic’ risk, there will also be additional obligations.  The obligations will include conducting model evaluations, assessing and mitigating systemic risks, conducting adversarial testing, reporting to the Commission on serious incidents, ensuring cybersecurity, and reporting on their energy efficiency. Until harmonised EU standards are published, these models may rely on codes of practice drawn up with the EU AI Office (see below) to comply with the AI Act.

Banned applications

Some applications of AI were considered to present an unacceptable threat to fundamental rights and freedoms and have been prohibited.  The banned applications are:

  • biometric categorisation systems that use sensitive characteristics (e.g., political, religious, or philosophical beliefs, sexual orientation, or race);
  • certain uses of predictive policing;
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour “to circumvent their free will”;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation); and
  • real-time remote (‘one-to-many’) biometric identification in publicly accessible spaces, except as set out below.
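The tiered structure described above lends itself to a simple lookup.  The sketch below is our own illustrative simplification (the tier names and obligation summaries are shorthand, not the Act’s legal text), showing how the transparency obligations for general purpose AI models stack with the additional systemic-risk obligations:

```python
# Illustrative sketch of the AI Act's tiered approach.  Tier names and
# obligation summaries are our simplifications, not legal text.
TIER_OBLIGATIONS = {
    "prohibited": ["may not be placed on the EU market"],
    "high_risk": [
        "fundamental rights impact assessment",
        "other mandatory requirements",
    ],
    "general_purpose": [
        "technical documentation",
        "EU copyright compliance policy",
        "summary of training content",
    ],
    "general_purpose_systemic": [
        "model evaluations",
        "systemic risk assessment and mitigation",
        "adversarial testing",
        "serious incident reporting",
        "cybersecurity",
        "energy efficiency reporting",
    ],
    "minimal_risk": [],  # "a free pass and absence of obligations"
}

def obligations_for(tier: str) -> list[str]:
    """Return the summarised obligations for a risk tier.  Systemic-risk
    general purpose models bear the base transparency obligations plus
    the additional systemic-risk obligations."""
    if tier == "general_purpose_systemic":
        return TIER_OBLIGATIONS["general_purpose"] + TIER_OBLIGATIONS[tier]
    return TIER_OBLIGATIONS[tier]
```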

Law enforcement exceptions

The AI Act allows the use of real-time remote biometric identification systems (sometimes referred to as live facial recognition) in publicly accessible spaces where strictly necessary for law enforcement purposes.  This is only permissible in certain exceptional circumstances, however, such as identifying the victims of crimes like abduction, trafficking, or sexual exploitation, preventing a specific terrorist threat, or identifying a person suspected of committing specific crimes such as terrorism, murder, rape, or armed robbery.  Law enforcement authorities must also comply with additional safeguards when using real-time remote biometric identification systems in these circumstances.

Sandboxes, ‘real-world testing’, and derogations for SMEs

To help facilitate innovation, the AI Act will provide for regulatory sandboxes and ‘real-world testing’, established by national authorities to develop and train innovative AI before placement on the market.  The regulatory sandboxes will provide a controlled environment for the development, testing, and validation of innovative systems, while the real-world testing provisions will allow AI systems to be tested in real-world conditions, subject to specific conditions and safeguards.  The agreement also includes a list of actions to support smaller companies and provides for some limited and clearly specified derogations.

Penalties, enforcement, and governance

The maximum fines go up to the higher of €35 million or 7% of global turnover for breaches relating to banned AI applications, with fines of €15 million or 3% for breaches of the Act’s other obligations and €7.5 million or 1.5% for the supply of incorrect information.  More proportionate caps will be imposed on administrative fines for SMEs and start-ups.
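As a rough illustration of how these ‘higher of’ caps operate in practice (the function name and the worked example are ours; the fixed amounts and turnover percentages are taken from the agreement as described above):

```python
def fine_cap(global_turnover_eur: float, breach: str) -> float:
    """Maximum fine under the agreed caps: the higher of a fixed amount
    in euros or a percentage of global annual turnover."""
    caps = {
        "banned_application": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    fixed, pct = caps[breach]
    return max(fixed, pct * global_turnover_eur)

# For a company with EUR 1bn global turnover, a banned-application breach
# is capped at the higher of EUR 35m or 7% of EUR 1bn, i.e. EUR 70m.
print(fine_cap(1_000_000_000, "banned_application"))  # 70000000.0
```

Note that the percentage cap only bites for larger companies: at EUR 100m turnover, 3% is EUR 3m, so the EUR 15m fixed cap applies for other breaches.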

Member states’ market surveillance authorities will supervise the implementation of the rules at national level and an individual may make a complaint relating to non-compliance to the relevant market surveillance authority.  

A new AI Office will also be created within the Commission to ensure coordination at European level.  The AI Office will also oversee general purpose AI models, with input from a panel of independent scientific experts, and contribute to creating standards and testing practices. 

An AI Board will also be formed from member states’ representatives, which will act as a coordination platform and advisory body to the Commission, and will contribute to aspects of implementation such as the design of codes of practice for general purpose AI models.  An advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.

When will it be finalised and when will it apply from?

Once details are finalised, the AI Act’s text will need to be formally adopted by both the Parliament and the Council.  It will then be published in the Official Journal, and will ‘enter into force’ 20 days after that date.  However, most provisions will only apply two years from entry into force, except for the prohibitions, which will apply after six months, and the rules on general purpose AI, which will apply after 12 months.
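The timeline above can be sketched with simple date arithmetic.  The publication date used in the docstring example is purely a placeholder assumption; the actual dates will depend on formal adoption and Official Journal publication:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Simplified month arithmetic; assumes the day-of-month exists in
    # the target month (safe here, since entry-into-force dates derived
    # below land early in the month for our placeholder inputs).
    m = d.month - 1 + months
    return d.replace(year=d.year + m // 12, month=m % 12 + 1)

def application_dates(publication: date) -> dict:
    """Entry into force is 20 days after Official Journal publication;
    prohibitions apply after 6 months, general purpose AI rules after
    12 months, and most other provisions after 24 months."""
    entry = publication + timedelta(days=20)
    return {
        "entry_into_force": entry,
        "prohibitions": add_months(entry, 6),
        "general_purpose_ai": add_months(entry, 12),
        "most_provisions": add_months(entry, 24),
    }

# With a hypothetical publication date of 12 July 2024, entry into force
# would be 1 August 2024, prohibitions would apply from 1 February 2025,
# and most provisions from 1 August 2026.
print(application_dates(date(2024, 7, 12)))
```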

In the meantime, the Commission is launching an AI Pact to encourage voluntary implementation of key obligations ahead of these deadlines.

Our take

Political will to get the AI Act over the line appears very strong.  Some details must still be finalised before the AI Act can be published in the Official Journal, but at that point the real countdown to the provisions applying will begin.  We already have sufficient detail to determine which applications will come within the scope of the AI Act and the high-level obligations that will apply to them.  Where applications may come within scope, any organisation deploying AI in the EU should ensure its AI governance process requires an assessment against the lists of banned and high-risk applications and applications subject to transparency requirements, and should start to work through what will be required to document technical details, user instructions, risk and fundamental rights impact assessments, and mitigation steps taken.  This assessment will need to be revisited as further detail follows through guidelines and standards that augment these concepts, but an initial assessment to sensitise the organisation to the requirements is highly recommended.  Even organisations not obviously caught should watch this space carefully: understanding the regime will pay dividends, as the prohibitions, risk classifications, and standards that emanate from the AI Act are highly likely to influence EU regulators of lower-risk AI systems and regulators around the world.