On 11 May 2023, members of the European Parliament passed their compromise text of the draft Artificial Intelligence Act (the AI Act) at the committee stage, taking this law a step closer to being finalised.

The compromise text (the Parliament Draft), which amends the Commission’s original proposal, includes a large number of amendments, some of which will most likely not make the final cut following the trilogue negotiations. Nevertheless, there are a number of key points worth noting.

Definition of AI: The Parliament Draft changes the definition of Artificial Intelligence so that it is now defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”. This aligns more closely with the European Council’s definition of AI than the Commission’s original definition, which was criticised as being overly broad.

Prohibited AI: The Parliament Draft refines and expands the list of prohibited AI to include AI-based biometric categorisation systems that categorise people according to sensitive or protected attributes, certain uses of AI-based emotion recognition systems, and AI systems that build or expand databases of facial images by scraping images from the internet.

High Risk AI: The Parliament Draft expands the list of AI use cases that may be categorised as “high risk” (e.g. to include AI systems intended to be used by social media platforms that are “very large online platforms” under the DSA in connection with their recommender systems). It also includes an additional qualifier: for a listed AI system to be classified as “high-risk” for the purposes of the AI Act, the AI system must pose a significant risk of harm to people’s health, safety or fundamental rights. The Parliament Draft requires the Commission to prepare guidance, six months before the law comes into effect, on when such harm would arise. Providers of AI systems of a type listed as likely to be high risk who do not consider that this harm threshold has been met will be expected to notify the national supervisory authority accordingly.

The Parliament Draft tightens the obligations on providers of high risk AI systems and, importantly, introduces a specific obligation on deployers to conduct a “fundamental rights impact assessment” before deploying high risk AI.

General purpose AI: The Parliament Draft retains the concept of “General Purpose AI”, which it defines as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”. However, triggered by the ever-increasing use of tools such as ChatGPT and Bard, the Parliament Draft also introduces additional requirements for “foundation models”, which it defines as an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”.

Specifically, providers of foundation models are subject to obligations to undertake risk assessments and mitigate reasonably foreseeable risks; to establish appropriate data governance measures; to comply with obligations relating to the design of the foundation model (including from an environmental impact perspective); and to register the foundation model in an EU database. There are also additional transparency requirements where foundation models are designed to create content, including a requirement to disclose that the content was generated by AI and to provide a summary of the use of training data that is protected under copyright law.

General principles applicable to providers and deployers: The Parliament Draft introduces general principles that operators of all AI systems should use their best efforts to comply with, which include technical robustness, transparency, non-discrimination and fairness, and privacy and data governance. The Council Draft left most compliance obligations with providers, so this could lead to a significant increase in responsibility for deployers (for example, in relation to the use of AI-enabled recruitment applications).

Penalties: The Parliament Draft slightly changes the financial penalties for non-compliance, increasing the highest fine to EUR 40,000,000 or 7% of worldwide annual turnover, and adding to and adjusting the scope of the other fining thresholds.

Next steps:

The Parliament Draft will now face the vote of the European Parliament sitting in plenary, anticipated in mid-June, before the three-way negotiations of the AI Act between the Parliament, the EU member states, and the Commission (known as “trilogues”) start. These are likely to take place under the Spanish presidency of the Council, which commences in July. Once the final text is published in the Official Journal of the European Union, there will be a transition period (currently expected to be between 24 and 36 months).

Our take:

The AI Act will have significant ramifications for the development and use of AI systems in the European market and overseas, given its extraterritorial effect. However, even with a fair wind, the law is unlikely to apply until the second half of 2025 at the earliest (the Application Date). It does not have retrospective effect, except for systems where there is a significant change to their design or intended purpose after the Application Date. Therefore, whilst companies operating in this space must pay close attention to the AI Act and what they need to do to comply with it, particularly with the recent explosion of interest in and use of foundation models, the more immediate focus should be on ensuring that their development and deployment of AI systems today comply with the relevant laws that are already in effect, including data privacy, product liability, human rights and discrimination laws (among others).

The position of the Council (representing the governments of the EU Member States) for these negotiations was agreed on 6 December 2022 (the Council Draft).