The EU AI Act’s AI literacy obligation has applied since 2 February 2025. It applies to providers and deployers of any AI systems, wherever there is some connection to the EU.

The AI Act itself gives little away on what compliance looks like, though. Fortunately, the Commission’s AI Office has recently provided guidance in the form of Questions & Answers setting out its expectations on AI literacy.

The obligation

Providers and deployers of AI systems must “take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf” (Article 4).

Recital 20 sums up the requirement as equipping the relevant people with “the necessary notions” to make informed decisions about AI systems.

The definition also refers to making an informed deployment of AI systems, as well as gaining awareness of the opportunities and risks of AI and the possible harm it can cause.

Who needs to be AI literate?

Providers, deployers and affected persons, as well as staff and other persons dealing with the operation and use of AI systems on their behalf.

The Commission confirms that this covers anyone under the provider’s or deployer’s operational remit, which could include contractors, service providers, or clients.

What is a “sufficient” level of AI literacy?

The Commission will not be imposing strict (or specific) requirements, as this is context-specific.

Organisations need to tailor their approach. For example, organisations using high-risk AI systems might need “additional measures” to ensure that employees understand those risks (and, in any event, will need to comply with their Article 26 obligation to ensure that staff dealing with AI systems are sufficiently trained to handle them and to ensure human oversight).

Where employees only use generative AI, AI literacy training is still needed on relevant risks such as hallucination.

The Commission does not plan to provide sector-specific guidance, although the context in which the AI system is provided or deployed is relevant.

For those who already have deep technical knowledge, AI literacy training may still be relevant: the organisation should consider whether they understand the risks and how to avoid or mitigate them, as well as other relevant areas such as the legal and ethical aspects of AI.

The Commission points to its living repository on AI literacy as a potential source of inspiration.

Is there a “human-in-the-loop” exemption?

No. In fact, AI literacy is all the more important for humans in the loop: to provide genuine oversight, they need to understand the AI systems they are overseeing.

What are the consequences of not doing it?

Enforcement will be by market surveillance authorities and can begin from 2 August 2026 (when the provisions on their enforcement powers come into force).

The Commission includes a question on whether, once enforcement begins, penalties could be imposed for non-compliance dating back to 2 February 2025, but does not provide an answer, simply stating that it will cooperate with the AI Board and all relevant authorities to ensure coherent application of the rules.

The detail on what enforcement will look like is also yet to come. The AI Act does not provide for any specific fines for non-compliance with the AI literacy obligation. In its AI Pact webinar on 20 February 2025, the Commission flagged that although Article 99 AI Act sets maximum penalties in other areas, it does not prevent member states from including specific penalties for non-compliance with the AI literacy obligation in their national laws. The Commission also flagged that AI literacy is likely to be taken into account following a breach of another obligation under the AI Act.

The Commission also mentions the possibility of private enforcement, with individuals suing for damages, but acknowledges that the AI Act does not create a right to compensation.

Our take

The Commission does not give much away on what AI literacy programmes should look like, but ultimately, as it highlights, what is “sufficient” will be specific to each organisation.

To shape an AI literacy programme, it will first be necessary to work through:

  • Who are the different stakeholders involved in using AI? This needs to cover everyone – those involved in AI governance, developers, anyone involved in using AI, service providers, clients, and affected persons.
  • What does each group already know and what does each group need to know? For example, AI governance committee members may need a deeper understanding of how AI works, while data scientists may need to focus on legal and ethical issues. For employees making occasional use of generative AI, a shorter session on the risks and how the organisation manages them could be appropriate.
  • What medium would be most appropriate? For example, a workshop format might work well for AI governance committee members or data scientists, while an e-learning module could be sufficient for employees making occasional use of generative AI.
  • When will the training be delivered?  As mentioned above, the obligation already applies.
  • How will we track attendance and ensure that completion is sufficiently high?

The Commission’s guidance deals with the specific AI literacy obligation under the AI Act. But AI literacy matters for all organisations using AI, regardless of whether the AI Act applies: it is essential to building a strong AI governance programme equipped to manage the range of legal and organisational risks that come with AI use.