The EU’s AI Act imposes extensive obligations on the development and use of AI.  Most of the obligations in the AI Act look to regulate the impact of specific use cases on health, safety, or fundamental rights.  These obligations apply to ‘AI systems’, so a tool that is not an AI system will fall outside the scope of much of the AI Act.  A separate set of obligations applies to general-purpose AI models (not discussed here).

This is a definition that really matters – the prohibitions are already in effect and carry fines of up to 7% of annual worldwide turnover for non-compliance.  For high-risk AI systems and AI systems subject to transparency obligations, the obligations begin to apply from 2 August 2026 (with fines of up to 3% of annual worldwide turnover).

The ‘AI system’ definition was the subject of much debate and lobbying while the AI Act went through the legislative process.  The resulting definition at Article 3(1) AI Act leaves many unanswered questions.  Recital 12 provides additional commentary, but does not completely resolve those questions.

The European Commission’s draft guidelines on the definition of an artificial intelligence system (the guidelines) were therefore welcomed as a way to help organisations assess the extent to which their tools might be ‘AI systems’.

The guidelines – at a glance

The guidelines appear to lack an obvious underlying logic as to which examples fall inside and outside of scope.  The contradiction in recital 12 as to whether “rules defined solely by natural persons” are caught appears to have been replicated and magnified.

There are some specific examples of systems that may fall out of scope that are likely to be welcome – for example, the suggestion that linear or logistic regression methods could fall out of scope will be of particular interest to the financial services industry.  These methods are commonly used for underwriting (including for life and health insurance) and for consumer credit risk and scoring.  If included in the final guidelines, this exclusion could be hugely impactful: many systems that would otherwise have been in scope for the high-risk AI system obligations would find themselves outside the scope of the AI Act altogether (with the caution that the guidelines are non-binding and might not be followed by market surveillance authorities or courts).

The guidelines work through the elements of the AI system definition as set out at Article 3(1) and Recital 12 (the text of which is included at the end of this post for reference).  The key focus is on techniques that enable inference, with examples given of AI techniques that enable inference and systems that will fall out of scope because they only “infer in a narrow manner”. 

However, the reasoning for why some systems are considered to ‘infer’ and be in scope, while others ‘infer in a narrow manner’ and are out of scope, is not clear.  The guidelines appear to suggest that systems using “basic” rules will fall out of scope but systems using complex rules will be in scope, regardless of whether the rules are solely defined by humans:

  • Logic and knowledge-based approaches such as symbolic reasoning and expert systems may be in-scope (see paragraph 39).
  • However, systems that only “infer in a narrow manner” may be out of scope, where they use long-established statistical methods, even where machine learning assists in the application of those methods or complex algorithms are deployed.

In practice, this means that drawing conclusions over whether a particular tool does or does not ‘infer’ will be complex.

The remainder of this post summarises the content of the guidelines, with practical points for AI governance processes included in Our take.  We have set out the key text from Article 3(1) and recital 12 in an Appendix.

What’s in scope?

The guidelines break down the definition of ‘AI system’ into:

  • Machine-based systems.
  • ‘designed to operate with varying levels of autonomy’ – systems with full manual human involvement are excluded.  However, a system that requires manually provided inputs to generate outputs can still demonstrate the necessary independence of action – for example, an expert system that produces a recommendation based on human-provided input, where humans have delegated that part of the process to the system.
  • Adaptiveness (after deployment) – this refers to self-learning capabilities, but the presence of the word ‘may’ means that self-learning is not a requirement for a tool to meet the ‘AI system’ definition.
  • Designed to operate according to one or more objectives.
  • Inferencing how to generate outputs using AI techniques (5.1) – this is discussed at length, as inference is the concept at the heart of the definition.  The guidelines discuss various machine learning techniques that enable inference (supervised, unsupervised, self-supervised, reinforcement, and deep learning).

However, they also discuss logic and knowledge-based approaches, such as early generation expert systems intended for medical diagnosis.  As mentioned above, it is unclear why these approaches are included in light of some of the exclusions below and at what point such a system would be considered to be out of scope.

The section on out-of-scope systems below discusses systems that may not meet the definition due to their limited ability to infer.

  • Generate outputs such as predictions, content, recommendations, or decisions.
  • Influence physical or virtual environments – i.e., influence tangible objects, like a robot arm, or virtual environments, like digital spaces, data flows, and software ecosystems.

What’s (potentially) out of scope? – AI systems that “infer in a narrow manner” (5.2)

The guidelines discuss four types of system that may fall out of scope of the AI system definition.  This is because of their limited capacity to analyse patterns and “adjust autonomously their output”.

Systems for improving mathematical optimisation (42-45):

Interestingly, the guidelines are explicit that “Systems used to improve mathematical optimisation or to accelerate and approximate traditional, well established optimisation methods, such as linear or logistic regression methods, fall outside the scope of the AI system definition”. This clarification could be very impactful, as regression techniques are often used in assessing credit risk and underwriting, applications that could be high-risk if carried out by AI systems.
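
By way of illustration only (the example and feature names below are ours, not taken from the guidelines), this is the sort of long-established technique at issue – a logistic regression scoring model of the kind widely used for consumer credit risk:

```python
# Illustrative sketch only: a hypothetical consumer credit "probability of default"
# model built with plain logistic regression -- the kind of long-established
# statistical method the draft guidelines suggest may fall outside the
# 'AI system' definition. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant features: [income (kEUR), existing debt (kEUR), years employed]
X = rng.normal(loc=[45, 10, 6], scale=[15, 8, 4], size=(500, 3))
# Hypothetical default flag, loosely driven by the debt-to-income ratio
y = (X[:, 1] / np.clip(X[:, 0], 1, None) + rng.normal(0, 0.1, 500) > 0.3).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[38.0, 14.0, 2.0]])
print("Estimated probability of default:", model.predict_proba(applicant)[0, 1])
```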

The guidelines also consider that mathematical optimisation methods may be out of scope even where machine learning is used – “machine-learning based models that approximate functions or parameters in optimization problems while maintaining performance” may be out of scope, for example where “they help to speed up optimisation tasks by providing learned approximations, heuristics, or search strategies.”

The guidelines place an emphasis on long-established methods falling out of scope.  This could be because the AI Act looks to address the dangers of new technologies for which the risks are not yet fully understood, rather than well-established methods.  This emphasis also chimes with the grandfathering provisions in the AI Act – AI systems already placed on the market or put into service will only come into scope for the high-risk obligations where a substantial modification is made after 2 August 2026.  If they remain unchanged, they could remain outside the scope of the high-risk provisions indefinitely (unless used by a public authority).  There is no grandfathering for prohibited practices.

Systems may still fall out of scope of the definition even where the process they are modelling is complex, for example, machine learning models approximating complex atmospheric processes for more computationally efficient weather forecasting.  Machine learning models predicting network traffic in a satellite telecommunications system to optimise allocation of resources may also fall out of scope.
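
To illustrate the idea of “learned approximations” speeding up an established method, the sketch below (our example, not one drawn from the guidelines) trains a cheap machine-learning surrogate for an expensive calculation and then uses the surrogate inside an optimisation search:

```python
# Illustrative sketch (hypothetical example): a machine-learning surrogate that
# approximates an expensive, well-established calculation so an optimisation
# search can run faster. "expensive_model" stands in for e.g. an atmospheric
# process or network-traffic simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_model(x):
    # Placeholder for a costly physics-based computation
    return np.sin(3 * x) + 0.5 * x ** 2

# Learn a cheap approximation from a limited number of expensive evaluations
x_train = np.linspace(-2, 2, 50)
surrogate = RandomForestRegressor(n_estimators=50, random_state=0)
surrogate.fit(x_train.reshape(-1, 1), expensive_model(x_train))

# Search for a minimum using the surrogate, rather than calling the expensive
# model at every candidate point
candidates = np.linspace(-2, 2, 5000).reshape(-1, 1)
best = candidates[np.argmin(surrogate.predict(candidates))]
print("Approximate minimiser found via surrogate:", best[0])
```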

It is worth noting that, in our view, systems combining mathematical optimisation with other techniques would be unlikely to fall under the exemption, as they could not be considered “simple”.  For example, an image classifier using logistic regression and reinforcement learning would likely be considered an AI system.

Basic data processing (46-47):

Unsurprisingly, basic data processing based on fixed human-programmed rules is likely to be out of scope.  This includes database management systems used to sort and filter based on specific criteria, and standard spreadsheet software applications that do not incorporate AI functionality.

Hypothesis testing and visualisation may also be out of scope, for example using statistical methods to create a sales dashboard.
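
For context, the kind of fixed, human-programmed processing the guidelines have in mind might look like the following sketch (our hypothetical data and rules, not an example from the guidelines):

```python
# Illustrative sketch (hypothetical data): fixed, human-programmed filter and sort
# rules plus a simple descriptive statistic -- basic data processing rather than
# inference.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "North", "West"],
    "amount": [1200, 450, 980, 1500],
})

# Fixed rule: show orders over 1000, largest first (database-style sort/filter)
large_orders = sales[sales["amount"] > 1000].sort_values("amount", ascending=False)

# Simple dashboard figure: average sale per region (descriptive statistics only)
dashboard = sales.groupby("region")["amount"].mean()

print(large_orders)
print(dashboard)
```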

Systems based on classical heuristics (48):

These may be out of scope because classical heuristic systems apply predefined rules or algorithms to derive solutions.  The guidelines give a specific example of a (ground-breaking and highly complex) chess-playing computer that used classical heuristics but did not require prior learning from data.

Classical heuristics are apparently excluded because they “typically involve rule-based approaches, pattern recognition, or trial-and-error strategies rather than data-driven learning”.  However, it is unclear why this would be determinative, as paragraph 39 suggests various rule-based approaches that are presumably in scope.
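
A heavily simplified sketch of a classical heuristic (our own illustration, not the system cited in the guidelines) shows the point: the rules are written by a programmer and nothing is learned from data.

```python
# Illustrative sketch (our simplification): a handcrafted chess "material count"
# heuristic -- predefined, human-written rules with no learning from data.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # pawn, knight, bishop, rook, queen

def material_score(white_pieces, black_pieces):
    """Positive scores favour white; the rules are fixed by the programmer, not learned."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# Example position: white has an extra rook, black an extra pawn
print(material_score(white_pieces="PPPPRRNQ", black_pieces="PPPPPRNQ"))
```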

Simple prediction systems (49-51):

Systems whose performance can be achieved via a basic statistical learning rule may fall out of scope even where they use machine learning methods.  This could be the case for financial forecasting using a basic benchmarking rule; such a baseline may also help assess whether more advanced machine learning models would add value.  However, no bright line is drawn between “basic” and “advanced” methods.
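
As a concrete (hypothetical) illustration of a “basic statistical learning rule”, a forecast can simply be the average of recent history, which then serves as the benchmark any more advanced model must beat:

```python
# Illustrative sketch (hypothetical figures): a basic benchmarking rule --
# forecast next month's sales as the average of recent history. A more advanced
# machine learning model would need to beat this baseline on held-out data to
# demonstrate that it adds value.
monthly_sales = [102.0, 98.5, 110.2, 105.7, 99.8, 107.3]

baseline_forecast = sum(monthly_sales) / len(monthly_sales)
print(f"Baseline forecast for next month: {baseline_forecast:.1f}")
```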

Our take

The guidelines appear designed to be both conservative and business-friendly at the same time, leaving the risk that there are no clear rules on which systems are caught.

The examples at 5.2 of systems that could fall out of scope may be welcome – as noted, the reference to linear and logistic regression will be of particular interest to those involved in underwriting life and health insurance or assessing consumer credit risk.  However, the guidelines will not be binding even in final form, and it is difficult to predict how market surveillance authorities and courts will apply them.

In terms of what triage and assessment in an AI governance programme is likely to look like as a result, there is some scope to triage out tools that will not be AI systems, but the focus will need to be on whether the AI Act obligations would apply to a given tool in any event:

1. Triage

Organisations can triage out traditional software not used in automated decision-making and with no AI add-ons, such as a word processor or spreadsheet editor. 

However, beyond that, it will be challenging to assess for any specific use case whether a tool can be said to fall out of scope because it does not infer, or does so only “in a narrow manner”.

2. Prohibitions – focus on whether the practice is prohibited

Documenting why a technology does not fall within the prohibitions should be the focus, rather than whether the tool is an AI system, given the penalties at stake. 

If the practice would be prohibited, assessing whether the tool is an AI system may not be productive – prohibited practices are likely to raise significant risks under other legislation in any event.

3. High-risk AI systems and transparency obligations

For the high-risk category and transparency obligations, again we would recommend leading with an assessment of whether the tool could fall under the use cases in scope for Article 6 or Article 50. 

To the extent that it does, an assessment of whether the tool may fall out of scope of the ‘AI system’ definition would be worthwhile, taking the examples in section 5.2 into account. 

We will be monitoring regulatory commentary, updates to the guidelines, and case law carefully, as this is an area where a small change in emphasis could result in a significant impact on businesses.

Appendix – Article 3(1) and Recital 12

The definition of ‘AI system’ is set out at Article 3(1):

‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; [emphasis added]

Recital 12 gives further colour:

“The notion of ‘AI system’ … should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling…”

The highlighted text appears to introduce a contradiction in looking to exclude rules defined solely by humans while including logic-based, symbolic, or expert systems (for which the rules are defined by humans).