AI System vs. AI Model: A Practical Guide to the AI Act's Most Important Definition
Why a simple logistic regression model might be regulated AI, but a complex rules engine isn't.
The EU AI Act is now law. For teams building and deploying software, that shifts the discussion from abstract legal talk to immediate, production-grade decisions. The first, and most important: Is the system I'm building an "AI system" under the Act?
That answer sets your compliance path, influences your architecture, and defines your operational duties. Get it wrong, and you’re laying down serious legal and technical risk.
This guide turns the legal definition into engineering terms. First, though, we need to separate the two core entities the law addresses: AI systems and AI models.
AI systems are end-to-end applications that deliver capabilities. Think of a loan approval app that uses a model to generate credit scores. This is what users touch, and it’s what the Act’s main rules on risk management, transparency, and human oversight apply to [1, Chapter III].
AI models are components inside systems. The Act sets distinct obligations for model providers, especially General-Purpose AI (GPAI) models like Gemini or GPT-4, centered on documentation and transparency [1, Chapter V].
This article focuses on the foundational question: what exactly is an AI system?
Deconstructing the Definition
Article 3(1) gives us the legal blueprint [1]:
"...a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
For engineers, this boils down to three tests:
1. Baseline Conditions
The system is machine-based (runs on hardware/software) and operates with varying autonomy. Autonomy means "some degree of independence of actions from human involvement" [1, Recital 12]. These criteria cover nearly all modern software.
2. Optional Condition
The system "may exhibit adaptiveness" after deployment—self-learning capabilities that let behavior change during use [1, Article 3(1)]. The word “may” makes this optional, not required.
3. The Core Test
The key capability is to infer. Recital 12 states: "[a] key characteristic of AI systems is their capability to infer" [1]. This capability "transcends basic data processing by enabling learning, reasoning or modelling" [1, Recital 12].
The Act recognizes two families of inference techniques:
Machine Learning: Systems that "learn from data how to achieve certain objectives"—supervised, unsupervised, and reinforcement learning [1, Recital 12].
Logic- and Knowledge-Based: Systems that "infer from encoded knowledge or symbolic representation of the task to be solved"—expert systems, knowledge graphs, inference engines [1, Recital 12].
The Bright Line: What's In, What's Out
Knowing what falls outside the Act’s scope is as important as knowing what’s inside. The Act explicitly excludes systems "based on the rules defined solely by natural persons to automatically execute operations" [1, Recital 12]. That creates a clear safe harbor for traditional software.
Safe Harbor: Clearly NOT AI Systems
The Commission’s guidelines build on this by pointing to systems that "operate based on fixed human-programmed rules, without using AI techniques" [2, para. 46].
Basic Data Processing
Database queries that sort or filter data don’t qualify as AI. The same goes for standard spreadsheets without AI features and BI dashboards that only do descriptive analysis [2, para. 46–47]. These perform predetermined operations.
Classical Heuristics
Consider a traditional chess program using a minimax algorithm with a human-designed evaluation function. Sophisticated, yes, but it applies predefined, experience-based rules rather than learning or reasoning to derive new logic [2, para. 48]. The point: it executes strategies programmers wrote, instead of discovering them through learning or inference.
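To make the safe harbor concrete, here is a minimal sketch (a toy game tree, not a real engine; the tree and values are illustrative) of a minimax search with a hand-written evaluation function. Every rule is authored by a human and fixed at write time; nothing is derived from data.

```python
# Minimal sketch: a classical heuristic search whose rules are authored entirely by humans.
# Illustrative only -- a toy game tree, not a real chess engine.

def evaluate(position):
    """Hand-written evaluation: the 'knowledge' here is fixed by the programmer,
    not learned from data. In a chess engine this would be material counts,
    pawn-structure bonuses, and so on -- all human-authored rules."""
    return position

def minimax(node, maximizing):
    """Classic minimax: explores the tree and backs up scores using fixed,
    predefined rules. Nothing about its logic changes with experience."""
    if isinstance(node, int):          # terminal position
        return evaluate(node)
    child_scores = [minimax(child, not maximizing) for child in node]
    return max(child_scores) if maximizing else min(child_scores)

# The maximizing player picks the branch whose worst case is best.
tree = [[3, 5], [2, [9, -1]]]
print(minimax(tree, maximizing=True))   # -> 3
```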
Trivial Estimation
The guidelines exclude systems using a "basic statistical learning rule" mainly for benchmarking. Their example—"using the average temperature of last week for predicting tomorrow's temperature"—shows a system so simple it doesn’t meet the inference threshold the Act has in mind [2, para. 49].
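In code, that excluded baseline is literally this simple (the temperatures are made up for illustration):

```python
# The guidelines' example of a system below the inference threshold:
# predict tomorrow's temperature as the plain average of the last week.
last_week_temps = [14.2, 15.1, 13.8, 16.0, 15.5, 14.9, 15.3]  # illustrative data

tomorrow_forecast = sum(last_week_temps) / len(last_week_temps)
print(round(tomorrow_forecast, 1))  # a fixed rule, no parameters learned from data
```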
The Grey Zone: When Guidelines Contradict Law
Here’s where it gets tricky—and where engineers should be careful.
The Commission’s non-binding guidelines propose extra carve-outs that appear to conflict with the Act’s plain text. The biggest is linear and logistic regression. The guidelines suggest these methods may "fall outside the scope" because they allegedly "do not transcend 'basic data processing'" [2, para. 42].
That clashes with the Act’s definition:
What happens in logistic regression: The algorithm learns coefficients from your data through optimization (gradient descent or maximum likelihood estimation).
Why that’s inference: The model discovers optimal weights by analyzing patterns in the training set.
The contradiction: The guidelines acknowledge these models "have the capacity to infer" [2, para. 42].
The attempted rationale—that these methods have been "used in a consolidated manner for many years" [2, para. 42]—doesn’t change what they are. Age doesn’t turn learning into non-learning.
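To see the inference step in concrete terms, here is a minimal sketch using scikit-learn and synthetic data. Nobody writes the decision rule; an optimizer derives the coefficients from the training set.

```python
# Minimal sketch with scikit-learn and synthetic data: nobody writes the decision
# rule here -- the optimizer derives the coefficients from the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # two synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)             # learning step: maximum likelihood
print(model.coef_, model.intercept_)               # weights inferred from data, not hand-coded
print(model.predict([[1.0, -0.3]]))                # output: a prediction that can influence decisions
```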
This matters because it creates legal uncertainty. The guidelines themselves note they are "not binding" and that "any authoritative interpretation... may ultimately only be given by the Court of Justice" [2, para. 7].
The prudent engineering decision: Follow the law, not the guidelines. Treat any system that learns from data—including linear and logistic regression—as an AI system under the Act.
Applying the Framework: Real-World Classification
To make this concrete, start with a simple question: does the system generate predictions, content, recommendations, or decisions? If it’s purely descriptive—like a basic data visualization—it’s likely out of scope. If it produces any of those outputs, look at how they’re generated.
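If you want to operationalize that triage, a simple checklist helper can encode the tests from this guide. This is an illustrative sketch, not an official tool; the field names and logic reflect this article's reading of Article 3(1).

```python
# Illustrative triage helper encoding the tests discussed in this guide.
# Not an official tool -- field names and logic are this article's framing.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    produces_outputs: bool            # predictions, content, recommendations, or decisions?
    learns_from_data: bool            # ML: parameters derived from training data
    reasons_over_knowledge: bool      # logic/knowledge-based inference (e.g. a reasoning engine)
    rules_fully_human_authored: bool  # every rule written and fixed by people

def likely_ai_system(p: SystemProfile) -> bool:
    """Rough first-pass classification under Article 3(1) as read in this guide."""
    if not p.produces_outputs:
        return False                          # purely descriptive tools are likely out of scope
    if p.learns_from_data or p.reasons_over_knowledge:
        return True                           # the core 'capability to infer'
    return not p.rules_fully_human_authored   # fixed human-written rules -> safe harbor

print(likely_ai_system(SystemProfile(True, True, False, False)))   # e.g. logistic regression -> True
print(likely_ai_system(SystemProfile(True, False, False, True)))   # e.g. fixed decision tree -> False
```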
Expert Systems
Not AI: A workflow engine executing a fixed decision tree hard-coded by a business analyst—it follows human-authored logic exactly as written [1, Recital 12].
Is AI: A medical system using a reasoning engine to infer potential diagnoses from a knowledge graph [2, para. 39]. It derives new conclusions from encoded knowledge rather than just executing predetermined rules. If used for diagnostic purposes, it would likely also be high-risk, typically as a safety component of a medical device covered by Annex I [1, Article 6(1)], triggering the Act's Chapter III requirements.
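The difference shows up clearly in code. Below is a minimal, hypothetical forward-chaining reasoner: the rules are human-authored, but the conclusions for any given input are derived from encoded knowledge rather than hard-coded.

```python
# Minimal, hypothetical sketch of logic-based inference: a forward-chaining
# engine that derives new facts from encoded knowledge (rules + known facts).
facts = {"fever", "cough"}

# Encoded knowledge: (premises, conclusion). The rules are human-authored,
# but the conclusions below are derived for this input, not hard-coded.
rules = [
    ({"fever", "cough"}, "suspected_respiratory_infection"),
    ({"suspected_respiratory_infection", "low_oxygen"}, "urgent_referral"),
    ({"suspected_respiratory_infection"}, "recommend_pcr_test"),
]

changed = True
while changed:                      # keep applying rules until no new fact appears
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # a conclusion inferred from encoded knowledge
            changed = True

print(facts)  # includes derived conclusions the programmer never enumerated for this input
```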
Forecasting Tools
Not AI: A script that calculates next quarter’s sales by adding a fixed 5% to the previous quarter—it’s a simple calculation.
Is AI: A Prophet or ARIMA model that learns trends from historical data to forecast energy consumption for the power grid. Because it supports the "management and operation of... supply of electricity", it can fall within a high-risk category if deployed as a safety component of that critical infrastructure [1, Annex III, point 2(a)].
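A side-by-side sketch makes the contrast explicit (synthetic numbers; a production system would use Prophet or ARIMA, but an ordinary least-squares trend is enough to show where the parameters come from):

```python
# Contrast sketch with synthetic numbers. Production systems would use Prophet or
# ARIMA; an ordinary least-squares trend is enough to show the distinction.
import numpy as np

history = np.array([100.0, 104.0, 110.0, 113.0, 119.0, 125.0])  # past quarterly demand

# Not AI: a fixed, human-chosen rule -- add 5% to the last observation.
rule_based_forecast = history[-1] * 1.05

# Learns from data: slope and intercept are estimated from the history itself.
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, deg=1)     # parameters inferred from data
learned_forecast = slope * len(history) + intercept  # extrapolate one step ahead

print(rule_based_forecast, learned_forecast)
```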
Foundation Model Applications
Any customer service chatbot built with the Gemini API or GPT-4 is an AI system, no matter how simple your implementation. In that case, you become a "deployer" under the Act [1, Article 3(4)], which means you must follow the provider’s instructions for use and independently assess whether your specific use case creates high risks [1, Article 26].
Global Perspective: International Alignment
The EU definition doesn’t stand alone. It aligns with a global consensus across major standards bodies:
OECD: Serves as the blueprint for the Act, focusing on systems that infer how to generate outputs [3].
NIST: Uses a similar definition emphasizing machine-based systems making predictions with varying automation [4].
ISO/IEC 22989: Centers on inference as the "process by which conclusions are derived from known premises" [5].
This shows broad agreement that inference, autonomy, and influence on physical or virtual environments are core AI hallmarks. There are nuances, though. The EU’s emphasis on learning from data or reasoning over knowledge is more specific than some international definitions, which can create compliance challenges for global deployments.
From Classification to Compliance
Once you decide a system is an AI system, the work starts. Keep a registry of all algorithmic systems with formal classifications, and create Architecture Decision Records that document your reasoning, the tests you applied, and the evidence you considered. These become key compliance artifacts.
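The shape of that registry is up to you. As one illustrative sketch (the field names are this article's suggestion, not terminology mandated by the Act), an entry might look like this:

```python
# Illustrative registry entry -- field names are this article's suggestion,
# not terminology mandated by the Act.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlgorithmicSystemRecord:
    name: str
    owner_team: str
    classification: str                  # "ai_system" | "not_ai_system"
    rationale: str                       # link to the Architecture Decision Record
    inference_technique: str             # e.g. "gradient-boosted trees", "fixed rules"
    annex_iii_category: Optional[str]    # e.g. "creditworthiness (point 5(b))" or None
    last_reviewed: str

registry = [
    AlgorithmicSystemRecord(
        name="credit-scoring-service",
        owner_team="risk-engineering",
        classification="ai_system",
        rationale="docs/adr/0042-credit-scoring-is-an-ai-system.md",
        inference_technique="gradient-boosted trees",
        annex_iii_category="creditworthiness (point 5(b))",
        last_reviewed="2025-01-15",
    ),
]
```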
For systems that qualify as AI, check whether the intended purpose appears in Annex III—credit scoring, recruitment, critical infrastructure management [1, Annex III]. If it does, you’re dealing with a high-risk system and need comprehensive measures:
Human Oversight
High-risk systems must be "designed and developed in such a way... that they can be effectively overseen by natural persons" [1, Article 14(1)]. That’s more than a stop button. It means thoughtful intervention tools, review processes, and clear accountability [1, Article 14(3)].
Data Governance
Assess the relevance, representativeness, and suitability of training, validation, and testing datasets [1, Article 10(2)–(3)]. Include systematic checks for potential biases that could cause discriminatory outcomes [1, Article 10(2)(f)].
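As one example of what a systematic check can look like, here is a minimal sketch on synthetic data that compares label rates across a sensitive attribute before any model is trained. The metric and the 0.1 threshold are illustrative assumptions, not values from the Act.

```python
# Illustrative bias check on a synthetic training set: compare label rates
# across groups of a sensitive attribute. The 0.1 threshold is an assumption
# for this sketch, not a value from the Act.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label": [1, 0, 1, 0, 0, 1, 0, 1],
})

rates = train.groupby("group")["label"].mean()
disparity = rates.max() - rates.min()      # crude demographic-parity gap in the data
print(rates.to_dict(), round(disparity, 2))

if disparity > 0.1:
    print("Flag for review: label rates differ materially across groups")
```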
Security & Robustness
Beyond standard cybersecurity, test against AI-specific threats like "data poisoning... model poisoning... adversarial examples" [1, Article 15(5)]. These target the learning mechanisms themselves, not just the surrounding infrastructure.
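To make "adversarial examples" tangible, here is a minimal FGSM-style sketch against a logistic regression trained on synthetic data. Real robustness testing would use dedicated tooling and a proper threat model; this only shows the core idea of nudging an input along the loss gradient and checking whether the decision flips.

```python
# FGSM-style sketch against a logistic regression (synthetic data).
# Real robustness testing would use dedicated tooling; this only shows the idea:
# nudge an input along the loss gradient and check whether the prediction flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([[0.3, -0.2]])                       # a correctly classified input
y_true = 1
w = model.coef_[0]
p = model.predict_proba(x)[0, 1]
grad_wrt_input = (p - y_true) * w                 # gradient of the log-loss w.r.t. the input
x_adv = x + 0.5 * np.sign(grad_wrt_input)         # epsilon = 0.5, an assumed perturbation budget

print(model.predict(x), model.predict(x_adv))     # does the decision flip under perturbation?
```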
Post-Market Monitoring
Compliance continues after deployment. You need ongoing monitoring to detect anomalies and manage residual risks as the system runs in production [1, Article 72]. This helps catch drift, emerging biases, and unexpected behaviors before they cause harm.
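One common building block is a drift check comparing a live feature's distribution against its training baseline. Here is a minimal Population Stability Index sketch with synthetic data; the bucket count and the 0.2 alert threshold are conventional rules of thumb, not requirements from the Act.

```python
# Drift-check sketch using the Population Stability Index (PSI) on one feature.
# The bucketing, the 0.2 alert threshold, and the data are illustrative assumptions.
import numpy as np

def psi(baseline, live, buckets=10):
    """PSI between a training-time baseline and live production data."""
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
    edges[0] = min(edges[0], live.min()) - 1e-9     # widen edges to cover live data
    edges[-1] = max(edges[-1], live.max()) + 1e-9
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    base_frac = np.clip(base_frac, 1e-6, None)      # avoid division by zero / log(0)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 5000)               # feature distribution at training time
live = rng.normal(0.4, 1.2, 5000)                   # the same feature in production, drifted

score = psi(baseline, live)
print(round(score, 3), "ALERT" if score > 0.2 else "ok")  # >0.2 is a common rule of thumb
```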
The Philosophy Behind the Definition
The Act’s focus on inference rather than complexity reflects a clear regulatory view: systems that derive their own logic need different governance than those executing human logic. A complex rules engine with perfect accuracy but human-written rules is more predictable and auditable than a simple model that learns from data.
That makes sense from a risk perspective. When humans write the rules, decisions can be traced back to specific logic. When systems learn or reason, the logic emerges from data or knowledge processing in ways that may be opaque or unexpected [1, Recital 12]. The framework responds to that difference in how systems reach decisions.
Understanding this distinction helps you build systems that are both innovative and compliant. The Act doesn’t penalize complexity or sophistication—it recognizes that systems which generate their own logic through learning or reasoning need special oversight, however simple or complex they appear.
References
[1] European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
[2] European Commission. (2025). Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act). https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application
[3] Organisation for Economic Co-operation and Development. (2024). Explanatory memorandum on the updated OECD definition of an AI system. OECD Artificial Intelligence Papers, No. 8. OECD Publishing. https://doi.org/10.1787/623da898-en
[4] National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. https://doi.org/10.6028/NIST.AI.100-1
[5] International Organization for Standardization. (2022). Information technology — Artificial intelligence — Artificial intelligence concepts and terminology (ISO/IEC 22989:2022). https://www.iso.org/standard/74296.html
Disclaimer: The ideas, arguments, and insights in this article are entirely my own, born from my professional experience and reading. To bridge the gap between concept and clear prose, I partner with AI tools to refine the language, assisting with grammar and suggesting more precise phrasing. Every sentence is personally reviewed, and I hold full editorial responsibility for the final content and its message.