A comparative view of AI definitions as we move toward standardization

Discussions of Artificial Intelligence (AI) regulation are set to heat up in 2024, following the provisional agreement on the EU AI Act reached in December 2023. The Act is progressing toward a technology-neutral definition of AI, intended to apply to future AI systems as well. In the coming months, multiple states will agree on precise legal definitions that, for the very first time, reflect moral considerations of the role AI will and will not be allowed to play in Europe. Yet formally defining AI remains an ongoing debate.

Precise definitions within a rapidly expanding field are perhaps not the first things that come to mind when asked about pressing issues concerning AI. However, as its influence grows, arriving at one seems essential to any attempt at regulation. Agreeing on what AI is, and what it is not, at a transnational level is proving increasingly important: online spaces rarely respect sovereignty, and the role of AI in public life is expected to grow rapidly.

Different countries and organizations have different definitions, though the AI Act is expected to provide some standardization, not only within the EU but, given its influence, outside of it as well. Beyond providing a framework for businesses to operate within, the definition also signals expectations about what, how, and where AI will act, and what it may develop into. Let's consider how different organizations and states currently define AI systems.

OECD

So far, the AI Act’s definition of AI systems is expected to follow the OECD’s current definition. This seems to be the most influential definition to date, and it reads as follows:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Notably, the OECD’s definition has undergone changes from its first draft to the current one above. The removal of “human-based inputs” and the addition of “decisions” among the outputs reflect a potential for vastly diminished human involvement in decisions and actions. While the definition acknowledges that systems vary in their autonomy, this change opens up the potential for full autonomy. This is controversial, to say the least, and can be expected to feed into growing concerns about AI alignment. If the EU AI Act indeed adopts the same or even a similar definition, it will be interesting to see how it defines personhood, considering the removal of “human-based” from the inputs.

ISO

The International Organization for Standardization (ISO) has defined AI and AI systems as follows:

AI

<engineered system> set of methods or automated entities that together build, optimize and apply a model (3.1.26) so that the system can, for a given set of predefined tasks (3.1.37), compute predictions (3.2.12), recommendations, or decisions

Note 1 to entry: AI systems are designed to operate with varying levels of automation (3.1.7).

Note 2 to entry: Predictions (3.2.12) can refer to various kinds of data analysis or production (including translating text, creating synthetic images or diagnosing a previous power failure). It does not imply anteriority.

<discipline> study of theories, mechanisms, developments and applications related to artificial intelligence <engineered system> (3.1.2)

AI system

engineered system featuring AI <engineered system> (3.1.2)

Note 1 to entry: AI systems can be designed to generate outputs such as predictions (3.2.12), recommendations and classifications for a given set of human-defined objectives.

Note 2 to entry: AI systems can be designed to operate with varying levels of automation.

Here, the ISO specifies what kind of system is in scope, notably an engineered one. This is interesting, as previous definitions have been somewhat ambiguous about which technologies will, in fact, fall under such legislation. There is also a focus on the cooperation of different entities, not specified as human or otherwise. Notably, the definition does not mention the origin or kind of input being processed, though “varying levels of automation” can be read as covering the balance between human and non-human inputs, thus allowing for varying levels of autonomy.

South Korea 

South Korea also adopted a definition of AI in its 2023 AI Act, which reads as follows:

Article 2 (Definitions) As used in this Act, the following terms have the following meanings.

  1. “Artificial intelligence” refers to the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgment, and language comprehension.

  2. “Artificial intelligence technology” means hardware technology required to implement artificial intelligence, software technology that systematically supports it, or technology for utilizing it.

While the Act does not mention AI systems, it attributes human capacities, such as perception, to an electronic entity. It also omits “decisions,” though attributing human characteristics perhaps makes that point redundant, as such an entity can be interpreted as an actor operating on a similar level to humans. Further, the definition of AI technology is expansive: since it covers any “hardware technology required to implement artificial intelligence,” even a cable providing power could, under the current wording, be classified as a piece of AI technology.

US Executive Order 

In late 2023, the Biden administration issued an executive order in which it defined an AI system:

“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”

Here, the Biden administration merges human- and machine-based inputs, highlighting the cooperation between the two actors. And while an executive order is not legislation, it signals intent: it shows more caution, and perhaps more skepticism, toward AI acting autonomously than any of the other major actors. Interestingly, the distinction between virtual and “real” environments (presumably meaning physical, though the wording remains problematic) reflects a similar caution about the scope and spheres the administration wants AI to occupy. Anchoring the system to human-defined objectives curbs the controversial potential for autonomy present in the previous definitions, though it also constrains systems communicating independently of human inputs, which could prove problematic in practice.

Answers we are excited to see 

As we enter an important legislative year for AI, we look forward to answers to the following questions about the legal definitions of AI systems:

  • What definition of personhood will accompany the AI system definition in the AI Act, assuming it indeed follows the OECD’s definition? And what does this mean for the intellectual property protection of something made entirely by an AI, given that the definition allows for a large degree of autonomy?
  • What kind of technology will be considered AI? Will it range from Excel spreadsheets to LLMs? Are we considering “machine-based systems,” “engineered systems,” or something else?
  • Will legislation be strong enough, or perhaps broad enough, to encompass the massive changes AI is currently undergoing? And what predictions can we infer the EU is making about the future advancement of AI?