The Finalized EU AI Act: Implications and Insights
Today, the European Union’s landmark AI regulation, the Artificial Intelligence Act, comes into force. This concludes more than five years of lawmaking and negotiations, and inaugurates a much longer phase of implementation, refinement, and enforcement. Even though the law has been written, it is far from final. The last deliberations took place under significant time constraints due to the European Parliament elections in June of this year. As a result, many provisions of the regulation remain vague and require additional specifications to guide AI developers and deployers in their compliance efforts. Especially for the European Commission, the EU’s executive branch, which has been tasked with issuing a swath of secondary legislation, the real work begins now.
Scope: The regulation applies to AI developers and deployers whose system or model, including its output, is on the EU market or otherwise in service in the EU, irrespective of their physical location inside or outside the EU. There are three broad exceptions:
- AI for military, defense, or national security purposes
- AI for scientific research
- AI used by individuals for personal, nonprofessional purposes
Notably, the regulation does not apply to EU developers exporting their technology for use outside of the 27 member countries. In other words, it is not prohibited to produce and export AI systems whose use is banned in the EU itself.
Definition of AI: The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Art. 3(1)). As such, the EU’s definition is well-aligned with that of the Organisation for Economic Co-operation and Development (OECD) and with the definition used in the White House’s October 2023 AI executive order, a small yet noteworthy step of convergence.
AI Risk Classification and Corresponding Obligations
At the heart of the AI Act stands its risk-based approach, which classifies AI systems according to the level of risk they pose to health, safety, and fundamental rights. To do so, the AI Act categorizes AI systems into two broad groups: single-purpose AI and general-purpose AI (GPAI). The risk level of single-purpose AI is determined by its use case, whereas that of GPAI is determined by its capabilities.
The regulation distinguishes between four levels of risk for single-purpose AI systems: 1) unacceptable risk, 2) high risk, 3) transparency risk, and 4) minimal risk. The majority of AI systems will fall into the minimal-risk category (e.g., AI used to filter junk emails) and are not subject to any requirements. Developers are instead encouraged to follow voluntary codes of conduct.
Systems that are considered to pose transparency risks include AI that can generate synthetic content, as well as biometric categorization and emotion recognition systems. These come with transparency and notification obligations: AI-generated content must be identifiable as such, for example through watermarking or other state-of-the-art methods, and individuals who are interacting with a generative AI or who are subject to a biometric system’s operation must be notified.
AI systems that pose unacceptable risks to health, safety, and fundamental rights, as defined by the AI Act, are banned from EU markets. The types of AI systems that fall under this designation have grown over the course of negotiations to the following eight:
- Deceptive or manipulative AI systems aiming to influence behavior and choices.
- AI systems exploiting vulnerabilities related to disability, age, and social or economic circumstances.
- AI systems evaluating people based on their social behavior (“social scoring”).
- AI systems for predictive policing (with exceptions).
- AI systems creating or expanding facial recognition databases through untargeted web- or CCTV-scraping.
- Emotion recognition AI in the workplace or educational institutions, except when used for medical or safety reasons.
- Biometric categorization systems that infer race, political opinions, religious or philosophical beliefs, trade union membership, sex life, or sexual orientation (except in the area of law enforcement).
- The use of “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement (with exceptions).
The majority of the regulatory requirements relate to high-risk AI systems. These can be one of two types:
- AI that is a safety component of a product, or itself a product subject to existing safety legislation and assessment, such as toys, vehicles, or medical devices.
- AI that is used for a specific, sensitive purpose. In total, the regulation covers 25 use cases that fall within the following eight high-level areas:
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to essential services
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
For an AI system in these sensitive contexts to be considered high-risk, it needs to materially influence the outcome of a decision or action. This means, for example, that an AI system that performs a narrow procedural task within the admissions process of a university—such as extracting names, addresses, and contact information from an application pool to populate a database of candidates—would be classified as low-risk despite the sensitivity of its use context.
High-risk AI systems are subject to a variety of requirements, which are laid out in the AI Act’s Chapter III, sections 2 and 3. They must meet accuracy, robustness, and cybersecurity standards, alongside provisions on data quality and governance for their training, testing, and validation data. In addition, they require strict monitoring and oversight. With the exception of uses in law enforcement and border management, actions or decisions taken on the basis of the output of a high-risk AI system must be reviewed and confirmed by two competent human supervisors. To facilitate this, high-risk AI systems must be equipped with understandable user interfaces, as well as logging and recording capabilities.
In addition to ensuring their system’s conformity with the above-described requirements, developers and deployers of high-risk AI systems face additional obligations. The cornerstone is the establishment of a quality management system by the developer that ensures its system’s conformity with the AI Act not only at launch but over the entire AI life cycle. Under the quality management system, developers must set up and implement a risk management framework, as well as an accountability framework, and draft detailed technical documentation that describes, among other things, the development methodologies, model specifications, datasets used, testing and validation strategies, and the results of performance evaluations. They must also implement a post-market monitoring system, including procedures for incident logging, identification, and reporting. Once all this is complete, developers must undertake a conformity assessment that verifies the fulfillment of all requirements, and register their high-risk AI system in an EU-wide database.
Deployers, on the other hand, are responsible for ensuring the AI system is operated in line with its intended purpose and instructions, for implementing monitoring and competent human oversight, and for complying with notification obligations towards third parties. Deployers in the areas of healthcare, education, housing, social services, administration, banking, and insurance must furthermore perform a fundamental rights impact assessment prior to deploying the high-risk AI system.
Rules for General-Purpose AI
A general-purpose AI model is defined as “an AI model […] that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications […]” (Art. 3(63)). The AI Act distinguishes between GPAI models and systems with and without systemic risks. Under this classification, all GPAI models and systems are subject to a baseline set of requirements, while only those posing systemic risks must comply with additional obligations.
All developers of GPAI models are required to prepare and maintain detailed technical documentation of the model and provide this documentation to authorities upon request. They are furthermore obligated to facilitate compliance with the law for downstream developers who wish to integrate the model into their own AI systems, by providing them with the necessary additional documentation, for example, on model interactions with other hardware or software. GPAI developers are also required to comply with EU copyright law and to prepare and publish a summary of the content used to train their model.
Some GPAI models are classified as posing systemic risks. Under the AI Act, systemic risks stem either from a GPAI model’s high-impact capabilities, which may threaten public health, safety, security, fundamental rights, or society as a whole, or from the model’s widespread integration in the EU market. Until more appropriate benchmarks and model evaluation methodologies are developed, a GPAI model is presumed to pose systemic risks when a) the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs) or b) it is designated as such by the European Commission based on metrics such as the number of registered users or model parameters.
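For rough orientation, the compute criterion is often checked with the back-of-the-envelope heuristic that training a dense transformer consumes roughly 6 × parameters × training tokens floating-point operations. The sketch below relies on that heuristic and on hypothetical model figures; neither is part of the AI Act, which only specifies the 10^25 threshold.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP presumption of
# systemic risk. The ~6 * parameters * training-tokens approximation is a
# common heuristic for dense transformer training, not something the Act
# prescribes, and the example figures below are hypothetical.

THRESHOLD_FLOP = 1e25  # cumulative floating-point operations used in training


def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute as ~6 * N * D floating-point operations."""
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flop(n_parameters, n_training_tokens) > THRESHOLD_FLOP


# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
print(f"{estimated_training_flop(70e9, 15e12):.2e}")  # 6.30e+24
print(presumed_systemic_risk(70e9, 15e12))            # False: below the threshold
```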
Developers of GPAI models with systemic risks must, in addition to the obligations for all GPAI developers, establish a risk management system, perform additional model evaluations, engage in incident monitoring and reporting, and ensure appropriate levels of model cybersecurity.
Open-Source Exceptions
AI models, systems, and components released under a free and open-source license are exempt from many, though not all, requirements of the regulation.
- Single-purpose AI: Free and open-source AI systems are exempt from obligations unless they are classified as posing transparency, high, or unacceptable risks.
- General-purpose AI: GPAI without systemic risks that is released under a free and open-source license is exempt from all requirements except the obligations to comply with copyright law and to provide a summary of the training data. To benefit from this exception, however, models must be truly free and open source. That is, their parameters, including model weights, as well as information on model architecture and model usage, must be publicly available under a free and open-source license that allows users to run, copy, distribute, and modify the software and data. No exceptions are made for free and open-source GPAI with systemic risks (a simplified sketch of this decision logic follows the list).
- Obligations along the value chain: Under the AI Act, developers are generally required to cooperate with downstream providers who integrate their models or systems into a new AI system, in order to enable the new system’s compliance with the requirements of the AI Act. This cooperation may include providing additional documentation, technical access, and technical assistance. Open-source developers of single-purpose AI and of GPAI without systemic risks are exempt from this duty to cooperate along the value chain (but are encouraged to adopt documentation practices such as model cards and data sheets).
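The decision logic of the GPAI exception described in this section can be condensed into a short sketch. The function, its boolean inputs, and the simplifications below are illustrative assumptions rather than a legal test.

```python
# Simplified sketch of the open-source exception for GPAI as described above.
# The boolean inputs flatten what is a more nuanced legal assessment; this is
# an illustration, not a compliance tool.

def qualifies_for_open_source_exception(released_under_foss_license: bool,
                                        weights_architecture_usage_public: bool,
                                        systemic_risk: bool) -> bool:
    """Return True if a GPAI model benefits from the open-source exception.

    Even when True, the obligations to comply with EU copyright law and to
    publish a training-data summary still apply.
    """
    if systemic_risk:
        # No open-source exception for GPAI with systemic risks.
        return False
    return released_under_foss_license and weights_architecture_usage_public


# An openly licensed model with published weights and no systemic risk qualifies.
print(qualifies_for_open_source_exception(True, True, False))  # True
# The same model above the systemic-risk threshold does not.
print(qualifies_for_open_source_exception(True, True, True))   # False
```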
Penalties
Failure to comply with the AI Act’s requirements is expensive. Penalties range from up to EUR 7.5M (or 1% of global annual turnover, whichever is higher) for supplying incorrect or misleading information to the authorities or during the conformity assessment process, to up to EUR 35M (or 7% of global annual turnover, whichever is higher) for placing a banned AI system on the EU market.
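The “whichever is higher” mechanics reduce to a simple maximum. The sketch below walks through the two brackets mentioned above using a hypothetical turnover figure.

```python
# The AI Act's fines follow a "fixed cap or percentage of global annual
# turnover, whichever is higher" pattern. The two brackets below are the ones
# discussed above; the EUR 2B turnover figure is hypothetical.

def penalty_ceiling(turnover_eur: float, fixed_cap_eur: float, share_of_turnover: float) -> float:
    """Return the maximum possible fine for a given penalty bracket."""
    return max(fixed_cap_eur, share_of_turnover * turnover_eur)


turnover = 2_000_000_000  # hypothetical global annual turnover of EUR 2B

# Supplying incorrect or misleading information: up to EUR 7.5M or 1% of turnover.
print(penalty_ceiling(turnover, 7_500_000, 0.01))   # 20000000.0 (the 1% share applies)
# Placing a banned AI system on the market: up to EUR 35M or 7% of turnover.
print(penalty_ceiling(turnover, 35_000_000, 0.07))  # 140000000.0 (the 7% share applies)
```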
Governance Structure: Actors and Responsibilities
EU Level
European Commission/AI Office: At the core of the AI Act’s overall implementation and governance efforts is the European Commission’s newly founded AI Office. Opened just over a month ago, on June 16, 2024, the AI Office and its 140 staff are tasked with more than 100 specific responsibilities under the Act that must be carried out over the coming months and years. Broadly, these fall into four categories: 1) Establishing the AI governance system at EU and member-state level; 2) Issuing secondary legislation that facilitates implementation of and compliance with the regulation; 3) Enforcing the rules for AI developers and deployers, with particular regard to GPAI; and 4) Conducting ex-post evaluations of the law to assess the continued relevance and appropriateness of its thresholds and of its lists of high-risk and prohibited use cases. A full list of the responsibilities and corresponding timelines can be found here.
European AI Board: The board consists of representatives from EU member states and is tasked with preventing regulatory fragmentation, supporting the harmonized implementation of the AI Act across member countries, and assisting the AI Office.
Advisory Forum: The advisory forum consists of stakeholders from industry, small and medium-sized enterprises, academia, and civil society. It provides technical expertise from a diverse set of perspectives to the AI Office and the Board to be taken into consideration in the implementation process.
Scientific Panel of Independent Experts: Members of the panel are experts appointed by the European Commission who support the AI Office’s enforcement efforts with regard to GPAI. They are specifically tasked with identifying systemic risks and providing guidance on model classification based on the latest scientific understanding.
National Level
Market surveillance authorities: Each EU member country must designate a national market surveillance authority within the next year. These authorities are in charge of monitoring and enforcing compliance in their national markets, and their tasks also include managing incident tracking, supervising regulatory sandboxes, and handling citizen complaints. Data protection authorities have made the case for taking over these duties, pointing to their long experience providing independent oversight and developing guidelines, as well as their expertise in the interdisciplinary field of data processing and fundamental rights. By design of the AI Act, data protection authorities will be in charge of overseeing AI use in the sectors of law enforcement, border control and migration, as well as judicial and electoral administration.
Notified bodies: Any organization with sufficient expertise may apply to become a notified body, in other words, an accredited algorithmic auditor. Notified bodies are independent organizations (or other legal entities) accredited and authorized to conduct conformity assessments for high-risk AI systems. The AI Act’s reliance on notified bodies to assess developers’ and deployers’ compliance efforts appears to be an attempt to stimulate the AI audit industry in the EU. Yet, since only a few types of high-risk systems require a conformity assessment by a third party, this effect may be limited overall.
What’s Next?
The AI Act enters into force on August 1, 2024, but the law as a whole only becomes applicable on August 2, 2026, two years later. This gives AI developers and deployers additional time to familiarize themselves with the requirements, set up the necessary compliance procedures, and bring their AI systems into conformity. However, the regulation provides for three special transition periods for certain categories of rules:
- The general provisions—covering the regulation’s purpose, scope, and definitions, as well as the prohibition of unacceptable-risk AI systems—will apply after six months, as of February 2, 2025.
- Penalties, the rules for GPAI, as well as the provisions on the law’s governance structure will apply after one year, that is, as of August 2, 2025.
- The obligations for high-risk AI systems that are subject to other EU product safety rules and assessments, such as toys, radio equipment, or vehicles, will apply after three years, as of August 2, 2027.
Moreover, for AI models and systems that are already on the EU market, the AI Act provides for extended compliance periods, ranging from 2027 for GPAI to 2030 for high-risk AI. As such, a visible regulatory impact before 2026 is unlikely. The next year will require substantial investment in setting up the institutional governance structure at both the EU and national levels. With limited budgets (the AI Office is endowed with only EUR 46.5M, and national bodies will likely have far less), attracting the necessary talent to Brussels and all corners of the EU will present a tremendous challenge to the AI Act’s implementation and enforcement.
Critically, despite the AI Act’s adoption, regulatory uncertainty about how to comply with its requirements remains high. The regulation relies heavily on technical standards to demonstrate conformity, but compared to other sectors, the current AI standards landscape is still in its infancy. Standards development is a complex and lengthy process, and the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), the European standard-setting bodies, are not expected to issue harmonized standards that operationalize the requirements of the AI Act before 2027. To bridge the gap once the first requirements become applicable, one year after the AI Act’s entry into force, the AI Office is charged with issuing Codes of Practice to guide GPAI developers in their compliance efforts. While it is yet unclear what these Codes of Practice will look like, their development will likely involve multi-stakeholder consultations, working groups on the different obligations of GPAI developers, and the definition of measures and key performance indicators whose operationalization will lead to a presumption of conformity with the requirements of the AI Act.