On 21 May 2024, the Council of the European Union gave its final approval to the much-anticipated Artificial Intelligence regulation (the "AI Act"), establishing harmonized rules for the development, placing on the market, and use of AI within the European Union (EU). This marks the EU's first comprehensive effort to regulate AI technologies, reflecting their growing importance in modern economies.
What is the AI Act?
The AI Act provides a comprehensive legal framework for the regulation of AI systems, based on the risks they pose to fundamental rights, safety, and public health. It aims to strike a balance between fostering innovation and ensuring adequate oversight.
After nearly three years of legislative negotiations, the Act has substantially expanded from its original proposal. One of the most notable additions is the inclusion of general-purpose AI models as a newly regulated category—underlining the EU’s focus on AI as a strategic area of concern.
Highlights of the AI Act
Risk-based categorization of AI systems
The AI Act introduces a tiered framework, categorizing AI systems into four groups based on the risks they present:
- Prohibited AI practices (Art. 5): These include systems using manipulative or exploitative techniques that could lead to harm or discrimination. Such practices are strictly banned across the EU. Examples include social scoring and certain types of biometric surveillance.
- High-risk AI systems (Chapter III, incl. Articles 9 and 25): High-risk AI covers applications in regulated sectors such as medical devices, civil aviation, vehicles, biometrics, and critical infrastructure. These systems are identified through Annex I and Annex III and are subject to strict compliance obligations for providers, importers, distributors, and deployers.
  - Under Article 9, a risk management system must be established, implemented, documented, and maintained.
  - Providers face detailed obligations, including risk assessments, technical documentation, and ongoing monitoring throughout the system's lifecycle, while Article 25 allocates responsibilities along the AI value chain.
  - If any actor in the value chain has reason to believe the system is non-compliant, deployment must be suspended.
- Other AI systems: These are AI systems that interact with individuals, such as chatbots or AI-generated content tools. The Act imposes transparency obligations, including Article 50, which requires users to be informed that content is AI-generated or manipulated.
- General-purpose AI models (GPAI): GPAI refers to models capable of powering a broad range of applications. These are subject to lighter obligations unless they qualify as general-purpose AI models with systemic risk, that is, models with high-impact capabilities, often trained using exceptionally large computational resources. Where systemic risk applies, additional requirements include risk mitigation policies, adversarial testing, and model evaluation obligations.
Governance and Enforcement (Chapter VII)
The Act establishes a multi-level governance structure:
- At European level:
  - An AI Office within the European Commission to oversee enforcement.
  - The European Artificial Intelligence Board, with representatives from each Member State, to advise and coordinate.
  - An Advisory Forum offering technical expertise.
  - A scientific panel of independent experts to support enforcement.
- At national level:
  Each Member State must designate at least one notifying authority and one market surveillance authority to oversee and enforce compliance at the domestic level.
Luxembourg’s Regulatory Sandbox
On 14 June 2024, Luxembourg’s CNPD (Commission Nationale pour la Protection des Données) officially launched a Regulatory Sandbox for AI. This initiative allows companies registered in Luxembourg to test their AI systems in a collaborative and supervised environment, with a focus on GDPR compliance.
Upcoming Steps
The AI Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024, in accordance with Article 113. Its provisions will become applicable in stages:
- Six months after entry into force (2 February 2025): the rules on prohibited AI practices apply.
- Twelve months after entry into force (2 August 2025): obligations for general-purpose AI models and the new EU and national governance bodies become applicable.
- Thirty-six months after entry into force (2 August 2027): the requirements for high-risk AI systems covered by Annex I come into effect.
While full applicability generally begins 24 months after entry into force, on 2 August 2026, this staged approach allows a phased rollout based on risk levels and preparedness.
Further Developments
The AI Act is part of a broader legislative framework that also includes the forthcoming:
- the AI Liability Directive, which will address civil liability for harm caused by AI systems, and
- the revised Product Liability Directive, which will update the EU's product liability rules to reflect the use of AI.
Together, these instruments will define the EU’s comprehensive regulatory approach to artificial intelligence in the years ahead.