Luxembourg’s Draft Law No. 8476: Implementing the EU AI Act

Summary/Abstract

On 23 December 2024, Luxembourg submitted Draft Law No. 8476 to support the implementation of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689). The bill outlines the designation of national authorities responsible for market surveillance, conformity assessments, and cooperation with EU institutions. It also introduces regulatory sandboxes, clarifies enforcement mechanisms, and sets out applicable sanctions. Businesses developing or deploying AI systems in or from Luxembourg should assess their classification and compliance readiness, especially with the AI Act’s staged application beginning in February 2025.

Luxembourg advances implementation of the EU AI Act

On 23 December 2024, the Luxembourg government introduced Draft Law No. 8476 as part of its national strategy to implement the EU’s landmark Artificial Intelligence Act (AI Act). The bill is currently under review by the Chambre des Députés and focuses on setting up the legal and institutional mechanisms needed to enforce the EU regulation at national level.

The AI Act imposes harmonized rules for the development, marketing, and use of AI systems in the EU, relying heavily on Member States to appoint competent authorities and structure enforcement mechanisms. Draft Law No. 8476 fills this role in Luxembourg’s legal framework.

Scope and objectives

The draft law’s core objective is to provide a procedural and organizational foundation for Luxembourg’s enforcement of the AI Act. It addresses:

  • Designation of notifying authorities to accredit and monitor conformity assessment bodies (CABs);

  • Identification of market surveillance authorities based on sectoral competence;

  • Establishment of a cooperation framework for national and EU-level enforcement;

  • Creation of AI regulatory sandboxes for supervised innovation;

  • Specification of enforcement powers and sanctions.

Notifying authorities and conformity assessment

The draft law identifies the following bodies as notifying authorities:

  • Office luxembourgeois d’accréditation et de surveillance (OLAS) – general oversight and accreditation;

  • Agence luxembourgeoise des médicaments et produits de santé (ALMPS) – oversight in the medical and health sectors;

  • Commissariat du gouvernement à la protection des données (CGPD) – competent for AI systems involving personal data processed in state procedures.

These authorities are tasked with supervising conformity assessment bodies (CABs), which certify whether high-risk AI systems meet the requirements of the AI Act based on standards, documentation, and technical testing.

For high-risk systems used by law enforcement, immigration or asylum authorities, assessments will be carried out directly by the Commission nationale pour la protection des données (CNPD), reflecting the sensitivity of such use cases.

Market surveillance authorities

The bill adopts a sector-specific approach to enforcement by assigning market surveillance functions to a range of authorities based on their existing regulatory remit:

  • CNPD – Default horizontal surveillance authority and single contact point with the European Commission.

  • Commission de surveillance du secteur financier (CSSF) – Financial services and markets.

  • Commissariat aux assurances (CAA) – Insurance sector.

  • Autorité de contrôle judiciaire – Judiciary and prosecution bodies.

  • ILNAS – Products and services regulated under EU harmonisation legislation.

  • ILR – Critical infrastructure and digital services.

  • ALMPS – Healthcare and medical devices.

  • ALIA – Compliance with content transparency obligations, including AI-generated or manipulated media.

These authorities are granted investigatory and enforcement powers, including inspections, orders to correct or remove non-compliant AI systems, and—where necessary—application of penalties.

Regulatory sandboxes

The draft law introduces a legal basis for the establishment of AI regulatory sandboxes, a feature encouraged by the AI Act. These are supervised environments where businesses can test innovative AI systems in collaboration with regulators, promoting early-stage compliance and responsible development.

Each surveillance authority will be expected to administer sandboxes within its sector, enabling tailored support for participants.

Coordination and EU cooperation

To ensure alignment with EU enforcement, the CNPD is designated as the single national contact point under Article 70(2) of the AI Act. The CNPD will coordinate Luxembourg’s cooperation with the European Commission, other Member State authorities, and relevant EU bodies such as the AI Office.

The draft law also contains provisions for structured inter-agency cooperation at national level and anticipates Union safeguard mechanisms, which may be triggered for serious cross-border compliance issues. In such cases, enforcement measures taken in Luxembourg could be escalated to EU level, potentially resulting in coordinated market restrictions.

Sanctions and enforcement powers

In line with the AI Act, the bill outlines a graduated system of sanctions:

  • Up to €35 million or 7% of global turnover for prohibited AI practices (Article 5 AI Act);

  • Up to €15 million or 3% for violations involving high-risk AI systems;

  • Up to €7.5 million or 1% for providing false or misleading information.

Fines are subject to proportionality rules for SMEs and startups. Competent authorities may also issue warnings, orders, and public notices. Decisions may be appealed before the Administrative Court of Luxembourg.
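The "up to €X or Y% of turnover" caps operate as a higher-of rule for undertakings under Article 99 of the AI Act. A minimal sketch of that calculation (the function name and the turnover figure are illustrative, not taken from the draft law):

```python
def max_fine(cap_eur: float, pct: float, worldwide_turnover_eur: float) -> float:
    """Upper bound of an AI Act administrative fine for an undertaking:
    the higher of a fixed cap or a percentage of total worldwide annual
    turnover for the preceding financial year (Art. 99 AI Act)."""
    return max(cap_eur, pct * worldwide_turnover_eur)

# Prohibited-practice tier (up to EUR 35m or 7% of turnover) for a firm
# with EUR 1bn worldwide turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For SMEs and startups, Article 99(6) inverts the rule: the fine is capped at the *lower* of the two amounts, which is the proportionality principle the draft law reflects.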

Strategic implications for businesses

As implementation of the AI Act progresses, companies developing or deploying AI systems within Luxembourg or across borders should begin preparing for compliance. Immediate steps may include:

  • Classifying systems under the AI Act’s risk tiers (e.g. high-risk under Annex III);

  • Reviewing documentation and governance to ensure transparency, human oversight, and post-market monitoring;

  • Assessing contractual frameworks with deployers, importers, and distributors;

  • Engaging with potential CABs for early-stage guidance;

  • Preparing for sandbox participation in case of novel or complex AI applications;

  • Aligning GDPR compliance, particularly in cases involving personal data or AI-generated content.

Conclusion

Draft Law No. 8476 provides Luxembourg’s legal backbone for enforcing the AI Act. It takes a structured, competence-based approach, emphasizing coordination between national agencies and with the European Commission. The bill reflects Luxembourg’s dual commitment: encouraging trustworthy AI innovation while ensuring robust enforcement and protection of fundamental rights.

Businesses active in Luxembourg’s AI ecosystem should view this development as both a compliance imperative and a strategic opportunity to engage proactively with regulators. The first key provisions of the AI Act, the Article 5 prohibitions, apply from 2 February 2025, six months after its entry into force.


EU finalizes AI Act: A landmark regulation for Artificial Intelligence

On 21 May 2024, the Council of the European Union gave its final approval to the much-anticipated Artificial Intelligence regulation (the “AI Act”), establishing harmonized rules for the development, placing on the market, and use of AI systems within the European Union (EU). This marks the EU’s first significant effort to regulate AI technologies, reflecting their increasing importance in modern economies.

What is the AI Act?

The AI Act provides a comprehensive legal framework for the regulation of AI systems, based on the risks they pose to fundamental rights, safety, and public health. It aims to strike a balance between fostering innovation and ensuring adequate oversight.

After nearly three years of legislative negotiations, the Act has substantially expanded from its original proposal. One of the most notable additions is the inclusion of general-purpose AI models as a newly regulated category—underlining the EU’s focus on AI as a strategic area of concern.

Highlights of the AI Act

Risk-based categorization of AI systems

The AI Act introduces a tiered framework, categorizing AI systems into four groups based on the risks they present:

  1. Prohibited AI practices (Art. 5)
    These include systems using manipulative or exploitative techniques that could lead to harm or discrimination. Such practices are banned across the EU, subject only to narrow, strictly conditioned exceptions (notably for certain uses of real-time remote biometric identification by law enforcement). Examples include social scoring and certain types of biometric surveillance.

  2. High-risk AI systems (Chapter III, incl. Articles 9 and 25)
    High-risk AI covers applications in regulated sectors such as medical devices, civil aviation, vehicles, biometrics, and critical infrastructure. These systems are identified through Annex I and Annex III, and are subject to strict compliance obligations for providers, importers, distributors, and deployers.

    • Under Article 9, a risk management system must be established, implemented, documented, and maintained.

    • Article 25 allocates responsibilities along the AI value chain, including the conditions under which distributors, importers, deployers, or other third parties are treated as providers and assume the corresponding obligations.

    • If any actor in the value chain has reason to believe a high-risk AI system is non-compliant, it must take corrective action and, where necessary, suspend making the system available.

  3. Other AI systems
    These are AI systems that interact with individuals, such as chatbots or AI-generated content tools. The Act imposes transparency obligations, including Article 50, which requires users to be informed that content is AI-generated or manipulated.

  4. General-purpose AI models (GPAI)
    GPAI refers to models capable of powering a broad range of applications. These are subject to lighter obligations unless they qualify as general-purpose AI models with systemic risks—i.e., models with high-impact capabilities, often trained using exceptionally large computational resources.
    When systemic risk applies, additional requirements include risk mitigation policies, adversarial testing, and model evaluation obligations.

Governance and Enforcement (Chapter VII)

The Act establishes a multi-level governance structure:

  • At European level:

    • An AI Office within the European Commission to oversee enforcement.

    • The European Artificial Intelligence Board, with representatives from each Member State, to advise and coordinate.

    • An Advisory Forum offering technical expertise.

    • A scientific panel of independent experts to support enforcement.

  • At national level:
    Each Member State must designate at least one notifying authority and one market surveillance authority to oversee and enforce compliance at the domestic level.

Luxembourg’s Regulatory Sandbox

On 14 June 2024, Luxembourg’s CNPD (Commission Nationale pour la Protection des Données) officially launched a Regulatory Sandbox for AI. This initiative allows companies registered in Luxembourg to test their AI systems in a collaborative and supervised environment, with a focus on GDPR compliance.

Upcoming Steps

The AI Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024, in accordance with Article 113. Its provisions will be implemented in three stages:

  • Six months after entry into force:
    The rules on prohibited AI practices will apply.

  • Twelve months after entry into force:
    Obligations related to general-purpose AI and the establishment of new EU and national governance bodies will become applicable.

  • Thirty-six months after entry into force:
    The more complex requirements concerning high-risk AI systems will come into effect.

While the Act generally applies from 2 August 2026, 24 months after entry into force, these staged measures allow for a phased rollout based on risk levels and preparedness.
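Mapping the stages above onto the concrete application dates set out in Article 113 (with entry into force on 1 August 2024), a small sketch for tracking the milestones (the dictionary structure and labels are illustrative):

```python
from datetime import date

# Application dates as stated in Art. 113 AI Act; the offsets from
# entry into force (1 August 2024) are shown in each label.
MILESTONES = {
    "prohibited practices (6 months)": date(2025, 2, 2),
    "GPAI rules and governance bodies (12 months)": date(2025, 8, 2),
    "general applicability (24 months)": date(2026, 8, 2),
    "remaining high-risk requirements (36 months)": date(2027, 8, 2),
}

for label, d in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
    print(f"{d.isoformat()}  {label}")
```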

Further Developments

The AI Act is part of a broader legislative framework that also includes:

  • the proposed AI Liability Directive, which would address civil liability for harm caused by AI systems, and

  • the revised Product Liability Directive ((EU) 2024/2853), adopted in late 2024, which updates EU product liability rules to cover software and AI.

Together, these instruments will define the EU’s comprehensive regulatory approach to artificial intelligence in the years ahead.

