
Regulation

Update 30.11.2025

Discover all the key points on digital regulation in Public Administration.

AI Act

European AI regulation

The European AI Regulation (AI Act), adopted in 2024, sets out the first horizontal legal framework for developing, placing on the market, and using AI systems across the European Union.

Objectives

Ensuring that AI in Europe is safe, transparent, ethical, and respects fundamental rights, while fostering innovation and enhancing Europe’s competitiveness in AI.

AI Act

Risk-Based Model

The regulation takes a proportionate, risk-based approach: obligations scale with the level of risk that a given AI application may pose to citizens' rights or to society.
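As a rough illustration only (not the Regulation's text), the tiered logic can be sketched as a mapping from the Act's four risk tiers to a simplified summary of the obligations each one carries; the obligation descriptions below are paraphrases, not legal wording.

```python
# Illustrative sketch of the AI Act's risk-based model.
# Tier names follow the Regulation; obligation summaries are
# simplified paraphrases for illustration, not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practices (e.g. social scoring)"
    HIGH = "conformity assessment, registration, human oversight"
    LIMITED = "transparency duties (e.g. disclosing chatbots, deepfakes)"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

def obligations(tier: RiskTier) -> str:
    """Return the simplified obligation summary for a risk tier."""
    return tier.value

print(obligations(RiskTier.HIGH))
```

The point of the sketch is the proportionality: the higher the tier, the heavier the compliance burden, with the top tier banned outright.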

AI Act

Conformity Assessment

The implementation of the European Artificial Intelligence Regulation will be supported by a structured set of harmonized technical standards and certification mechanisms, essential to ensure that AI systems comply with established legal requirements. These tools promote interoperability, trust, and security in the development and use of artificial intelligence across the European Union.

Standards

Certification

AI Act

Governance Model

The AI Act establishes the AI Board as a key element of its governance structure. To carry out its tasks effectively, the Board operates through a three-tier structure.

AI Act

Implementation roadmap

The regulation entered into force on 1 August 2024, with a phased application schedule.

1 August 2024

Entry into force of the regulation

2 November 2024

Publication of the list of entities overseeing the protection of fundamental rights under Article 77 of the AI Act.

From 2 February 2025

Application of Chapters I and II.

By 2 May 2025

Codes of Practice for general-purpose AI (GPAI) models to be ready (Art. 56(9)).

By 2 August 2025

GPAI models must comply with the Regulation (Chapter V); Member States must identify national competent authorities for notification and market surveillance (Chapter VII); fines and penalties must be defined (recital 179).

From 2 August 2025

Every two years, Member States shall report to the Commission on the status of the financial and human resources of the national competent authorities, including an assessment of their adequacy. The Commission forwards this information to the Board for consideration and possible recommendations (Art. 70(2)).

By 2 February 2026

Deadline for the Commission to provide guidelines on the practical application of Article 6 for high-risk AI systems.

From 2 August 2026

Application of the remaining parts of the regulation, with the exception of Article 6(1).

From 2 August 2027

Application of Article 6(1) on high-risk AI systems (Art. 113). Providers of GPAI models placed on the market before 2 August 2025 must have taken the necessary measures to comply with the obligations laid down in the Regulation by this date (Art. 111(3)). AI systems that are components of the large-scale IT systems listed in Annex X and that were placed on the market or put into service before this date must be brought into compliance with the Regulation by 31 December 2030 (Art. 111(1)).
