The AI Act has been published today! What do you need to know about it?
Today, on July 12, 2024, Regulation (EU) 2024/1689[1] laying down harmonised rules on artificial intelligence (the “AI Act”) was finally published in the EU Official Journal. The AI Act enters into force on August 1st, 2024, and will become applicable in stages.
This is the world’s first comprehensive regulatory framework for AI systems (“AIS”), and it sets the stage for future regulations both within and beyond the EU.
What do you need to know about it?
Objectives of the AI Act
The development of AI systems allows users to automate certain tasks and gain efficiency. Task automation and data analysis represent progress. However, this progress comes with potential risks to fundamental rights, whether through cognitive biases, errors, discrimination or impacts on data privacy.
Against this background, the regulation aims to strengthen trust in AI and to control its impact on society, businesses and individuals, while creating an environment favourable to research and development, to the economy and to innovation.
Content of the AI Act
The AI Act follows a risk-based approach by classifying AI systems into four levels, depending on the potential risk. Each level is subject to a particular set of obligations.
The risks are classified as:
(a) Unacceptable: the AI Act prohibits a limited set of practices that are deemed contrary to the values and fundamental rights of the EU (e.g.: generalized social scoring, predictive policing targeting individuals, recognition of emotions in the workplace and in educational institutions);
(b) High: AIS that may affect the safety of individuals or their fundamental rights, which justifies subjecting their development to enhanced requirements such as conformity assessments, technical documentation and risk management mechanisms (e.g.: biometric systems, or systems used in recruitment or for law enforcement purposes);
(c) Limited: the AI Act imposes specific transparency obligations on these AIS, in particular where there is a clear risk of manipulation (e.g.: use of chatbots or systems generating content); or
(d) Minimal: the AI Act imposes no specific obligations on this category of AIS, which, according to the European Commission, represents the vast majority of AI systems currently in use or likely to be used in the EU.
Scope of the AI Act
The AI Act will apply to operators (providers, deployers and users) of AIS developed within the European Union, but also to any operator offering or using AIS on the European market, even from a non-EU country.
Obligations implemented by the AI Act
(a) for providers:
AIS providers’ obligations are modulated according to the risk level:
- Unacceptable-risk AIS are prohibited;
- High-risk AIS require compliance measures (recording of activities, security, information of users, human oversight…);
- Limited-risk AIS generate limited obligations;
- Minimal-risk AIS are not subject to particular measures.
(b) for deployers:
In particular, deployers of general-purpose AIS must inform end users when content has been generated by AI. They must also indicate whether the model was trained on copyrighted data.
(c) for users:
For organizations, these obligations include monitoring input data, keeping logs and suspending the use of the AIS in the event of non-compliance.
Sanctions: high penalties, proportionate to the level of risk
Sanctions in the event of non-compliance can reach[2]:
- (a) 35,000,000 euros; or
- (b) for legal entities, 7% of total worldwide annual turnover for the preceding financial year,
whichever is higher, as illustrated below.
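For readers who want to see the “whichever is higher” rule in figures, here is a minimal sketch in Python. It assumes a hypothetical turnover of 1 billion euros purely for illustration; only the 35,000,000-euro and 7% thresholds come from the AI Act itself, and the function name is invented for the example. This is an illustration of the arithmetic, not legal advice.

```python
# Illustrative sketch only: the "whichever is higher" cap under Article 99 of the AI Act
# for the most serious infringements. The turnover figure used below is hypothetical.

def fine_cap_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the fine: 35,000,000 euros or 7% of total worldwide annual
    turnover for the preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with 1 billion euros in turnover.
# 7% of 1,000,000,000 = 70,000,000, which exceeds 35,000,000, so the cap is 70,000,000 euros.
print(fine_cap_eur(1_000_000_000))  # 70000000.0
```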
Application schedule
This regulation will generally become fully applicable after a two-year transitional period, although certain obligations will take effect at an earlier or later time.
Its timeline for implementation is as follows:
- August 1st, 2024: the AI Act will enter into force.
- February 2nd, 2025: prohibitions on AIS presenting unacceptable risks become applicable.
- August 2nd, 2025: general-purpose AI (GPAI) models must be in compliance. The governance structure (AI Office, European Artificial Intelligence Board, national market surveillance authorities, etc.) will have to be in place.
- February 2nd, 2026: the European Commission is to adopt an implementing act laying down detailed provisions that establish a template for the post-market monitoring plan and the list of elements to be included in the plan.
- August 2nd, 2026: most of the remaining rules of the AI Act become applicable, including obligations for high-risk systems defined in Annex III (list of high-risk use cases). Member States shall also ensure that their competent authorities have established at least one operational AI regulatory sandbox at national level.
- August 2nd, 2027: obligations for high-risk systems defined in Annex I (list of EU harmonisation legislation) become applicable.
Furthermore, the entry into application will rely on “harmonised standards” at European level, which must define precisely the requirements applicable to the AIS concerned. The European Commission has therefore commissioned CEN/CENELEC (the European Committee for Standardization and the European Committee for Electrotechnical Standardization) to draft ten standards.
[1] Regulation (EU) 2024/1689 (English version).
[2] Article 99 of the AI Act.