EU law on Artificial Intelligence (AI):

10 Points on the Adopted EU AI Act

  1. Risk-Based Approach: The law categorizes AI systems based on their risk levels: unacceptable risk (banned systems), high risk, limited risk, and minimal risk. Each category has specific requirements and restrictions (EY Assets) (IBM - United States).
  2. Banned AI Applications: Certain AI systems are prohibited, including those that manipulate human behavior, score people based on social behavior, or exploit vulnerabilities like age or disability (Home) (IBM - United States).
  3. High-Risk Systems: AI systems classified as high risk must meet strict requirements, such as continuous risk management, stringent data governance, and comprehensive technical documentation. This includes systems used in areas like law enforcement, border control, and critical infrastructure (EY Assets) (IBM - United States).
  4. Transparency Requirements: AI systems must be transparent. Users must be informed when they are interacting with AI. Generative AI must label content as AI-generated, and deepfakes must be clearly marked (IBM - United States) (Stibbe).
  5. Regulatory Authorities: The regulation establishes a European AI Office within the European Commission, responsible for overseeing and enforcing the rules. This office will be supported by a scientific panel of independent experts and a stakeholder advisory forum (Home).
  6. Regulatory Sandboxes: To promote innovation, regulatory sandboxes will be created. These allow new AI technologies to be developed and tested in a controlled environment before being brought to market (Home).
  7. Penalties: Non-compliance with the regulations can result in substantial fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher (Home).
  8. Fundamental Rights Impact Assessments: Institutions such as banks and insurance companies must conduct an assessment of the impact on fundamental rights before deploying high-risk AI systems. This includes identifying risks and implementing oversight and mitigation measures (Stibbe).
  9. Responsibilities Along the Value Chain: Responsibilities for AI systems can be transferred along the value chain, meaning that importers, distributors, and other parties can be considered providers if they make significant modifications to the system or put their name on it (Stibbe).
  10. Right to Explanation: Affected individuals have the right to an explanation when decisions are made using high-risk AI systems. This is particularly relevant for algorithms used in credit scoring, where individuals can learn how decisions were made (Stibbe).

These points provide an overview of the main aspects of the EU AI Act, which aims to make the use of AI within the EU safer and more transparent.

Additional Points:

  • The Act entered into force on 1 August 2024; most of its provisions apply from 2 August 2026, with the bans on prohibited practices applying earlier, from February 2025.
  • It is the world’s first comprehensive legal framework for AI.
  • The regulation aims to create legal certainty for businesses while protecting citizens from the risks of AI technology.
