Fundamental Rights Impact Assessment for High-Risk AI Systems: Article 27 | EU AI Act

Article 27 of the AI Act sets out when and how certain deployers of high-risk AI systems must carry out a fundamental rights impact assessment before putting such a system into use.

Deployers must conduct a thorough assessment of the impact that the use of such a high-risk AI system may have on fundamental rights. The assessment must cover each of the following elements (a structured sketch of these elements follows the list):

  • A description of the deployer’s processes in which the high-risk AI system is to be used;
  • A description of the period of time within which, and the frequency with which, the high-risk AI system is to be used;
  • The categories of natural persons and groups likely to be affected by its use;
  • The specific risks of harm likely to affect each of those categories of natural persons and groups;
  • A description of the implementation of the human oversight measures;
  • The measures to be taken should any of those risks materialise.
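For deployers tracking these elements internally, they map naturally onto a structured record. The following is a minimal, hypothetical Python sketch of such a record; the class and field names are illustrative assumptions, not terminology from the Act or from any official AI Office template.

```python
from dataclasses import dataclass, field

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative record of the elements required by Article 27(1).

    Names are assumptions for this sketch, not prescribed by the Act.
    """
    # Deployer's processes in which the high-risk AI system is to be used
    deployment_processes: str
    # Period of time within which, and frequency with which, it is used
    period_of_use: str
    frequency_of_use: str
    # Categories of natural persons and groups likely to be affected
    affected_groups: list[str] = field(default_factory=list)
    # Specific risks of harm per affected category (category -> risks)
    risks_of_harm: dict[str, list[str]] = field(default_factory=dict)
    # How the human oversight measures are implemented
    human_oversight_measures: str = ""
    # Measures to be taken should any of the risks materialise
    mitigation_measures: list[str] = field(default_factory=list)

# Hypothetical usage with invented example values:
fria = FundamentalRightsImpactAssessment(
    deployment_processes="Automated triage of benefit applications",
    period_of_use="12 months from first deployment",
    frequency_of_use="Continuous, roughly 500 decisions per day",
    affected_groups=["benefit applicants"],
    risks_of_harm={"benefit applicants": ["wrongful denial of benefits"]},
    human_oversight_measures="Case worker reviews every adverse decision",
    mitigation_measures=["appeal and complaint mechanism"],
)
```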

These obligations apply to the first use of the high-risk AI system. In similar cases, the deployer may rely on a previously conducted fundamental rights impact assessment, but must update it if any of the elements listed above have changed or are no longer up to date.

The deployer must notify the market surveillance authority of the results of its assessment. In the case referred to in Article 46, deployers may be exempted from this notification obligation.

A separate fundamental rights impact assessment does not need to be conducted where its obligations have already been met through a data protection impact assessment carried out pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680; in that case, the fundamental rights impact assessment complements the existing one.

The AI Office must develop and make available a template questionnaire to help deployers comply with their obligations under this Article in a simplified manner.
