Fundamental Rights Impact Assessment for High-Risk AI Systems: Article 27 | EU AI Act
Article 27 of the AI Act sets out the obligation for deployers of certain high-risk AI systems to carry out a fundamental rights impact assessment before putting those systems into use.
Deployers must conduct a thorough assessment of the impact that the use of such high-risk AI systems may have on fundamental rights. The assessment must consist of the following:

- a description of the deployer's processes in which the high-risk AI system will be used, in line with its intended purpose;
- a description of the period of time within which, and the frequency with which, the system is intended to be used;
- the categories of natural persons and groups likely to be affected by its use in the specific context;
- the specific risks of harm likely to have an impact on those categories of persons or groups;
- a description of the implementation of human oversight measures, according to the instructions for use;
- the measures to be taken where those risks materialise, including arrangements for internal governance and complaint mechanisms.
These obligations apply to the first use of a high-risk AI system. The deployer may rely on a previously conducted fundamental rights impact assessment only if none of the relevant considerations or factors have changed since that assessment was conducted; if any have changed, the assessment must be updated.
The deployer must notify the market surveillance authority of the results of its assessment. In the cases referred to in Article 46, deployers may be exempt from this notification obligation.
Where any of these obligations are already met through a data protection impact assessment conducted under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment complements that data protection impact assessment rather than duplicating it.
The AI Office must develop and make available a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with these assessment and reporting obligations in a simplified manner.