AI Act: Fundamental Rights Impact Assessment (FRIA)
| Sieuwert van Otterloo |
The AI Act has introduced several new rules for the use of AI. One new rule is that organisations must complete a FRIA (Fundamental Rights Impact Assessment) before deploying a high-risk AI system. In this article we explain what fundamental rights are, which systems are considered high risk and what a FRIA must contain, and we share a FRIA template and example.
Different rules for different systems
The AI Act has different rules for different types of systems. At the highest level, it splits AI systems into three risk classes:
- Forbidden AI systems. These are systems for purposes that are not aligned with EU values. Examples of forbidden systems or applications are social scoring, emotion recognition and systems that manipulate human behaviour. Since these applications are forbidden, there are no further rules or obligations for these systems: you do not have to make a FRIA or conformity assessment for a forbidden application.
- High-risk AI systems. A high-risk AI system is any system that may create risks to the health, safety or fundamental rights of natural persons, such as the right to privacy and the right not to be discriminated against. Examples of high-risk systems are systems that control equipment or vehicles, make healthcare decisions, or make decisions about selecting, hiring or blocking people. High-risk systems are allowed, but there are many additional rules for providers and deployers of these systems. These are listed in Articles 8 to 28 of the AI Act. You will need an AI Act specialist in your team if you are working on such a system.
- Non-high-risk AI systems. There are many opportunities to use AI in a small or supporting role where the use of AI does not lead to risks to health, safety or fundamental rights. This is for instance the case where AI just helps people do their work better or solves supporting tasks, e.g. sharpening images or suggesting illustrations. Articles 8 to 28 of the AI Act do not apply to these systems.
It is important to note that the AI Act has additional requirements for two other types of systems: general-purpose AI models (think Large Language Models) and generative AI and chatbots. These rules also have to be applied, even if the system is not high risk.
- Article 50 of the AI Act gives additional transparency requirements for generative AI and chatbots. The idea is that you must inform people that they are talking to a bot, and generated images and video must be clearly marked as such.
- Articles 51 to 56 give additional requirements for general-purpose AI models, such as having documentation about limitations, complying with copyright law and explaining what content was used for training.
Requirements for the Fundamental Rights Impact Assessment
When a company is planning to deploy a high-risk AI system, it must create documentation that shows that care was taken to make sure the system does not cause harm. This is called a Fundamental Rights Impact Assessment, or FRIA. A FRIA must contain the following (a minimal structure sketch in code follows the list):
- A description of the process in which the AI system will be used and its purpose, e.g. employee selection, fraud detection, quality control, medical advice, etc.
- The period of time when the system will be used, e.g. within a specific project or from a certain date. For example, you can state that you will use the system as of January 2027, or you can state that the AI system will only be used for the duration of a specific project.
- The categories of people likely to be affected. These are not the employees that operate the system, but the people about whom decisions are taken, e.g. candidates, customers, drivers, patients, etc. The GDPR calls these people data subjects; the AI Act calls them affected people.
- Specific risks of harm to these affected people. You should go through a list of fundamental rights, such as privacy, transparency and non-discrimination, and check whether any right is potentially removed or reduced by the system. Make sure you consider risks to several different rights.
- A description of the implementation of human oversight. You must make sure that trained operators oversee the use of the system and can spot and correct mistakes. You should explain what information they see and how they can intervene.
- Measures taken to reduce the risks you have listed. These can be things like improved design, data quality control, testing for fairness and bias, explanations and a complaints process for affected people, and information security measures.
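To make this concrete, the sketch below captures the required FRIA elements in a small Python structure. This is a minimal illustration, not a prescribed format: the field names and example values are our own suggestions, not terms from the AI Act.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Risk:
    """One identified risk to a fundamental right, with its mitigations."""
    affected_right: str      # e.g. "non-discrimination", "privacy"
    description: str         # how the AI system could reduce this right
    mitigations: List[str]   # measures taken to reduce the risk

@dataclass
class FRIA:
    """Minimal structure covering the elements listed above."""
    process_description: str    # process and purpose, e.g. "employee selection"
    period_of_use: str          # e.g. "from January 2027" or "for project X"
    affected_people: List[str]  # e.g. ["job candidates"]
    risks: List[Risk]           # specific risks of harm per fundamental right
    human_oversight: str        # who oversees the system and how they intervene
    complaints_process: str     # how affected people can contest decisions

# Illustrative example, loosely based on the hiring case discussed later
example = FRIA(
    process_description="Pre-selection of job candidates based on CV screening",
    period_of_use="From January 2027, for all vacancies in the EU",
    affected_people=["job candidates"],
    risks=[Risk(
        affected_right="non-discrimination",
        description="Model may rank candidates lower based on gender-correlated features",
        mitigations=["bias testing before deployment", "periodic fairness audits"],
    )],
    human_oversight="Recruiters review every ranking and can override it",
    complaints_process="Candidates can request a human review of the decision",
)
```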
The FRIA is very similar to, and seems inspired by, the Data Protection Impact Assessment (DPIA) of the GDPR. It is a separate requirement, and you may have to do both. The FRIA is intended as a design document that you create early in the project. The idea is that considering all these risks will lead to better ideas on how to design, implement and deploy the system.
What are fundamental rights
The EU wants companies to carefully consider all human rights when they are implementing AI. The human rights they are thinking of are the rights included in the Universal Declaration of Human Rights that was proclaimed by the UN in 1948. The Universal Declaration contains for instance the following important rights, which could be reduced due to the improper use of AI technology:
- Right to privacy (article 12)
- Freedom of thought (article 18)
- Freedom of opinion (article 19)
- Equal pay without discrimination (article 23)
- Right to freely participate in the cultural life of the community (article 27)
So in theory, for the FRIA, you should evaluate all these rights carefully and make sure your application does not have any impact on them. In practice this is not workable, since the impact of AI on fundamental rights is often indirect. It is better to take a list of more concrete principles that are easier to consider and apply, and use these in the FRIA. The European Union formed a High-Level Expert Group (HLEG) of AI experts, which published a document called the Ethics Guidelines for Trustworthy AI. This is an important document that you should read when making a FRIA; we also use it in our human-centred AI summer school. The document contains seven principles and 23 more detailed concerns that show exactly what is included in each principle. In your FRIA you should discuss each principle, explain, using the process description and known AI risks, whether there is a risk of infringement of fundamental rights, and explain what you are doing to control or mitigate that risk. Below are the principles:
- Human agency and oversight.
- Technical robustness and safety.
- Privacy and data governance.
- Transparency.
- Diversity, non-discrimination and fairness.
- Societal and environmental well-being.
- Accountability. To ensure accountability, make sure decisions, underlying scores and inputs used are logged so they can be audited later (see the logging sketch after this list).
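To make the accountability principle concrete, the sketch below shows one possible way to log automated decisions together with their inputs and scores, so they can be reconstructed during an audit. This is an illustrative sketch; the function and field names are our own, not a format required by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger: one JSON line per automated decision.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decisions_audit.jsonl"))

def log_decision(case_id: str, inputs: dict, score: float,
                 decision: str, operator: str) -> None:
    """Record an automated decision together with the data it was based on."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,    # identifier of the affected person's case
        "inputs": inputs,      # features the model actually used
        "score": score,        # underlying model score
        "decision": decision,  # e.g. "invite", "reject"
        "operator": operator,  # human overseer responsible for this decision
    }
    audit_log.info(json.dumps(record))

# Example usage:
log_decision("case-0042", {"years_experience": 7, "education": "MSc"},
             0.83, "invite", "recruiter_jdoe")
```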
FRIA template and example
We created a FRIA template that you can use when doing a FRIA. The template is very long because we pre-filled it using an example case. The case is called algorithmic hiring and is based on using a machine learning algorithm to decide which candidates to hire for a job. The case is based on our paper 'Fairness metrics explained'.
You can find the template and example here:
These are also linked on our overview page with all templates.
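As an illustration of the kind of bias testing used in the algorithmic hiring case, the sketch below compares selection rates between two groups, a simple fairness check often called the disparate impact ratio. It assumes you have the model's decisions and a group label for each candidate in a test set; the function names are our own.

```python
from typing import List

def selection_rate(decisions: List[int]) -> float:
    """Fraction of positive (e.g. 'invite to interview') decisions."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(decisions: List[int], groups: List[str],
                           group_a: str, group_b: str) -> float:
    """Ratio of selection rates between two groups; 1.0 means equal rates."""
    rate_a = selection_rate([d for d, g in zip(decisions, groups) if g == group_a])
    rate_b = selection_rate([d for d, g in zip(decisions, groups) if g == group_b])
    return rate_a / rate_b if rate_b else float("inf")

# Example: decisions (1 = invited) per candidate and the group each belongs to
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(decisions, groups, "A", "B")
print(f"Disparate impact ratio: {ratio:.2f}")  # values far from 1.0 suggest unequal treatment
```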
Other AI Act requirements
Completing a FRIA for high risk AI systems is just one of the AI Act requirements. You also need to take the following steps in your organisation.
- Having an AI risk management system. This is actually very similar to ISO 27001 and you probably want to combine some structures, e.g. the risk register, internal audit and management review (see the risk register sketch after this list). There is an ISO standard for these management systems: ISO 42001.
- Organize AI literacy training. You need to make an AI policy with common sense rules and then make people aware of the possible mistakes and risks.
- Select and train people for human supervision. Make sure the system provides the right information and people that do supervision are aware of automation bias, know how to spot mistakes and how to make corrections.
- Complete a conformity assessment when you provide a high-risk AI system. When bringing a high-risk AI system to the market in the EU, the provider that offers it to other companies must complete this assessment and provide the documentation.
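As a small illustration of the overlap with ISO-style management systems mentioned in the first point above, an AI risk can be recorded in a risk register entry in the same way as an information security risk. The fields below are our own suggestion, not a requirement from ISO 42001.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row in an AI risk register, reusable in ISO 27001 / ISO 42001 style processes."""
    risk_id: str       # e.g. "AI-003"
    description: str   # what could go wrong and for whom
    likelihood: int    # e.g. 1 (rare) to 5 (almost certain)
    impact: int        # e.g. 1 (negligible) to 5 (severe)
    owner: str         # person responsible for the mitigation
    mitigation: str    # control that reduces likelihood or impact
    review_date: str   # next internal audit or management review

# Illustrative entry based on the hiring example
example_entry = AIRiskEntry(
    risk_id="AI-003",
    description="Hiring model systematically ranks older candidates lower",
    likelihood=3,
    impact=4,
    owner="HR data science lead",
    mitigation="Quarterly fairness testing and recruiter override procedure",
    review_date="2027-06-01",
)
```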
Luckily these requirements are related to the FRIA. If you do the FRIA well, it will contain the material you will need for the risk management system, the AI literacy training and the human supervision design and training. So do a good FRIA first and you will have everything you need for the AI Act.
Image: Igor Omilaev via Unsplash
Dr. Sieuwert van Otterloo is a court-certified IT expert with interests in agile, security, software research and IT contracts. He is also an ISO 27001 and NEN 7510 auditor and AI researcher.


