The AI Act Risk Management System
Pieter 't Hoen
In this article we explain what an AI Risk Management System (RMS) is and why the AI Act requires one for high-risk AI systems. We outline the components of the AI RMS and show how you can leverage your existing ISO 27001 work for ISO 42001 compliance.
The AI Act: When do you need a Risk Management System?
The AI Act is the European Union's comprehensive law regulating artificial intelligence. It aims to ensure that AI systems used in the EU are safe, transparent, and respect fundamental rights. The focus of the AI Act is on high-risk systems (such as those used in healthcare, employment, law enforcement, or critical infrastructure), which must meet detailed compliance requirements before entering the EU market. Article 9 (Risk Management System) of the EU Artificial Intelligence Act requires that a risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
The AI Act: What is a High-Risk AI System?
A high-risk AI system is any system that may pose risks to the health, safety or fundamental rights of natural persons, such as the right to privacy and the right not to be discriminated against. Examples of high-risk systems are systems that control vehicle equipment, make healthcare decisions, or make decisions about selecting, hiring or blocking people. High-risk systems are allowed, but there are many additional rules for providers and deployers of these systems. These are listed in Articles 8 to 28 of the AI Act. You will need an AI Act specialist in your team if you are working on such a system.
The AI Act: the Risk Management System
A risk management system (RMS) is a structured framework that organizations use to identify, assess, control, and monitor risks that could affect their objectives. Its purpose is not to eliminate all risk, but to understand the risks and manage them consciously and systematically. The AI Act requires that an RMS is in place, specifically tailored to managing high-risk AI.
Risk is managed throughout the entire lifecycle of a high-risk AI system, with the following steps repeated:
- the identification and evaluation of risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used;
- the estimation and evaluation of the risks of the high-risk AI system under reasonably foreseeable misuse;
- the evaluation of other risks arising, based on the analysis of the post-market monitoring system referred to in Article 72, i.e. monitoring after go-live;
- the adoption of appropriate and targeted risk management measures designed to address the risks identified above.
The AI Risk Management System hence aims to evaluate a broad spectrum of sources of risk to health, safety and fundamental rights, and to mitigate these. The risks to be considered, however, are only those which can reasonably be mitigated or eliminated through the development or design of the high-risk AI system, or through the provision of adequate technical information. See also https://ictinstitute.nl/ai-risk-management-checklist/ for more inspirational reading on AI risk.
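To make these lifecycle steps concrete, here is a minimal sketch in Python of what a single risk register entry could capture. The names, the 1-5 scales, and the likelihood x severity scoring are our own illustration; the AI Act does not prescribe any particular format.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskSource(Enum):
    """Where the risk was identified, mirroring the lifecycle steps above."""
    INTENDED_USE = "intended use"              # risks when used as intended
    FORESEEABLE_MISUSE = "foreseeable misuse"  # reasonably foreseeable misuse
    POST_MARKET = "post-market monitoring"     # Article 72, after go-live

@dataclass
class RiskEntry:
    """One row in a (hypothetical) AI risk register."""
    risk_id: str
    description: str              # e.g. "model disadvantages an age group"
    affected_rights: list[str]    # health, safety or fundamental rights at stake
    source: RiskSource
    likelihood: int               # illustrative scale: 1 (rare) .. 5 (almost certain)
    severity: int                 # illustrative scale: 1 (negligible) .. 5 (critical)
    measures: list[str] = field(default_factory=list)  # targeted mitigation measures

    @property
    def score(self) -> int:
        """A common simple scheme: risk score = likelihood x severity."""
        return self.likelihood * self.severity

# Example entry:
entry = RiskEntry(
    risk_id="R-001",
    description="CV screening model disadvantages older applicants",
    affected_rights=["non-discrimination"],
    source=RiskSource.INTENDED_USE,
    likelihood=3,
    severity=4,
    measures=["bias testing per release", "human review of rejections"],
)
print(entry.score)  # 12 -> compare against an acceptance threshold
```

A register of such entries gives the identification, evaluation and mitigation steps a concrete, auditable shape.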
The AI Act RMS in practice
In practice, an AI RMS must meet certain minimal requirements:
- a risk management cycle must be in place;
- at least once a year, the current risk set must be evaluated, and existing mitigation measures must be reconsidered if they are not deemed sufficient.
The Plan–Do–Check–Act (PDCA) cycle is the continuous improvement model used in ISO/IEC 27001 to manage and improve an organization's Information Security Management System (ISMS) and its risk management activities. This well-known cycle also fits well for managing AI risk.
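As a small illustration of the annual "Check" step, tooling could flag when the risk set is due for re-evaluation. The helper below is hypothetical and not part of any standard:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # AI Act minimum: evaluate at least once a year

def review_due(last_review: date, today: date | None = None) -> bool:
    """PDCA 'Check': is a re-evaluation of the current risk set due?"""
    today = today or date.today()
    return today - last_review >= REVIEW_INTERVAL

# Example: the last review was about 14 months ago, so a new cycle must be planned.
assert review_due(last_review=date.today() - timedelta(days=425))
```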
As for any risk management system, auditable proof must be available to judge effectiveness and to improve (AI) risk mitigation. This can be achieved with a limited number of deliverables:
- A risk register;
- Mitigation plans for reducing unacceptable risk;
- Evidence of risk reduction, for example through repeated tests of the AI use that quantify the risk (see the sketch after this list);
- An annual plan for reviewing the risks and executing the chosen measures (planned PDCA cycles).
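To illustrate the evidence deliverable, repeated tests can quantify risk as an observed failure rate before and after a mitigation measure. The scenario and numbers below are hypothetical:

```python
def failure_rate(test_outcomes: list[bool]) -> float:
    """Observed rate of harmful outcomes over repeated tests of the AI use."""
    return sum(test_outcomes) / len(test_outcomes)

# Hypothetical test evidence for the risk register: quantify the risk before
# and after a mitigation measure, e.g. a stricter decision threshold.
before = [True] * 12 + [False] * 88   # 12 harmful outcomes in 100 test runs
after  = [True] * 2  + [False] * 98   # 2 harmful outcomes in 100 test runs

print(f"risk before: {failure_rate(before):.0%}, after: {failure_rate(after):.0%}")
# -> risk before: 12%, after: 2%. Archive such runs as auditable evidence.
```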
Leveraging your ISO 27001 work for ISO 42001
A fully implemented ISO 27001 Information Security Management System (ISMS) is focused on information security and privacy. Crucially, its building blocks overlap with those of the AI RMS: both are aimed at identifying, quantifying, and mitigating risk. Experienced practitioners will recognize the similarities and are used to planning and producing evidence of risk reduction through effective measures.
There is hence an opportunity to leverage processes and certifications already available in your organization. AI is already on the radar of most companies, and is included as an opportunity/risk in their planning. The key is to scope the risks in the specific context of the AI Act, as discussed above.
With the right focus on risks, the jump to ISO 42001 (the Artificial Intelligence Management System, AIMS) becomes small. ISO/IEC 42001 is a management system standard that helps organizations govern AI responsibly by managing risks, ensuring transparency, and establishing accountability for AI systems. Both standards use the same management system structure (an annex with measures, the Harmonized Structure / PDCA cycle), so they integrate well. ISO 27001 directly supports ISO 42001, since AI systems rely heavily on secure data and infrastructure.
Extending ISO 27001 would require the ISMS, as discussed, to address AI-specific risks. Additionally, asset management and vendor management would need to cover the use of AI (vendor) systems; this is largely already addressed in ISO 27001, as AI vendors are vendors too, albeit with a potentially more invasive twist.
When dotting the i's and crossing the t's: experts already familiar with the DPIA can take a look at what a Fundamental Rights Impact Assessment (FRIA) entails at https://ictinstitute.nl/ai-act-fundamental-rights-impact-assessment-fria/, and see https://ictinstitute.nl/a-checklist-for-auditing-ai-systems/ for more inspirational reading.
Image: Loic Leray on Unsplash


