AI audits
By Sieuwert van Otterloo | Artificial Intelligence
Companies making or using high-risk AI systems must carry out AI audits and assessments to comply with the AI Act. We perform these audits in a practical, constructive manner, based on our technical and legal experience.
Why do an AI audit or assessment
An audit or assessment is an independent review of an AI system, resulting in a report. Under the AI Act and the GDPR, such an audit or assessment is legally required before one can offer or use a high-risk AI system. A good audit or assessment also helps reduce practical risks, such as treating people unfairly or leaking data.
When done properly, an audit or assessment is a positive experience for everyone involved: it is a structured review of the AI solution and its context, based on a list of relevant questions and requirements. It results in a readable report that helps in deciding whether the AI system can be used safely. The audit report is part of the required documentation that can be requested by users of the system or by independent supervisors.
Different types of audits and assessments
The AI Act has multiple articles related to audits, each with slightly different criteria. If there are no previous audit or assessment results, one can do a combined audit to fulfil all requirements. If there are already some results, one needs to check what still needs to be done. The following are the most important audit, assessment and documentation requirements:
- High risk check. Most audit requirements only apply to so-called high-risk systems. It is recommended to check, in the requirements or design phase of any system, whether the system uses personal data, contains AI and qualifies as a high-risk AI system (a minimal sketch of such a check follows this list). The outcome should be documented even if it is negative, so that you can demonstrate that further audits are not needed.
- Data Protection Impact Assessment (DPIA). If an IT or AI system uses additional personal data or uses personal data in a new way, a Data Protection Impact Assessment is required. This impact assessment focuses on privacy and data protection risks. The assessment must be done by the organisation using the AI system, and once completed it must be sent to the internal data protection officer for advice. We have a separate DPIA template with explanation, and we explain the DPIA in our GDPR courses.
- Fundamental Rights Impact Assessment (FRIA). Deployers of high-risk AI systems must carry out a FRIA before the system can be deployed. In this assessment you look at bias and discrimination and at the risk and impact of errors, but also at logging, human oversight and the process for corrections.
- Conformity assessment. Article 43 of the AI Act requires providers to produce a conformity assessment report. Depending on the type of product or service, the assessment can be carried out by an external notified body or produced internally. When based on internal control, the conformity assessment must include a review of the quality management system, the technical documentation, and the design and development process.
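As a rough illustration of the high risk check mentioned above, here is a minimal Python sketch of how the outcome of such a triage could be recorded and mapped to follow-up assessments. The class name, the fields and the mapping from answers to assessments are our own simplification for illustration purposes, not terminology or logic prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration: a documented record of the high risk check.
# Field names and the derivation rules below are assumptions, not legal advice.

@dataclass
class HighRiskCheck:
    system_name: str
    uses_personal_data: bool
    contains_ai: bool
    is_high_risk: bool  # e.g. the system matches a high-risk use case
    checked_on: date = field(default_factory=date.today)

    def required_assessments(self) -> list[str]:
        """Derive which follow-up assessments still need to be done."""
        needed = []
        if self.uses_personal_data:
            needed.append("DPIA")
        if self.contains_ai and self.is_high_risk:
            needed += ["FRIA", "conformity assessment"]
        return needed

# Document the outcome even when it is negative:
check = HighRiskCheck("cv-screening-tool", uses_personal_data=True,
                      contains_ai=True, is_high_risk=True)
print(check.required_assessments())  # ['DPIA', 'FRIA', 'conformity assessment']
```

The point of the sketch is that the check produces a dated, reviewable record: even when every answer is "no" and the list of required assessments is empty, the record demonstrates that further audits were considered and found unnecessary.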
Image source: Steve Johnson via Unsplash
Dr. Sieuwert van Otterloo is a court-certified IT expert with interests in agile, security, software research and IT contracts.

