
The Artificial Intelligence Impact Assessment

Joost Krapels | Artificial Intelligence

Artificial Intelligence, or AI for short, is no longer future tech. Systems that can perform certain tasks better than humans inherently bring risks with them. Important decisions might be made by machines instead of human beings, we might be tracked in ways that were previously impossible, and intelligent systems could replace certain human jobs. An AI impact assessment is the way to account for AI risks in advance. ICT Institute has a team of independent experts ready to carry out these impact assessments.

Why an AI Impact Assessment

Many organizations have started to conduct research into the use of AI, since deploying AI has countless advantages. Some examples of these benefits are: decisions can be made faster, advice can be precisely customized, services can be better tailored to the customer, and fraud can be detected more quickly. However, if the application of AI goes wrong, the consequences can be severe. Below we have made a small selection of examples of the impact a badly functioning AI system can have.

The ECP, the Dutch platform for the information society, has therefore developed an assessment with a number of experts (including Dr. Stefan Leijnen from ICT Institute) to ensure that AI is implemented responsibly (a report of the launch event can be found on Frankwatching). This AI Impact Assessment, or AIIA for short, was launched in November. It is a short test that can be carried out by a number of independent experts at the start of a project. The outcome is concrete advice on how the AI system can be implemented responsibly.

The 8 steps to success

The eight steps drafted by the ECP are as follows:

  1. Determine the necessity of doing an AI Impact Assessment
  2. Describe the way AI will be applied
  3. Describe the benefits of this application of AI
  4. Are the goal and the means of reaching it ethically and legally responsible?
  5. Is the applied AI reliable, safe, and transparent?
  6. Weighing and judgement
  7. Capturing the process and accountability
  8. Periodic evaluation

Is an AIIA mandatory?

The AIIA is similar to the data protection impact assessment (DPIA). The DPIA is laid down in the GDPR and is therefore obligatory for certain projects (projects involving personal data, using new technologies, and leading to increased privacy risks). The AIIA is optional and can be performed in advance of or simultaneously with a DPIA. The advantage of a separate AIIA is that you can look more deeply into AI-specific risks. Think about:

  1. The use of black box algorithms (algorithms whose results cannot be explained)
  2. Overfitting, bias, and other problems of self-learning systems (see the sketch after this list)
  3. Ethical considerations that systems must make
  4. Systems that continue learning after they have been introduced
  5. The use of confidential data and algorithms
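To make risks 1 and 2 above more concrete, below is a minimal, illustrative Python sketch of two checks a reviewer could run during an AIIA: the gap between training and test accuracy as a signal of overfitting, and the difference in positive-prediction rates between two groups as a rough signal of bias. The dataset, model, and group flag are all hypothetical placeholders; the AIIA manual itself does not prescribe any code.

# Illustrative only: two quick checks an AIIA reviewer might run.
# The dataset, model, and sensitive-attribute flag are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical dataset; treat the sign of feature 0 as a stand-in "protected group" flag.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)

X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

# A deliberately unconstrained decision tree, which tends to overfit.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Check 1: overfitting - a large train/test accuracy gap is a warning sign.
gap = (accuracy_score(y_train, model.predict(X_train))
       - accuracy_score(y_test, model.predict(X_test)))
print(f"train/test accuracy gap: {gap:.2f}")

# Check 2: bias - compare positive-prediction rates between the two groups
# (demographic parity difference; 0 would mean equal rates).
pred = model.predict(X_test)
parity_diff = abs(pred[s_test == 1].mean() - pred[s_test == 0].mean())
print(f"demographic parity difference: {parity_diff:.2f}")

In a real assessment, numbers like these are only a starting point for discussion with the development team; acceptable thresholds depend on the context and the impact of the system.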

The AIIA looks beyond the technology itself: it also focuses on proper planning, communication, human input, evaluation, security, and privacy, without losing sight of the overall goal of “the responsible introduction of an AI system”.

How to get started

The manual for doing an AIIA is freely available via ECP. A team of experienced auditors and AI experts can use this manual to carry out an AIIA in a few days to a few weeks. The leader of the team starts by drawing up a research plan (interviews with those involved, and possibly data collection and measurements). If you have a team available yourself, you can do this internally. If you lack expertise, we are happy to help: we can provide additional expertise (for example, an AI expert) or perform the AIIA for you.

The AIIA is, in principle, done for the organization that implements the AI system, meaning that the report only goes to the client. They can then choose what to do: improve the system based on advice, change the approach, and possibly also share the report with users to ensure transparency.

Available expertise

The following people connected to ICT Institute are available to carry out, or assist with, AI impact assessments:

  • Dr. Sieuwert van Otterloo – Sieuwert is an AI expert with a PhD in Informatics, a recognized independent IT expert (NVBI and LRGD), and has extensive experience with reviews and consulting.
  • Dr. Stefan Leijnen – Stefan has a PhD in Machine Learning, conducts research on computers and creativity, and also has a lot of experience with reviews and consulting. He has participated in the development of the ECP AI Impact Assessment.
  • Dr. Joost Schalken-Pinkster – Joost Schalken-Pinkster is an AI expert with a PhD, an ISACA-certified auditor, and has extensive experience with auditing, reviewing, and consulting.
  • Joost Krapels MSc. – Joost Krapels is a privacy specialist and consultant at ICT Institute, with a background in AI.
  • Mr. ing Nico M. Keijser CDPO – Nico is a recognized independent IT expert (NVBI and LRGD) and also a privacy expert. He is often asked to carry out independent research.
  • Drs. Chris Barbiers RE RA – Chris is a registered IT auditor and has a great deal of experience in testing IT systems.

Source images: The official ECP Artificial Intelligence Impact Assessment 

 

Author: Joost Krapels
Joost Krapels completed his BSc in Artificial Intelligence and MSc in Information Sciences at the VU Amsterdam. Within ICT Institute, Joost provides IT advice to clients, advises them on security and privacy, and further develops our internal tools and templates.