The Artificial Intelligence Impact Assessment
Joost Krapels
Artificial Intelligence, or AI for short, is no longer future tech. Systems that can perform certain tasks better than humans inherently bring risks with them. Important decisions might be made by machines instead of human beings, we might be tracked in ways previously impossible, and intelligent systems could replace certain human jobs. An AI impact assessment is the way to account for AI risks in advance. ICT Institute has a team of independent experts ready to carry out these impact assessments.
Why an AI Impact Assessment
Many organizations have started to conduct research into the use of AI, since the deployment of AI has countless advantages. Some examples of these benefits are: decisions can be made faster, advice can be precisely customized, services can be better tailored to the customer, and fraud can be detected more quickly. However, if the application of AI goes wrong, the consequences can be severe. Below we have made a small selection of examples of the impact a malfunctioning AI system can have.
- A 2017 study into gender bias found that facial recognition software was racist and sexist: dark-skinned men and women were not recognized well because the software had been trained and tested mainly on white men.
- Amazon’s recruiting software turned out to discriminate against women: women were invited much less often than men with the same qualifications.
- The chatbot Tay from Microsoft had to be taken offline in 2016 because it turned out to be extremely racist. The chatbot learned through interaction with users, and apparently many users found it interesting to teach the robot all kinds of racist ideas.
- More than 500 people in the United States lost their homes because a Wells Fargo IT system wrongly denied them a mortgage modification.
The ECP, the Dutch platform for the information society, has therefore developed an assessment together with a number of experts (including Dr. Stefan Leijnen of ICT Institute) to ensure that AI is implemented responsibly (a report of the launch event can be found on Frankwatching). This AI Impact Assessment, or AIIA for short, was launched in November. It is a short test that can be carried out by a number of independent experts at the start of a project. The outcome is concrete advice on how the AI system can be implemented responsibly.
The 8 steps to success
The eight steps drafted by the ECP are as follows:
- Determine the necessity of doing an AI Impact Assessment
- Describe the way AI will be applied
- Describe the benefits of this application of AI
- Are the goal and means to reach it ethically and legally responsible?
- Is the applied AI reliable, safe, and transparent?
- Weighing and judgement
- Capturing the process and accountability
- Periodic evaluation
Is an AIIA mandatory?
The AIIA is similar to the data protection impact assessment (DPIA). The DPIA is laid down in the GDPR and therefore mandatory for certain projects (projects involving personal data, using new technologies, and leading to increased privacy risks). The AIIA is optional and can be performed in advance of, or simultaneously with, a DPIA. The advantage of a separate AIIA is that you can look deeper for AI-specific risks. Think about:
- The use of black box algorithms (algorithms whose results cannot be explained)
- Overfitting, bias, and other problems of self-learning systems
- Ethical considerations that systems must make
- Systems that continue learning after they have been introduced
- The use of confidential data and algorithms
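To give an impression of what such an AI-specific check can look like in practice, below is a minimal sketch in Python of one common bias test: comparing selection rates between groups using the "four-fifths" (disparate impact) rule of thumb. The function names and data are purely illustrative, not part of the AIIA manual.

```python
# Minimal sketch: checking a system's decisions for group-level bias
# using the "four-fifths" (disparate impact) rule of thumb.
# All names and data below are illustrative.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 'invite to interview')."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common warning signal for bias."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Example: 1 = candidate invited, 0 = candidate rejected
men = [1, 1, 1, 0, 1, 1, 0, 1]      # 6 of 8 invited -> rate 0.75
women = [1, 0, 0, 1, 0, 0, 1, 0]    # 3 of 8 invited -> rate 0.375

ratio = disparate_impact_ratio(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Warning: possible discriminatory outcome, investigate further.")
```

A real AIIA would of course go well beyond a single metric, but even a simple check like this makes the abstract risk of bias concrete and measurable.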
The AIIA looks beyond the technology itself: it also focuses on the right planning, communication, human input, evaluation, security, and privacy, without losing sight of the overall goal of "the responsible introduction of an AI system".
How to get started
The manual for doing an AIIA is freely available via ECP. A team of experienced auditors and AI experts can use this manual to carry out an AIIA in a few days to a few weeks. The leader of the team starts by drawing up a research plan (interviews with those involved, and possibly data collection and measurements). If you have a team available yourself, you can do this internally. If you lack expertise, we are happy to help: we can provide additional expertise (for example, an AI expert) or perform the AIIA for you.
The AIIA is, in principle, done for the organization that implements the AI system, meaning that the report only goes to the client. They can then choose what to do: improve the system based on advice, change the approach, and possibly also share the report with users to ensure transparency.
The following people connected to ICT Institute are available for the implementation of, and assistance with, AI impact assessments:
- Dr. Sieuwert van Otterloo – Sieuwert is an AI expert with a PhD in Informatics, recognized independent IT expert (NVBI and LRGD), and has extensive experience with reviews and consulting.
- Dr. Stefan Leijnen – Stefan has a PhD in Machine Learning, conducts research on computers and creativity, and also has a lot of experience with reviews and consulting. He has participated in the development of the ECP AI Impact Assessment.
- Dr. Joost Schalken-Pinkster – Joost Schalken-Pinkster is a PhD AI expert, ISACA certified auditor and has extensive experience with auditing, reviewing, and consulting.
- Joost Krapels MSc. – Joost Krapels is a privacy specialist and consultant at ICT Institute, with a background in AI.
- Mr. ing Nico M. Keijser CDPO – Nico is a recognized independent IT expert (NVBI and LRGD) and also a privacy expert. He is often asked for independent research.
- Drs. Chris Barbiers RE RA – Chris is a registered IT auditor and has a great deal of experience in testing IT systems.
Source images: The official ECP Artificial Intelligence Impact Assessment
Joost Krapels completed his BSc in Artificial Intelligence and MSc in Information Sciences at VU Amsterdam. Within ICT Institute, Joost provides IT advice to clients, advises clients on Security and Privacy, and further develops our internal tools and templates.