Algorithms are increasingly used for risk-based operations and automated decision-making. However, this approach carries a great risk, especially with machine-learning systems: it is often no longer clear how a decision is reached. That this can go horribly wrong was shown by the Dutch childcare benefits affair, in which minority groups were systematically discriminated against by the Dutch Tax and Customs Administration, in part through the use of particular algorithms.
As part of a study commissioned by the Netherlands Ministry of the Interior and Kingdom Relations, a team of researchers from Tilburg University, Eindhoven University of Technology, the Free University of Brussels (VUB), and the Netherlands Institute for Human Rights has compiled a handbook that explains, step by step, how organizations that want to use Artificial Intelligence can avoid deploying biased algorithms. The research team examined the technical, legal, and organizational criteria that need to be taken into account. The handbook can be used in both the public and private sectors.
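To make the notion of a "biased algorithm" concrete, a minimal sketch of one common statistical check follows. This is an illustration, not a method from the handbook: it tests whether an automated decision system approves members of different groups at similar rates (demographic parity), using the hypothetical "four-fifths" threshold of 0.8 as an assumed cutoff.

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """True if the lowest group's approval rate is at least
    `threshold` times the highest group's rate (assumed cutoff)."""
    rates = selection_rates(decisions)
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= threshold

# Hypothetical data: group B is approved far less often than group A.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 40 + [("B", False)] * 60)
print(selection_rates(sample))     # {'A': 0.8, 'B': 0.4}
print(passes_four_fifths(sample))  # False (0.4 / 0.8 = 0.5 < 0.8)
```

A check like this captures only one narrow, statistical aspect of fairness; the handbook's point is precisely that legal and organizational criteria must be weighed alongside such technical tests.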