From Risk Regulation to Risk Governance:
Examining the Dutch Public Policy Approach to Addressing Discrimination in Algorithmic Profiling
DOI: https://doi.org/10.71265/8vrrn290

Keywords: discrimination, algorithms, AI, profiling, risk governance, systemic risk, risk regulation

Abstract
Human rights law, the GDPR, the LED, and the AI Act establish obligations for addressing discrimination in algorithmic profiling, but leave room regarding how these obligations are operationalized. Against this backdrop, the Dutch Ministry of the Interior and Kingdom Relations, Directorate of Digital Society (DDS), is developing the Algoritmekader, a framework that provides public organizations with further guidance on the responsible and non-discriminatory use of algorithmic systems.
This paper examines how DDS approaches the development of the Algoritmekader to address discrimination in algorithmic profiling, and assesses the suitability of this approach for managing this systemic risk. Using empirical legal methods, the approach taken by DDS is analyzed through the lens of three systemic risk governance principles: communication and inclusion, integration, and reflection. The findings highlight the importance of integrating citizen perspectives in systemic risk governance, combining bottom-up and top-down coordination, strategically engaging stakeholders through targeted consultations, and embedding formal reflection to enhance institutional learning.
References
Algemene Rekenkamer, Focus op AI bij de rijksoverheid (Algemene Rekenkamer, 2024)
CVRM, Toetsingskader risicoprofilering – Normen tegen discriminatie op grond van ras en nationaliteit (College voor de Rechten van de Mens, 2025).
Aline S. Franzke, Iris Muis, Mirko T. Schäfer, ‘Data Ethics Decision Aid (DEDA): a dialogical framework for ethical inquiry of AI and data projects in the Netherlands’ (2021) 23(3) Ethics and Information Technology 551-567.
Amnesty International, Etnisch profileren is overheidsbreed probleem (Amnesty International 2024) 53-57.
Amnesty International, We sense trouble: Automated discrimination and mass surveillance in predictive policing in the Netherlands (Amnesty International, 2020) < https://www.amnesty.org/en/documents/eur35/2971/2020/en/ >
Amnesty International, Xenophobic Machines: Discrimination through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal (Amnesty International, 2021).
Andreas Klinke & Ortwin Renn, ‘Adaptive and integrative governance on risk and uncertainty’ (2012) 15(3) Journal of Risk Research 275-276.
European Convention on Human Rights (ECHR), opened for signature 4 November 1950, ETS 5 (entered into force 3 September 1953)
Charter of Fundamental Rights of the European Union [2012] OJ C326/391
Behnam Taebi, Jan Kwakkel and Celine Kermisch, ‘Governing climate risks in the face of normative uncertainties’ (2020) 11 Wiley Interdisciplinary Reviews: Climate Change e666.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘Discriminerende effecten en ander ongewenst onderscheid bij het gebruik van algoritmes’ (2025) Algoritmekader < https://minbzk.github.io/Algoritmekader/onderwerpen/bias-en-non-discriminatie/ >.
Carola Houtekamer & Merijn Rengers, ‘LET OP, zegt de computer van Buitenlandse Zaken bij tienduizenden visumaanvragen. Is dat discriminatie?’ (1 May 2024) NRC < https://www.nrc.nl/nieuws/2024/05/01/let-op-zegt-de-computer-van-buitenlandse-zaken-bij-tienduizenden-visumaanvragen-is-dat-discriminatie-a4197697 >.
Christoph Kern and others, ‘When Small Decisions Have Big Impact: Fairness Implications of Algorithmic Profiling Schemes’ (2024) 1(4) Journal on Responsible Computing.
Daniel Vale, Ali El-Sharif, & Muhammed Ali, ‘Explainable artificial intelligence (XAI) post-hoc explainability methods: risks and limitations in non-discrimination law’ (2022) 2(1) AI and Ethics 815-826.
David Davidson, ‘Dubieus algoritme van de politie ‘voorspelt’ wie in de toekomst geweld zal plegen’ (23 August 2023) Follow The Money < https://www.ftm.nl/artikelen/nederlandse-politie-gebruikt-minority-report-algoritme?utm_medium=social&utm_campaign=sharebuttonnietleden&utm_source=linkbutton >.
Dennis Vetter and others, ‘Lessons Learned from Assessing Trustworthy AI in Practice’ (2023) 2(3) Digital Society.
Anne Meuwese, Jurriaan Parie, & Ariën Voogt, ‘Hoe ‘algoprudentie’ kan bijdragen aan een verantwoorde inzet van machine learning-algoritmes’ (2024) 2024/556 Nederlands Juristenblad.
Dienst Uitvoering Onderwijs, Intern onderzoek controle uitwonendenbeurs (DUO, 2024).
Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) OJL119/89.
Dutch Data Protection Authority, DUO Gebruik van geautomatiseerde risicoclassificering op basis van een risicoprofiel bij Controleproces Uitwonende Beurs (CUB) (Dutch Data Protection Authority, 2024)
Eileen Guo, Gabriel Geiger and Justin-Casimir Braun, ‘Inside Amsterdam’s high-stakes experiment to create fair welfare AI’ MIT Technology Review (Online, 11 June 2025) < https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/ > accessed 20 September 2025.
Eva Constantaras and others, ‘Inside the Suspicion Machine’ (6 March 2023) WIRED < https://www.wired.com/story/welfare-state-algorithms/>
Frederik Zuiderveen Borgesius, ‘Strengthening legal protection against discrimination by algorithms and artificial intelligence’ (2020) 24(10) The International Journal of Human Rights 1578-1581.
Gabriela Marques Di Giulio, Ione Maria Mendes, Felipe Dos Reis Campos and Joao Nunes, ‘Risk governance in the response to global health emergencies: understanding the governance of chaos in Brazil’s handling of the Covid-19 pandemic’ (2023) 38 Health Policy and Planning 593.
International Risk Governance Council, ‘What do we mean by “Risk Governance”?’ (IRGC, 2019) https://irgc.org/risk-governance/what-is-risk-governance/ accessed 20 September 2025.
Janneke Gerards & Frederik Zuiderveen Borgesius, ‘Protected grounds and the system of non-discrimination law in the context of algorithmic decision-making and artificial intelligence’ (2022) 20(1) Colorado Technology Law Journal 45-47.
Jedrzej Niklas, ‘Poland: Government to scrap controversial unemployment scoring system’ (16 April 2019) AlgorithmWatch < https://algorithmwatch.org/en/poland-government-to-scrap-controversial-unemployment-scoring-system/ >.
Jeroen van Raalte, ‘Amsterdam wilde met AI de bijstand eerlijker en efficiënter maken. Het liep anders’ Trouw (Online, 6 June 2025) https://www.trouw.nl/verdieping/amsterdam-wilde-met-ai-de-bijstand-eerlijker-en-efficienter-maken-het-liep-anders~b2890374/ accessed 20 September 2025.
Jessica L. Roberts, ‘Protecting Privacy to Prevent Discrimination’ (2015) 56(6) William & Mary Law Review 2121-2127; see for example Ligue des droits humains v Conseil des ministres [2022] C-817/19 & District Court The Hague, C-09-550982-HA ZA 18-388.
Jon Ungoed-Thomas & Yusra Abdulahi, ‘Warnings AI tools used by government on UK public are “racist and biased”’ (25 August 2024) The Guardian <https://www.theguardian.com/technology/article/2024/aug/25/register-aims-to-quash-fears-over-racist-and-biased-ai-tools-used-on-uk-public>.
Julia Angwin and others, ‘Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.’ (23 May 2016) ProPublica <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>
Karen Yeung & Sofia Ranchordás, An Introduction to Law and Regulation Text and Materials 2nd Edition (Cambridge University Press 2024) 41-77.
Lucas Haitsma, ‘Regulating algorithmic discrimination through adjudication: the Court of Justice of the European Union on discrimination in algorithmic profiling based on PNR data’ (2023) 5 Frontiers.
Lucas Haitsma & Albertjan Tollenaar, ‘Regie op de toepassing van algoritmes in het sociaal domein’ in Solke Munneke, Hanna Tolsma, & Heinrich Winter, Regie, regie, regie: over maatschappelijke problemen en de terugkeer van de sturende overheid (Boom Juridisch 2024) 99-115.
Lucas Haitsma, ‘The Murky Waters of Algorithmic Profiling: Examining discrimination in the digitalized enforcement of social security policy’ (2023) 44(2) Recht der Werkelijkheid.
Lucien Hanssen, Jeroen Devilee, Marijke Hermans and others, ‘The use of risk governance principles in practice: Lessons from a Dutch public institute for risk research and assessment’ (2019) 9 European Journal of Risk Regulation 632.
Lucy Holmes McHugh, Maria Carmen Lemos and Tiffany Hope Morrison, ‘Risk? Crisis? Emergency? Implications of the new climate emergency framing for governance and policy’ (2021) 12 Wiley Interdisciplinary Reviews: Climate Change e736.
Maddalena Favaretto, Eva De Clercq, Bernice Simone Elger, ‘Big Data and discrimination: perils, promises and solutions. A systematic review’ (2019) 6(1) Journal of Big Data 1-27.
Marc Hijink, ‘IND maakte zich schuldig aan etnisch profileren’ (6 May 2022) NRC <https://www.nrc.nl/nieuws/2022/05/06/ind-verzweeg-een-dikke-error-met-kennismigranten-a4123661>.
Marc Schuilenburg & Abhijit Das, ‘Vuile data leiden tot willekeur bij politie’ (5 October 2020) Sociale Vraagstukken <https://www.socialevraagstukken.nl/vuile-data-leiden-tot-willekeur-bij-politie/>.
Marco Marabelli, Sue Newell, & Valerie Handunge, ‘The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges’ (2021) 30(3) The Journal of Strategic Information Systems 101683.
Maria O’Sullivan, ‘Artificial intelligence and the right to an effective remedy’ in Michal Balcerzak & Julia Kapelańska-Pręgowska (eds), Artificial Intelligence and International Human Rights Law (Edward Elgar Publishing 2024) 196-213.
Marina Micheli, Marisa Ponti, Max Craglia and others, ‘Emerging models of data governance in the age of datafication’ (2020) 7 Big Data & Society 2053951720948087.
Marjolein Boonstra and others, ‘Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment’ (2024) arXiv:2404.14366.
Marjolein van Asselt & Ortwin Renn, ‘Risk Governance’ (2011) 14(4) Journal of Risk Research 431-449; IRGC, IRGC Guidelines for the Governance of Systemic Risks (International Risk Governance Council, 2018).
Merijn Rengers, Carola Houtekamer & Nalinee Maleeyakul, ‘“Pas op met deze visumaanvraag”, waarschuwt het algoritme dat discriminatie in de hand werkt. Het ministerie negeert kritiek’ (23 April 2023) NRC <https://www.nrc.nl/nieuws/2023/04/23/beslisambtenarenblijven-profileren-met-risicoscores-a4162837>.
Michele Loi, Andrea Ferrario, & Eleonora Viganò, ‘Transparency as design publicity: explaining and justifying inscrutable algorithms’ (2021) 23(3) Ethics and Information Technology 253-264.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘About - Team’ (2025) AI Validatie Team < https://minbzk.github.io/ai-validation/about/team/ >.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘ADR-0011 Researcher in Residence’ (2024) AI Validatie Team < https://minbzk.github.io/ai-validation/adrs/0011-researcher-in-residence/ >.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘AI Validation Team’ (2024) AI Validation Team < https://minbzk.github.io/ai-validation/ >.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘Algoritmekader’ (2025) aienalgoritmes.pleio <https://aienalgoritmes.pleio.nl/groups/view/bf169271-70df-47b3-ae59-b46f6b1b32dc/algoritmekader>.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘MinBZK/Algoritmekader Repository’ (2025) Github < https://github.com/MinBZK/Algoritmekader >.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘Werkgroep Fundamentele Rechten’ (2025) aienalgoritmes.pleio <https://aienalgoritmes.pleio.nl/groups/view/314509b2-70e7-4ca1-b4e3-cb2d1c26d4ac/werkgroep-fundamentele-rechten>.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘Verdiepingssessie bias, fairness en non-discriminatie - 18 sept. 2024’ (4 October 2024) aienalgoritmes.pleio < https://aienalgoritmes.pleio.nl/wiki/view/3034f239-12a9-4dd8-907b-587e2a223533/verdiepingssessie-bias-fairness-en-non-discriminatie-18-sept-2024 >.
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘Terugblik verdiepingssessie bias, fairness en non-discriminatie’ (September 2024) < https://algoritmeregister.email-provider.eu/web/1pjwwoyxrs/difhirmnit >.
Ministry of the Interior and Kingdom Relations of the Netherlands, Implementatiekader ‘Verantwoorde inzet van algoritmen’ (Ministry of the Interior and Kingdom Relations of the Netherlands, 2023).
Ministry of the Interior and Kingdom Relations of the Netherlands, ‘Maak een lijst van de meest kwetsbare groepen en bescherm hen extra’ (2024) Algoritmekader < https://minbzk.github.io/Algoritmekader/voldoen-aan-wetten-en-regels/maatregelen/2-owp-08-kwetsbare-groepen/?h=kwetsbare+gr >.
Ministry of the Interior and Kingdom Relations of Netherlands, ‘Algoritmekader’ (December 2024) Overheid.nl < https://minbzk.github.io/Algoritmekader/ >.
Ministry of the Interior and Kingdom Relations of Netherlands, ‘Hillie Beentjes directeur Digitale Samenleving tevens plaatsvervangend directeur-generaal DOO bij BZK’ (27 May 2025) Algemene Bestuursdienst < https://www.algemenebestuursdienst.nl/actueel/nieuws/2024/05/27/hillie-beentjes-directeur-digitale-samenleving-tevens-plaatsvervangend-directeur-generaal-doo-bij-bzk >.
Mirthe Danloff, ‘Analysing and organizing human communications for AI fairness assessment’ (2024) AI & Society 1-21.
Nederlands Normalisatie-instituut (NEN), ‘Normcommissie Artificial Intelligence en Big Data’ (NEN) < https://www.nen.nl/normcommissie-artificial-intelligence-en-big-data > accessed 20 September 2025.
Nederlands Normalisatie-instituut (NEN), ‘Start ontwikkeling NTA “Beheersmaatregelen ten behoeve van de verantwoorde inzet van risicoprofileringsalgoritmen”’ (NEN, 14 February 2025) < https://www.nen.nl/nieuws/actueel/start-ontwikkeling-nta--beheersmaatregelen-ten-behoeve-van-de-verantwoorde-inzet-van-risicoprofileringsalgoritmen-/ > accessed 20 September 2025.
NJCM c.s. v the State of the Netherlands [2020] District Court The Hague, C-09-550982-HA ZA 18-388, ECLI:NL:RBDHA:2020:1878.
Niklas Eder, ‘Privacy, Non-Discrimination and Equal Treatment: Developing a Fundamental Rights Response to Behavioural Profiling’ in Algorithmic Governance and Governance of Algorithms: Legal and Ethical Challenges (Springer International Publishing 2021) 32-38.
Observations AI Validation Team 08-02-2024 (8 February 2024).
Observations AI Validation Team 13-06-2024 (13 June 2024).
Ortwin Renn and Andreas Klinke, ‘A framework of adaptive risk governance for urban planning’ (2013) 5 Sustainability 2036.
Ortwin Renn, ‘Stakeholder and Public Involvement in Risk Governance’ (2015) 6(8) International Journal of Disaster Risk Science 8.
Ortwin Renn & Andreas Klinke, ‘Risk’ in Christopher Ansell & Jacob Torfing (eds), Handbook on Theories of Governance (Edward Elgar Publishing 2016) 253-254.
Ortwin Renn, Andreas Klinke, & Marjolein van Asselt, ‘Coping with complexity, uncertainty and ambiguity in risk governance: a synthesis’ (2011) 40(2) Ambio 231-246; Karen Yeung & Sofia Ranchordás (n 11) 66.
Rashida Richardson, Jason Schultz, & Kate Crawford, ‘Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice’ (2019) 94 NYU Law Review.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) OJL119/1.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 11 July 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L202/1.
Rijksoverheid, ‘Over Open Overheid – Directie Digitale Samenleving’ (2025) Open-overheid.nl < https://www.open-overheid.nl/over-open-overheid >.
Sahar Barmomanesh & Victor Miranda-Soberanis, ‘Potential Biased Outcomes on Child Welfare and Racial Minorities in New Zealand using Predictive Models: An Initial Review on Mitigation Approaches’ (2023) arXiv.
Tilburg University, Non-Discrimination by Design (Tilburg University 2021).
Toby Murray, Marc Cheong, Jeannie Paterson, ‘The flawed algorithm at the heart of Robodebt’ (10 July 2023) Pursuit < https://pursuit.unimelb.edu.au/articles/the-flawed-algorithm-at-the-heart-of-robodebt >.
Tweede Kamer der Staten-Generaal, ‘Rondetafelgesprek over risicoprofilering in het handhavingsbeleid’ (23 May 2024) < https://www.tweedekamer.nl/debat_en_vergadering/commissievergaderingen/details?id=2024A03248 >.
Tweede Kamer der Staten-Generaal, Informatie- en communicatietechnologie (ICT) Brief van de Staatssecretaris van Binnenlandse Zaken en Koninkrijksrelaties (7 July 2023) 26 643, nr. 1056.
Victoria Ahlqvist, Andreas Norrman and Marianne Jahre, ‘Supply chain risk governance: Towards a conceptual multi-level framework’ (2020) 13 Operations and Supply Chain Management 382.
Will Douglas Heaven, ‘Predictive policing algorithms are racist. They need to be dismantled.’ (17 July 2020) MIT Technology Review <https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/>
License
Copyright (c) 2025 Lucas M. Haitsma

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
