Developing fair models in an imperfect world: How to deal with bias in AI

23 December 2021

Artificial intelligence (AI) is increasingly used in data-driven decision making, from rule-based models to machine learning (ML) models. Decisions made by ML models are thought to be better, faster, and more consistent than human decisions. However, as AI becomes an integral part of our lives, concern over potentially biased and unfair models is growing. Insurance is one of many industries facing this problem. This white paper discusses how to detect bias and build a fair machine learning model.
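A common starting point for detecting bias, sketched below, is to compare a model's positive-decision rates across demographic groups (demographic parity). This is an illustrative example, not taken from the white paper; all data and function names are hypothetical.

```python
def positive_rate(decisions):
    """Share of positive (e.g. 'accept') decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in positive-decision rates between two groups.
    A gap near 0 suggests parity; a large gap flags potential bias."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Toy example: model decisions (1 = accept) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 accepted
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 accepted

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

In practice, a fairness audit would compute such metrics on held-out data and compare several criteria (demographic parity, equalized odds, calibration), since these can conflict with one another.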


About the Author(s)

Daniël van Dam

Amsterdam Insurance and Financial Risk | Tel: +31 6 8682 2397

Raymond van Es

Amsterdam Insurance and Financial Risk | Tel: +31 6 1133 4000

Jan Thiemen Postema

Amsterdam Insurance and Financial Risk | Tel: +31 6 8685 5107