Fairness and Bias in Online Selection

Jose Correa
Andres Cristi
Paul Duetting
Ashkan Norouzi Fard
Proceedings of the 2021 International Conference on Machine Learning (ICML'21), pp. 2112-2121

Abstract

There is growing awareness and concern about fairness in machine learning and algorithm design. This is particularly true in online selection problems, where decisions are often biased, for example, when assessing credit risks or hiring staff. We address the issues of fairness and bias in online selection by introducing multi-color versions of the classic secretary and prophet problems. We develop optimal fair algorithms for these new problems and provide tight bounds on their competitiveness. We validate the efficacy and fairness of these algorithms, alongside natural benchmarks, on real-world data.
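For readers unfamiliar with the classic (single-color) secretary problem referenced above, the following is a minimal background sketch of the standard 1/e stopping rule in Python. It is illustrative only and is not the paper's fair multi-color algorithm, whose details are not given in this abstract; the function name and example data are placeholders.

```python
import math
import random

def classic_secretary(values):
    """Standard 1/e stopping rule for the classic secretary problem:
    observe the first n/e candidates without selecting, then accept
    the first candidate who beats everything seen so far."""
    n = len(values)
    cutoff = int(n / math.e)
    # Best value among the observation-only phase (may be empty for tiny n).
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v  # select this candidate and stop
    return values[-1]  # otherwise we are forced to take the last candidate

# Candidates arrive in uniformly random order, as the classic model assumes.
candidates = list(range(100))
random.shuffle(candidates)
print(classic_secretary(candidates))
```

This rule accepts the overall best candidate with probability approaching 1/e; the multi-color variants studied in the paper additionally constrain how selections are distributed across groups (colors).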