Fairness and Bias in Online Selection
Abstract
There is growing awareness and concern about fairness in machine learning and algorithm design. This is particularly true in online selection problems, where decisions are often biased, for example, when assessing credit risks or hiring staff. We address the issues of fairness and bias in online selection by introducing multi-color versions of the classic secretary and prophet problems. We develop optimal fair algorithms for these new problems and provide tight bounds on their competitive ratios. We validate the efficacy and fairness of these algorithms and of natural benchmarks on real-world data.
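For background, the classic (single-color) secretary problem admits the well-known 1/e-threshold rule: observe the first n/e candidates without selecting, then accept the first candidate who beats all seen so far. A minimal simulation sketch (the function name and simulation setup are illustrative, not from the paper):

```python
import math
import random

def secretary_choice(values):
    """1/e-rule for the classic secretary problem: skip the first n/e
    candidates, then pick the first one better than all seen so far
    (falling back to the last candidate if none qualifies)."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

# The rule selects the overall best candidate with probability
# approaching 1/e (about 0.368) as n grows.
random.seed(0)
n, trials = 100, 20000
hits = 0
for _ in range(trials):
    vals = random.sample(range(10 * n), n)  # distinct candidate values
    if secretary_choice(vals) == max(vals):
        hits += 1
rate = hits / trials
print(f"success rate: {rate:.3f}")
```

The multi-color variants studied in the paper constrain such selection rules so that candidates from different groups (colors) are treated fairly, which the classic rule above does not attempt.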