Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information

Pranjal Awasthi
Alex Beutel
Matthaeus Kleindessner
Jamie Morgenstern
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency

Abstract

Training and evaluating fair classifiers is a challenging problem. This is partly because most fairness metrics of interest depend on both the sensitive-attribute information and the label information of the data points. In many scenarios it is not possible to collect large datasets with such information. A commonly used alternative is to separately train an attribute classifier on data with sensitive-attribute information, and then use it later in the ML pipeline to evaluate the bias of a given classifier. While such decoupling helps alleviate the problem of demographic scarcity, it raises several natural questions: how should the attribute classifier be trained, and how should one use a given attribute classifier for accurate bias estimation? In this work we study these questions from both theoretical and empirical perspectives.
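To make the decoupled pipeline concrete, the following is a minimal sketch, not the paper's method: it trains an attribute classifier on a small dataset that carries sensitive-attribute labels, then uses its predictions as proxy attributes to estimate the demographic parity gap of a separately trained task classifier. All data, model choices, and variable names here are illustrative assumptions.

```python
# A minimal sketch (illustrative only) of the decoupled pipeline described
# in the abstract: an attribute classifier is trained on a small dataset
# that has sensitive-attribute labels, then used to estimate the demographic
# parity gap of a task classifier on data where the attribute is unavailable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small dataset WITH sensitive attributes: features X_attr, attribute a.
X_attr = rng.normal(size=(500, 5))
a = (X_attr[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Larger dataset WITHOUT sensitive attributes, on which we audit the model.
X_eval = rng.normal(size=(5000, 5))

# Step 1: train the attribute classifier on the attribute-labeled data.
attr_clf = LogisticRegression().fit(X_attr, a)

# The task classifier whose bias we want to evaluate (a stand-in model here).
task_clf = LogisticRegression().fit(X_attr, (X_attr[:, 1] > 0).astype(int))

# Step 2: use predicted attributes as a proxy for the true ones when
# computing a group fairness metric such as the demographic parity gap.
a_hat = attr_clf.predict(X_eval)
y_hat = task_clf.predict(X_eval)

rate_0 = y_hat[a_hat == 0].mean()  # positive-prediction rate, proxy group 0
rate_1 = y_hat[a_hat == 1].mean()  # positive-prediction rate, proxy group 1
print(f"Estimated demographic parity gap: {abs(rate_0 - rate_1):.3f}")
```

Note that substituting predicted attributes for true ones introduces estimation error whenever the attribute classifier is imperfect; how to train and use the attribute classifier so that this estimate remains accurate is precisely the question the paper studies.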
