Google Research

Fairness properties do not transfer: do we have viable solutions for real-world applications?

Women in Machine Learning workshop at NeurIPS 2021 (2022)

Abstract

Fairness and robustness are often considered orthogonal dimensions when evaluating machine learning models. However, recent work has exposed interactions between fairness and robustness, showing that fairness properties do not transfer across environments. In healthcare settings, this can mean, for example, that a model that performs fairly (according to a selected metric) in hospital A exhibits unfairness when deployed in hospital B. While a nascent field has emerged to provide 'fair and robust' models, it typically remains focused on 'simple' settings, limiting its impact for real-world applications. In this work, we explore the settings in which the current literature is applicable by referring to a causal framing. We then show that the settings encountered in real-world applications are complex and invalidate the requirements of such methods, using examples in dermatology and in Electronic Health Records (EHR). Our work hence exposes technical, practical, and engineering gaps that still prevent 'fair and robust' machine learning modelling in real-world applications. Finally, we discuss non-technical solutions to this issue, and highlight how feature engineering can be a path forward.
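To make the central claim concrete, a minimal toy sketch of non-transfer is shown below: a fixed model satisfies an equal-opportunity criterion (similar true-positive rates across groups) in one environment, but the gap widens under a group-dependent covariate shift in a second environment. All distributions, the threshold model, and the `simulate` setup are hypothetical illustrations, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def tpr_gap(scores, labels, groups, thresh=0.5):
    """Equal-opportunity gap: |TPR(group 0) - TPR(group 1)|."""
    tprs = []
    for g in (0, 1):
        positives = (groups == g) & (labels == 1)
        tprs.append((scores[positives] >= thresh).mean())
    return abs(tprs[0] - tprs[1])

def simulate(shift, n=20_000):
    # Hypothetical environment: a risk feature x whose group-conditional
    # distribution differs between environments by `shift`.
    groups = rng.integers(0, 2, n)
    x = rng.normal(loc=1.0 + shift * groups, scale=1.0, size=n)
    labels = (x + rng.normal(0.0, 0.5, n) > 1.0).astype(int)
    # The same fixed "deployed" model is evaluated in every environment.
    scores = 1.0 / (1.0 + np.exp(-(x - 1.0)))
    return tpr_gap(scores, labels, groups)

gap_a = simulate(shift=0.0)  # hospital A: groups identically distributed
gap_b = simulate(shift=1.5)  # hospital B: covariate shift differs by group
```

In environment A the two groups draw from the same feature distribution, so the gap is near zero; in environment B the shifted group's positives receive systematically higher scores, and the same model violates the criterion it satisfied in A.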
