Understanding challenges to the validity of disaggregated evaluations for algorithmic fairness

Chirag Nagpal
David Madras
Vishwali Mhasawade
Olawale Salaudeen
Shannon Sequeira
Santiago Arciniegas
Lillian Sung
Nnamdi Ezeanochie
Heather Cole-Lewis
Sanmi Koyejo
Proceedings of the 2025 Conference on Neural Information Processing Systems (NeurIPS 2025)

Abstract

Disaggregated evaluation across subgroups is critical for assessing the fairness of machine learning models, but its uncritical use can mislead practitioners. We show that equal performance across subgroups is an unreliable measure of fairness when data are representative of the relevant populations but reflective of real-world disparities. Furthermore, when data are not representative due to selection bias, both disaggregated evaluation and alternative approaches based on conditional independence testing may be invalid without explicit assumptions about the bias mechanism. We use causal graphical models to characterize fairness properties and metric stability across subgroups under different data-generating processes. Our framework suggests complementing disaggregated evaluations with explicit causal assumptions and analyses that control for confounding and distribution shift, including conditional independence testing and weighted performance estimation. These findings have broad implications for how practitioners design and interpret model assessments, given the ubiquity of disaggregated evaluation.
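To make the contrast concrete, below is a minimal Python sketch of the two estimators the abstract refers to: a standard disaggregated (per-subgroup) metric, and an inverse-probability-weighted variant that adjusts for a posited selection mechanism. This is an illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, the synthetic selection model is invented for the demo, and the weighted estimate is only as valid as the assumed selection probabilities, which echoes the paper's point about making the bias mechanism explicit.

```python
import numpy as np

def disaggregated_accuracy(y_true, y_pred, groups):
    """Standard disaggregated evaluation: accuracy computed per subgroup."""
    return {
        g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }

def weighted_accuracy(y_true, y_pred, groups, selection_probs):
    """Inverse-probability-weighted per-subgroup accuracy.

    selection_probs[i] is the *assumed* probability that example i was
    sampled into the evaluation set; weighting by its inverse targets the
    full population rather than the biased sample. Only valid if the
    posited selection model is correct.
    """
    weights = 1.0 / np.asarray(selection_probs, dtype=float)
    return {
        g: float(np.average(
            (y_pred[groups == g] == y_true[groups == g]).astype(float),
            weights=weights[groups == g],
        ))
        for g in np.unique(groups)
    }

# Synthetic demo (all quantities hypothetical): the model is more accurate
# on positives, and positives are oversampled into the evaluation set, so
# naive disaggregated accuracy on the selected sample is inflated.
rng = np.random.default_rng(0)
n = 20_000
groups = rng.choice(np.array(["A", "B"]), size=n)
y_true = rng.integers(0, 2, size=n)
p_correct = np.where(y_true == 1, 0.95, 0.60)          # label-dependent accuracy
y_pred = np.where(rng.random(n) < p_correct, y_true, 1 - y_true)
selection_probs = np.where(y_true == 1, 0.9, 0.4)      # assumed selection model
keep = rng.random(n) < selection_probs                 # biased evaluation sample

print(disaggregated_accuracy(y_true, y_pred, groups))  # ground truth, full population
print(disaggregated_accuracy(y_true[keep], y_pred[keep], groups[keep]))  # naive, biased
print(weighted_accuracy(y_true[keep], y_pred[keep],
                        groups[keep], selection_probs[keep]))  # IPW-corrected
```

On the biased sample, the naive per-subgroup accuracies run several points above the population values, while the weighted estimates recover them; the correction hinges entirely on knowing (or correctly modeling) the selection probabilities, which is the kind of explicit causal assumption the paper argues practitioners must state.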