Diversity and Inclusion Metrics for Subset Selection

Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Alex Hanna, Timnit Gebru, Jamie Morgenstern
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), ACM (2020)

Abstract

The concept of fairness has recently been applied in machine learning settings to describe a wide range of constraints and objectives. When applied to ranking, recommendation, or subset selection problems for an individual, it becomes less clear that fairness goals are more applicable than goals that prioritize diverse outputs and instances that represent the individual's goals well. In this work, we discuss how the concept of fairness relates to the concepts of diversity and inclusion, and introduce metrics that quantify the diversity and inclusion of an instance or set. These metrics can be used in tandem, including with additional fairness constraints, or may be used separately, and we detail how the different metrics interact. Results from human-subject experiments demonstrate that the proposed criteria for diversity and inclusion are consistent with social notions of these two concepts, and that human judgments on the diversity and inclusion of example instances correlate with the defined metrics.
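The abstract does not reproduce the paper's formal definitions, so the sketch below is only an illustration of the general idea of scoring a selected subset along two separate axes: a diversity score that rewards even coverage of an attribute's possible values, and an inclusion score that rewards representation of the individual's own attribute value. The function names, the normalized-entropy formulation, and the example attribute values are assumptions for illustration, not the metrics defined in the paper.

```python
import math
from collections import Counter
from typing import Hashable, Sequence


def diversity_score(attributes: Sequence[Hashable], domain: Sequence[Hashable]) -> float:
    """Illustrative diversity score: normalized entropy of the attribute values
    observed in a selected subset, over a known attribute domain.
    Returns 1.0 when all values in `domain` are equally represented and
    approaches 0.0 as the subset concentrates on a single value."""
    counts = Counter(attributes)
    n = len(attributes)
    if n == 0 or len(domain) < 2:
        return 0.0
    entropy = 0.0
    for value in domain:
        p = counts.get(value, 0) / n
        if p > 0:
            entropy -= p * math.log(p)
    return entropy / math.log(len(domain))  # normalize to [0, 1]


def inclusion_score(attributes: Sequence[Hashable], user_attribute: Hashable) -> float:
    """Illustrative inclusion score: the fraction of selected items whose
    attribute value matches the individual's own attribute value."""
    if not attributes:
        return 0.0
    return sum(a == user_attribute for a in attributes) / len(attributes)


if __name__ == "__main__":
    # Hypothetical subset-selection output, each item tagged with one attribute value.
    selected = ["woman", "woman", "man", "man", "nonbinary"]
    domain = ["woman", "man", "nonbinary"]
    print(diversity_score(selected, domain))       # high: all attribute values appear
    print(inclusion_score(selected, "nonbinary"))  # low: the individual's group is a small share
```

The example highlights why the two notions can diverge: a subset can score well on diversity while still scoring poorly on inclusion for a particular individual, which is why the metrics are defined separately and can be combined with additional fairness constraints.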