Ensuring Fairness in Machine Learning to Advance Health Equity
Abstract
A central promise of machine learning (ML) is to use historical data to project the future trajectories of patients. Will they have a good or bad outcome? What diagnoses will they have? What treatments should they be given? But in many cases, we do not want the future to look like the past, especially when the past contains patterns of human or structural biases against vulnerable populations.
This is not an abstract problem. In a model used to predict future crime based on historical records, black defendants who did not re-offend were classified as high-risk at a substantially higher rate than white defendants who did not re-offend.22 Similar biases have been observed in predictive policing,23 social services,24 and technology companies.25 Given known healthcare disparities, this problem will nearly inevitably surface in medical domains where ML could be applied (Table 1): a "protected group" could be systematically excluded from the benefits of an ML system or even harmed. We argue that ML systems should be fair, which is defined in medical ethics as the "moral obligation to act on the basis of fair adjudication between competing claims."26
Recent advances in computer science have offered mathematical and procedural suggestions to make an ML system fairer: ensuring it is equally accurate for patients in a protected class, allocating resources to protected classes in proportion to need, improving patient outcomes for all, and building and testing the system in ways that protect the privacy, expectations, and trust of patients.
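To make one of these criteria concrete, the disparity described above (black defendants who did not re-offend flagged as high-risk more often than white defendants who did not re-offend) is a difference in group-wise false-positive rates. A minimal, purely illustrative sketch of auditing a model for this, with hypothetical data and a hypothetical `group_rates` helper (neither drawn from any real system or cited study), might look like:

```python
def group_rates(y_true, y_pred, groups):
    """Per-group accuracy and false-positive rate (FPR).

    y_true:  list of true outcomes (1 = event occurred, 0 = did not)
    y_pred:  list of binary model predictions (1 = flagged high-risk)
    groups:  list of group labels, one per patient/defendant
    """
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        yt = [y_true[i] for i in idx]
        yp = [y_pred[i] for i in idx]
        # Overall accuracy within the group.
        acc = sum(t == p for t, p in zip(yt, yp)) / len(idx)
        # FPR: of those who truly had no event, what fraction was flagged?
        negatives = [p for t, p in zip(yt, yp) if t == 0]
        fpr = sum(negatives) / len(negatives) if negatives else 0.0
        stats[g] = {"accuracy": acc, "fpr": fpr}
    return stats

# Illustrative toy data only: two groups with equal accuracy but unequal FPR.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
stats = group_rates(y_true, y_pred, groups)
```

Note that in this toy example both groups have the same accuracy (0.75), yet group A's false-positive rate is 0.5 while group B's is 0.0, showing why a single aggregate metric can mask the kind of disparity described above.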
To guide clinicians, administrators, policymakers, and regulators in making principled decisions to improve ML fairness, we illustrate the mechanisms by which a model could be unfair. We then review both technical and non-technical solutions to improve fairness. Finally, we make policy recommendations to stakeholders specifying roles, responsibilities, and oversight.