Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities
Abstract
Persons with disabilities face many barriers to participation in society,
and the rapid advancement of technology continually creates new ones.
Achieving fair opportunity and justice for people with disabilities
demands paying attention not just to accessibility, but also to the attitudes
towards, and representations of, disability that are implicit in machine learning (ML) models
that are pervasive in how one engages with society.
However, such models often inadvertently learn to perpetuate
undesirable social biases from the data on which they are trained.
This can result, for example, in models for classifying text producing very different
predictions for {\em I stand by a person with mental illness}
and {\em I stand by a tall person}.
We present evidence of such social biases in existing ML models, along
with an analysis of biases in a dataset used for model development.