- Alex Fiannaca
- Cindy Bennett
- Shaun Kane
- Meredith Ringel Morris
Abstract
As modern, pre-trained ML models have proliferated in recent years, many researchers and practitioners have made significant efforts to prevent AI systems from causing harm. This focus on safety is critical, but a singular focus on safety can come at the expense of other important stakeholder values and the interactions between those values in the AI systems we build. In this position paper, we propose that the AI community should incorporate ideas from the Value-Sensitive Design framework from the Human-Computer Interaction community to ensure the needs and values of all stakeholders are reflected in the systems we build. We share observations and reflections from our experiences working on AI-supported accessibility technologies and with members of various disability communities to illustrate the tensions that sometimes arise between safety and other values.