Identifying Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction

Shalaleh Rismani
Kathryn Henne
AJung Moon
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Emilio Garcia
Gurleen Virk
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, 723–741


Understanding the broader landscape of potential harms from algorithmic systems enables practitioners to better anticipate the consequences of the systems they build. It also supports the prospect of incorporating controls to help minimize harms that emerge from the interplay of technologies and social and cultural dynamics. A growing body of scholarship has identified a wide range of harms across different algorithmic and machine learning (ML) technologies. However, computing researchers and practitioners lack a high-level, synthesized overview of harms from algorithmic systems arising at the micro-, meso-, and macro-levels of society. We present an applied taxonomy of sociotechnical harms to support more systematic surfacing of potential harms in algorithmic systems. Based on a scoping review of prior research on harms from AI systems (n=172), we identified five major themes related to sociotechnical harms: allocative, quality-of-service, representational, social system, and interpersonal harms. We describe these categories of harm and present case studies that illustrate the usefulness of the taxonomy. We conclude with a discussion of challenges and under-explored areas of harm in the literature, which present opportunities for future research.