Our research in Responsible AI aims to shape the field of artificial intelligence and machine learning in ways that foreground the human experiences and impacts of these technologies. We examine and shape emerging AI models, systems, and datasets used in research, development, and practice. This research uncovers foundational insights and devises methodologies that define the state of the art across the field. We advance equity, fairness, transparency, robustness, interpretability, and inclusivity as key elements of AI systems. For example, recent work evaluates how well the fairness properties of medical AI algorithms generalize across settings and examines the cultural dimensions of what constitutes fair AI systems globally. We strive to ensure that the promise of AI is realized for the benefit of all individuals and communities, prioritizing its social and contextual implications.
Recent publications
Building Stereotype Repositories with Complementary Approaches
C3NLP workshop at EACL 2023 (2023)
“Discover AI in Daily Life”: An AI Literacy Lesson for Middle School Students
SIGCSE 2023: Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2 (2023), p. 1327
Annotator Diversity in Data Practices
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA (to appear)