Our research in Responsible AI aims to shape the field of artificial intelligence and machine learning in ways that foreground the human experiences and impacts of these technologies. We examine and shape emerging AI models, systems, and datasets used in research, development, and practice. This research uncovers foundational insights and devises methodologies that define the state of the art across the field. We advance equity, fairness, transparency, robustness, interpretability, and inclusivity as key elements of AI systems. For example, recent research evaluates how well the fairness properties of medical AI algorithms generalize, and examines what fairness means for AI systems across global cultural contexts. We strive to ensure that the promise of AI is realized beneficially for all individuals and communities, prioritizing social and contextual implications.
Recent publications
Safety and Fairness for Content Moderation in Generative Models
CVPR Workshop on Ethical Considerations in Creative Applications of Computer Vision (2023)
Terms-we-Serve-with: Five dimensions for anticipating and repairing algorithmic harm
Big Data & Society, vol. 10(2) (2023), pp. 14
AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications
The 2023 Conference on Empirical Methods in Natural Language Processing (to appear)
Contextualizing the Limits of Model & Evaluation Dataset Curation on Semantic Similarity Classification Tasks
EMNLP Generation, Evaluation & Metrics (GEM) Workshop (2023)
AI’s Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, pp. 506–517