The mission of the Responsible AI and Human Centered Technology (RAI-HCT) team is to conduct research and develop methodologies, technologies, and best practices to ensure AI systems are built responsibly.
About the team
We want to ensure that AI and its development have a positive impact on everyone, including marginalized communities. To meet this goal, we research and develop technology with a human-centered perspective, building tools and processes that put our AI Principles into practice at scale. We work alongside diverse collaborators, including our partner teams and external contributors, as we strive to make AI more transparent, fair, and useful to diverse communities. We also seek to constantly improve the reliability and safety of the entire AI ecosystem.
Our intention is to create a future where technology benefits all users and society.
What we do
- Foundational Research: Build foundational insights and methodologies that define the state of the art of Responsible AI development across the field
- Impact at Google: Collaborate with and contribute to teams across Alphabet to ensure that Google’s products are built following our AI Principles
- Democratize AI: Embed a diversity of cultural contexts and voices in AI development, and empower a broader audience with consistent access, control, and explainability
- Tools and Guidance: Develop tools and technical guidance that can be used by Google, our customers, and the community to test and improve AI products for RAI objectives
Highlighted projects
An interactive visualization tool for understanding datasets with the goal of improving data quality and mitigating fairness and bias issues. See the case study for the COCO Captions dataset.
The More Inclusive People Annotations for Fairness (MIAP) collection is a new set of annotations on the Open Images dataset (9 million-plus images), created with a labeling protocol designed to help researchers incorporate fairness analysis into their work.
The Monk Skin Tone Scale provides a broader spectrum of skin tones that can be used to evaluate datasets and ML models for better representation.
Interactive visualizations exploring important concepts in machine learning: "big ideas in machine learning, simply explained."
An open-source platform for visualizing and understanding ML models.
Machine learning research that takes a hard look at whether methods model the causal mechanisms we think they do, and when we can expect them to be fair.
Qualitative evaluation documentation for health datasets designed in consultation with a diverse set of health experts.
Societal Context Understanding Tools and Solutions (SCOUTS) is a Google Research initiative with the mission to provide people and ML systems with the scalable, trustworthy societal context knowledge required to realize responsible and robust AI.
A case study examining gender-related correlations in pre-trained language models, used to formulate a series of best practices for working with such models.
A library with techniques for addressing bias and fairness issues in ML models.
A portfolio of projects demonstrating AI’s societal benefit to enable real-world impact.