Responsible AI

The mission of the Responsible AI and Human Centered Technology (RAI-HCT) team is to conduct research and develop methodologies, technologies, and best practices to ensure AI systems are built responsibly.

About the team

We want to ensure that AI, and its development, have a positive impact on everyone. To meet this goal, we research and develop technology with a human-centered perspective, building tools and processes that put our AI Principles into practice at scale. We work alongside numerous collaborators, including partner teams and external contributors, as we strive to make AI more transparent, fair, and useful to all communities. We also seek to continually improve the reliability and safety of our entire AI ecosystem.

Our intention is to create a future where technology benefits all users and society.

What we do

  • Foundational Research: Build foundational insights and methodologies that define the state of the art of Responsible AI development across the field
  • Impact at Google: Collaborate with and contribute to teams across Alphabet to ensure that Google’s products are built following our AI Principles
  • Democratize AI: Embed varied cultural contexts and voices in AI development, and empower a broader audience with consistent access, control, and explainability
  • Tools and Guidance: Develop tools and technical guidance that can be used by Google, our customers, and the community to test and improve AI products for RAI objectives
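As an illustration of the kind of check such testing tools automate, here is a minimal, hypothetical sketch of one common RAI metric: the demographic parity gap, i.e. the difference in positive-prediction rates between two groups. The function name and data are invented for this example and do not correspond to any specific Google tool.

```python
# Hypothetical sketch of a simple fairness check, assuming binary (0/1)
# predictions and a single categorical group attribute with two groups.

def demographic_parity_gap(predictions, groups):
    """Return the absolute difference in positive-prediction rates
    between the two groups present in `groups`."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Example: group "x" receives positive predictions 2/3 of the time,
# group "y" only 1/3 of the time, so the gap is 1/3.
preds = [1, 1, 0, 1, 0, 0]
groups = ["x", "x", "x", "y", "y", "y"]
print(round(demographic_parity_gap(preds, groups), 2))  # 0.33
```

Production tooling computes many such sliced metrics with confidence intervals; this sketch only shows the core idea of comparing outcome rates across groups.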

Featured publications

Towards a Critical Race Methodology in Algorithmic Fairness
Alex Hanna
ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*) (2020)
Fairness in Recommendation Ranking through Pairwise Comparisons
Alex Beutel
Tulsee Doshi
Hai Qian
Li Wei
Yi Wu
Lukasz Heldt
Zhe Zhao
Lichan Hong
Cristos Goodrow
KDD (2019)
Healthsheet: development of a transparency artifact for health datasets
Diana Mincu
Lauren Wilcox
Jessica Schrouff
Razvan Adrian Amironesei
Nyalleng Moorosi
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) (2022)
Underspecification Presents Challenges for Credibility in Modern Machine Learning
Dan Moldovan
Ben Adlam
Babak Alipanahi
Alex Beutel
Christina Chen
Jon Deaton
Matthew D. Hoffman
Shaobo Hou
Neil Houlsby
Ghassen Jerfel
Yian Ma
Diana Mincu
Akinori Mitani
Andrea Montanari
Christopher Nielsen
Thomas Osborne
Rajiv Raman
Kim Ramasamy
Jessica Schrouff
Martin Gamunu Seneviratne
Shannon Sequeira
Harini Suresh
Victor Veitch
Steve Yadlowsky
Xiaohua Zhai
D. Sculley
Journal of Machine Learning Research (2020)
A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research
Ned Cooper
Tiffanie Horne
Gillian Hayes
Jess Scon Holbrook
Lauren Wilcox
ACM Conference on Human Factors in Computing Systems (ACM CHI) 2022 (2022)
Re-contextualizing Fairness in NLP: The Case of India
Shaily Bhatt
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (AACL-IJCNLP) (2022)
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
Becky White
Inioluwa Deborah Raji
Margaret Mitchell
Timnit Gebru
ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), Barcelona (2020)
