Patrick Gage Kelley

Patrick Gage Kelley is a lead Trust & Safety researcher at Google focusing on questions of security, privacy, and anti-abuse.

He has led projects on the use and design of standardized, user-friendly privacy displays, passwords, location sharing, mobile apps, encryption, and technology ethics. Patrick’s work on redesigning privacy policies in the style of nutrition labels was included in the 2009 Annual Privacy Papers for Policymakers event on Capitol Hill. Apple and Google revived this work with their App Privacy Labels, and it received a Test of Time award at USEC in 2025. Recently he has focused on research that supports users who are at risk of the most frequent, severe, or impactful online harms, definitional work on digital-safety research, and work on how users understand AI and how they can be given better explanations of it.

Before Google, he was a professor of Computer Science at the University of New Mexico and faculty at the UNM ARTSLab; he received his Ph.D. from Carnegie Mellon University. He has also worked at Wombat Security Technologies, Intel Labs, and the National Security Agency.
Authored Publications
    Supporting the Digital Safety of At-Risk Users: Lessons Learned from 9+ Years of Research and Training
    Tara Matthews
    Lea Kissner
    Andreas Kramm
    Andrew Oplinger
    Andy Schou
    Stephan Somogyi
    Dalila Szostak
    Jill Woelfer
    Lawrence You
    Izzie Zahorian
    ACM Transactions on Computer-Human Interaction, 32(3) (2025), pp. 1-39
    Creating information technologies intended for broad use that allow everyone to participate safely online—which we refer to as inclusive digital safety—requires understanding and addressing the digital-safety needs of a diverse range of users who face elevated risk of technology-facilitated attacks or disproportionate harm from such attacks—i.e., at-risk users. This article draws from more than 9 years of our work at Google to understand and support the digital safety of at-risk users—including survivors of intimate partner abuse, people involved with political campaigns, content creators, youth, and more—in technology intended for broad use. Among our learnings is that designing for inclusive digital safety across widely varied user needs and dynamic contexts is a wicked problem with no “correct” solution. Given this, we describe frameworks and design principles we have developed to help make at-risk research findings practically applicable to technologies intended for broad use and lessons we have learned about communicating them to practitioners.
    Beyond Digital Literacy: Building Youth Digital Resilience Through Existing “Information Sensibility” Practices
    Mia Hassoun
    Ian Beacock
    Todd Carmody
    Beth Goldberg
    Devika Kumar
    Laura Murray
    Rebekah Park
    Behzad Sarmadi
    Social Sciences Journal, 14(4) (2025)
    Youth media consumption and disordered eating practices have historically been subjects of moral panics, often resulting in protective, deficit-based interventions like content removal. We argue for interventions which instead equip youth to evaluate and manage risks in their online environments, building upon their existing “information sensibility” practices. Drawing upon ethnographic research and intervention testing with 77 participants in the US and India, we analyze how youth (aged 13–26), including those with diverse political perspectives and those recovering from disordered eating (DE), engage with online news and health information. Participants generally algorithmically encountered (rather than searched for) information online, and their engagement was shaped more by social motivations—like belonging—than truth seeking. Participants interpreted online information collaboratively, relying on social cues and peer validation within their online communities. They demonstrated preference for personal testimonies and relatable sources, particularly those with similar social identities. We propose resilience-building interventions that build upon these youth online information practices by: (1) leveraging peer networks, promoting critical information engagement through collaborative learning and peer-to-peer support within online communities; (2) developing social media sensibility, equipping youth to critically evaluate information sources in situ; (3) providing pathways offline, connecting youth to desired in-person communities; and (4) encouraging probabilistic thinking.
    Youth are increasingly exposed to a broad range of technology-facilitated abuse that challenges their safety and well-being. Building on previous work that examined youth help-seeking behaviors, coping strategies, threats they encounter, and the social support systems around them, we articulate a framework called PROTECT (Problem recognition, Reaching out, Organizing support, Training, Engaging experts, Continuous support, and Tackling safety measures), which integrates existing models of support, help-seeking, and digital skills to offer a high-level, structured approach for adults who serve as a support system for youth navigating technology-facilitated abuse. The framework unpacks the social and contextual dynamics that influence help-seeking behaviors, providing a foundation for educators, advocates, health professionals, developers, and other adult stakeholders to design and develop trauma-informed, timely interventions that promote resilience.
    Leveraging Virtual Reality to Enhance Diversity and Inclusion Training at Google
    Karla Brown
    Leonie Sanderson
    2024 CHI Conference on Human Factors in Computing Systems, ACM
    Virtual reality (VR) has emerged as a promising educational training method, offering a more engaging and immersive experience than traditional approaches. In this case study, we explore its effectiveness for diversity, equity, and inclusion (DEI) training, with a focus on how VR can help participants better understand and appreciate different perspectives. We describe the design and development of a VR training application that aims to raise awareness about unconscious biases and promote more inclusive behaviors in the workplace. We report initial findings based on the feedback of Google employees who took our training and found that VR appears to be an effective way to enhance DEI training. In particular, participants reported that VR training helped them better recognize biases and how to effectively respond to them. However, our findings also highlight some challenges with VR-based DEI training, which we discuss in terms of future research directions.
    Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people's digital safety. Attacks include unwanted solicitations for sexually explicit images, extorting people under threat of leaking their images, or purposefully leaking images to enact revenge or exert control. In this paper, we explore how people experiencing IBSA seek and receive help from social media. Specifically, we identify over 100,000 Reddit posts that engage relationship and advice communities for help related to IBSA. We draw on a stratified sample of these posts to qualitatively examine how various types of IBSA unfold, the support needs of victim-survivors experiencing IBSA, and how communities help victim-survivors navigate their abuse through technical, emotional, and relationship advice. In the process, we highlight how gender, relationship dynamics, and the threat landscape influence the design space of sociotechnical solutions. We also highlight gaps that remain in connecting victim-survivors with important care, regardless of whom they turn to for help.
    Understanding Digital-Safety Experiences of Youth in the U.S.
    Diana Freed
    Natalie N. Bazarova
    Eunice Han
    Dan Cosley
    The ACM CHI Conference on Human Factors in Computing Systems, ACM (2023)
    The seamless integration of technology into the lives of youth has raised concerns about their digital safety. While prior work has explored youth experiences with physical, sexual, and emotional threats—such as bullying and trafficking—a comprehensive and in-depth understanding of the myriad threats that youth experience is needed. By synthesizing the perspectives of 36 youth and 65 adult participants from the U.S., we provide an overview of today’s complex digital-safety landscape. We describe attacks youth experienced, how these moved across platforms and into the physical world, and the resulting harms. We also describe protective practices the youth and the adults who support them took to prevent, mitigate, and recover from attacks, and key barriers to doing this effectively. Our findings provide a broad perspective to help improve digital safety for youth and set directions for future work.
    “Discover AI in Daily Life”: An AI Literacy Lesson for Middle School Students
    Allison Woodruff
    Annica Schjott Voneche
    Kelly Thunstrom
    Rebecca L. Hardy
    Derek R. Aoki
    SIGCSE 2023: Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2 (2023), p. 1327
    We describe “Discover AI in Daily Life”, a lesson in Google’s Applied Digital Skills curriculum. The lesson introduces elements of AI literacy and is freely available online at g.co/DiscoverAI. It is designed for middle school students while also supporting high school and adult learners.
    Online hate and harassment poses a threat to the digital safety of people globally. In light of this risk, there is a need to equip as many people as possible with advice to stay safer online. We interviewed 24 experts to understand which threats and advice internet users should prioritize to prevent or mitigate harm. As part of this, we asked experts to evaluate 45 pieces of existing hate-and-harassment-specific digital-safety advice to understand why they felt advice was viable or not. We find that experts frequently had competing perspectives on which threats and advice they would prioritize. We synthesize sources of disagreement, while also highlighting the primary threats and advice where experts concurred. Our results inform immediate efforts to protect users from online hate and harassment, as well as more expansive socio-technical efforts to establish enduring safety.
    Explainability helps people understand and interact with the systems that make decisions and inferences about them. This should go beyond providing explanations at the moment of a decision; rather, explainability is best served when information about AI is incorporated into the entire user journey and AI literacy is built continuously throughout a person’s life. We share resources that encourage AI practitioners to think more broadly about what explanations can look like across their products and ways to provide people with a solid foundation that helps them better understand AI systems and decisions.
    Online content creators, who create and share their content on platforms such as Instagram, TikTok, Twitch, and YouTube, are uniquely at risk of increased digital-safety threats due to their public prominence, the diverse social norms of wide-ranging audiences, and their access to audience members as a valuable resource. We interviewed 23 creators to understand their digital-safety experiences. This includes the security, privacy, and abuse threats they have experienced across multiple platforms and how the threats have changed over time. We also examined the protective practices they have employed to stay safer, including tensions in how they adopt the practices. We found that creators have diverse threat models that take into consideration their emotional, physical, relational, and financial safety. Most adopted protections (including distancing from technology, moderating their communities, and seeking external or social support) only after experiencing a serious safety incident. Lessons from their experiences help us better prepare and protect creators and ensure a diversity of voices are present online.