Sunny Consolvo
Sunny is a researcher at Google where she spends most of her time focusing on digital-safety topics.
Authored Publications
Practicing Information Sensibility: How Gen Z Engages with Online Information
Amelia Hassoun
Ian Beacock
Beth Goldberg
Patrick Gage Kelley
Daniel M. Russell
ACM CHI Conference on Human Factors in Computing Systems (2023)
Assessing the trustworthiness of information online is complicated. Literacy-based paradigms are widely used to help, but are also widely critiqued. We conducted a study with 35 Gen Zers from across the U.S. to understand how they assess information online. We found that they tended to encounter—rather than search for—information, and that those encounters were shaped more by social motivations than by truth-seeking queries. For them, information processing is fundamentally a social practice. Gen Zers interpreted online information together, as aspirational members of social groups. Our participants sought information sensibility: a socially-informed awareness of the value of information encountered online. We outline key challenges they faced and practices they used to make sense of information. Our findings suggest that, like their information sensibility practices, solutions and strategies to address misinformation should be embedded in social contexts online.
Understanding Digital-Safety Experiences of Youth in the U.S.
Diana Freed
Natalie N. Bazarova
Eunice Han
Patrick Gage Kelley
Dan Cosley
The ACM CHI Conference on Human Factors in Computing Systems, ACM (2023)
The seamless integration of technology into the lives of youth has raised concerns about their digital safety. While prior work has explored youth experiences with physical, sexual, and emotional threats—such as bullying and trafficking—a comprehensive and in-depth understanding of the myriad threats that youth experience is needed. By synthesizing the perspectives of 36 youth and 65 adult participants from the U.S., we provide an overview of today’s complex digital-safety landscape. We describe attacks youth experienced, how these moved across platforms and into the physical world, and the resulting harms. We also describe protective practices the youth and the adults who support them took to prevent, mitigate, and recover from attacks, and key barriers to doing this effectively. Our findings provide a broad perspective to help improve digital safety for youth and set directions for future work.
"Millions of people are watching you": Understanding the digital safety needs of creators
Patrawat Samermit
Patrick Gage Kelley
Tara Matthews
Vanessia Wu
(2023)
Online content creators---who create and share their content on platforms such as Instagram, TikTok, Twitch, and YouTube---are uniquely at risk of increased digital-safety threats due to their public prominence, the diverse social norms of wide-ranging audiences, and their access to audience members as a valuable resource. We interviewed 23 creators to understand their digital-safety experiences. This includes the security, privacy, and abuse threats they have experienced across multiple platforms and how the threats have changed over time. We also examined the protective practices they have employed to stay safer, including tensions in how they adopt those practices. We found that creators have diverse threat models that take into consideration their emotional, physical, relational, and financial safety. Most adopted protections---including distancing from technology, moderating their communities, and seeking external or social support---only after experiencing a serious safety incident. Lessons from their experiences help us better prepare and protect creators and ensure a diversity of voices are present online.
Online hate and harassment poses a threat to the digital safety of people globally. In light of this risk, there is a need to equip as many people as possible with advice to stay safer online. We interviewed 24 experts to understand what threats and advice internet users should prioritize to prevent or mitigate harm. As part of this, we asked experts to evaluate 45 pieces of existing hate-and-harassment-specific digital-safety advice to understand why they felt advice was viable or not. We find that experts frequently had competing perspectives for which threats and advice they would prioritize. We synthesize sources of disagreement, while also highlighting the primary threats and advice where experts concurred. Our results inform immediate efforts to protect users from online hate and harassment, as well as more expansive socio-technical efforts to establish enduring safety.
“I just wanted to triple check ... they were all vaccinated” — Supporting Risk Negotiation in the Context of COVID-19
Jennifer Brown
Jennifer C. Mankoff
Margaret E. Morris
Paula S. Nurius
Savanna Yee
ACM Transactions on Computer-Human Interaction (2022)
During the COVID-19 pandemic, risk negotiation became an important precursor to in-person contact. For young adults, social planning generally occurs through computer-mediated communication. Given the importance of social connectedness for mental health and academic engagement, we sought to understand how young adults plan in-person meetups over computer-mediated communication in the context of the pandemic. We present a qualitative study that explores young adults’ risk negotiation during the COVID-19 pandemic, a period of conflicting public health guidance. Inspired by cultural probe studies, we invited participants to express their preferred precautions for one week as they planned in-person meetups. We interviewed and surveyed participants about their experiences. Through qualitative analysis, we identify strategies for risk negotiation, social complexities that impede risk negotiation, and emotional consequences of risk negotiation. Our findings have implications for AI-mediated support for risk negotiation and assertive communication more generally. We explore tensions between risks and potential benefits of such systems.
SoK: A Framework for Unifying At-Risk User Research
Noel Warford
Tara Matthews
Kaitlyn Yang
Omer Akgul
Patrick Gage Kelley
Nathan Malkin
Michelle L. Mazurek
(2022)
At-risk users are people who experience risk factors that augment or amplify their chances of being digitally attacked and/or suffering disproportionate harms. In this systematization work, we present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 95 papers. Across the varied populations that we examined (e.g., children, activists, people with disabilities), we identified 10 unifying contextual risk factors—such as marginalization and access to a sensitive resource—that augment or amplify digital-safety risks and their resulting harms. We also identified technical and non-technical practices that at-risk users adopt to attempt to protect themselves from digital-safety risks. We use this framework to discuss barriers that limit at-risk users’ ability or willingness to take protective actions. We believe that researchers and technology creators can use our framework to identify and shape research investments to benefit at-risk users, and to guide technology design to better support at-risk users.
“Why wouldn’t someone think of democracy as a target?”: Security practices & challenges of people involved with U.S. political campaigns
Patrick Gage Kelley
Tara Matthews
Lee Carosi Dunn
Proceedings of the USENIX Security Symposium (2021)
People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them—and democracy—vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on its own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.
Designing Toxic Content Classification for a Diversity of Perspectives
Deepak Kumar
Patrick Gage Kelley
Joshua Mason
Zakir Durumeric
Michael Bailey
(2021)
In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment—such as people who identify as LGBTQ+ or young adults—are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.
SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
Devdatta Akhawe
Michael Bailey
Dan Boneh
Nicola Dell
Zakir Durumeric
Patrick Gage Kelley
Deepak Kumar
Damon McCoy
Sarah Meiklejohn
Thomas Ristenpart
Gianluca Stringhini
(2021)
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks---such as toxic content and surveillance---that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities such as young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.