Sunny Consolvo
Sunny is a researcher at Google, where she spends most of her time focusing on digital-safety topics.
Authored Publications
"Millions of people are watching you": Understanding the digital safety needs of creators
Patrawat Samermit
Patrick Gage Kelley
Tara Matthews
Vanessia Wu
(2023)
Online content creators—who create and share their content on platforms such as Instagram, TikTok, Twitch, and YouTube—are uniquely at risk of increased digital-safety threats due to their public prominence, the diverse social norms of wide-ranging audiences, and their access to audience members as a valuable resource. We interviewed 23 creators to understand their digital-safety experiences. This includes the security, privacy, and abuse threats they have experienced across multiple platforms and how the threats have changed over time. We also examined the protective practices they have employed to stay safer, including tensions in how they adopt the practices. We found that creators have diverse threat models that take into consideration their emotional, physical, relational, and financial safety. Most adopted protections—including distancing from technology, moderating their communities, and seeking external or social support—only after experiencing a serious safety incident. Lessons from their experiences help us better prepare and protect creators and ensure a diversity of voices is present online.
Practicing Information Sensibility: How Gen Z Engages with Online Information
Amelia Hassoun
Ian Beacock
Beth Goldberg
Patrick Gage Kelley
Daniel M. Russell
ACM CHI Conference on Human Factors in Computing Systems (2023)
Assessing the trustworthiness of information online is complicated. Literacy-based paradigms are both widely used to help and widely critiqued. We conducted a study with 35 Gen Zers from across the U.S. to understand how they assess information online. We found that they tended to encounter—rather than search for—information, and that those encounters were shaped more by social motivations than by truth-seeking queries. For them, information processing is fundamentally a social practice. Gen Zers interpreted online information together, as aspirational members of social groups. Our participants sought information sensibility: a socially-informed awareness of the value of information encountered online. We outline key challenges they faced and practices they used to make sense of information. Our findings suggest that like their information sensibility practices, solutions and strategies to address misinformation should be embedded in social contexts online.
Understanding Digital-Safety Experiences of Youth in the U.S.
Diana Freed
Natalie N. Bazarova
Eunice Han
Patrick Gage Kelley
Dan Cosley
The ACM CHI Conference on Human Factors in Computing Systems, ACM (2023)
The seamless integration of technology into the lives of youth has raised concerns about their digital safety. While prior work has explored youth experiences with physical, sexual, and emotional threats—such as bullying and trafficking—a comprehensive and in-depth understanding of the myriad threats that youth experience is needed. By synthesizing the perspectives of 36 youth and 65 adult participants from the U.S., we provide an overview of today’s complex digital-safety landscape. We describe attacks youth experienced, how these moved across platforms and into the physical world, and the resulting harms. We also describe protective practices the youth and the adults who support them took to prevent, mitigate, and recover from attacks, and key barriers to doing this effectively. Our findings provide a broad perspective to help improve digital safety for youth and set directions for future work.
Online hate and harassment poses a threat to the digital safety of people globally. In light of this risk, there is a need to equip as many people as possible with advice to stay safer online. We interviewed 24 experts to understand what threats and advice internet users should prioritize to prevent or mitigate harm. As part of this, we asked experts to evaluate 45 pieces of existing hate-and-harassment-specific digital-safety advice to understand why they felt advice was viable or not. We find that experts frequently had competing perspectives for which threats and advice they would prioritize. We synthesize sources of disagreement, while also highlighting the primary threats and advice where experts concurred. Our results inform immediate efforts to protect users from online hate and harassment, as well as more expansive socio-technical efforts to establish enduring safety.
“I just wanted to triple check ... they were all vaccinated” — Supporting Risk Negotiation in the Context of COVID-19
Jennifer Brown
Jennifer C. Mankoff
Margaret E. Morris
Paula S. Nurius
Savanna Yee
ACM Transactions on Computer-Human Interaction (2022)
During the COVID-19 pandemic, risk negotiation became an important precursor to in-person contact. For young adults, social planning generally occurs through computer-mediated communication. Given the importance of social connectedness for mental health and academic engagement, we sought to understand how young adults plan in-person meetups over computer-mediated communication in the context of the pandemic. We present a qualitative study that explores young adults’ risk negotiation during the COVID-19 pandemic, a period of conflicting public health guidance. Inspired by cultural probe studies, we invited participants to express their preferred precautions for one week as they planned in-person meetups. We interviewed and surveyed participants about their experiences. Through qualitative analysis, we identify strategies for risk negotiation, social complexities that impede risk negotiation, and emotional consequences of risk negotiation. Our findings have implications for AI-mediated support for risk negotiation and assertive communication more generally. We explore tensions between risks and potential benefits of such systems.
SoK: A Framework for Unifying At-Risk User Research
Noel Warford
Tara Matthews
Kaitlyn Yang
Omer Akgul
Patrick Gage Kelley
Nathan Malkin
Michelle L. Mazurek
(2022)
At-risk users are people who experience risk factors that augment or amplify their chances of being digitally attacked and/or suffering disproportionate harms. In this systematization work, we present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 95 papers. Across the varied populations that we examined (e.g., children, activists, people with disabilities), we identified 10 unifying contextual risk factors—such as marginalization and access to a sensitive resource—that augment or amplify digital-safety risks and their resulting harms. We also identified technical and non-technical practices that at-risk users adopt to attempt to protect themselves from digital-safety risks. We use this framework to discuss barriers that limit at-risk users’ ability or willingness to take protective actions. We believe that researchers and technology creators can use our framework to identify and shape research investments to benefit at-risk users, and to guide technology design to better support at-risk users.
College from home during COVID-19: A mixed-methods study of heterogeneous experiences
Margaret E. Morris
Kevin S. Kuehn
Jennifer Brown
Paula S. Nurius
Han Zhang
Yasaman S. Sefidgar
Xuhai Xu
Eve A. Riskin
Anind K. Dey
Jennifer C. Mankoff
Proceedings of the ACM on Human Computer Interaction (PACM HCI), ACM (2021)
This mixed-method study examined the experiences of college students during the COVID-19 pandemic through surveys, experience sampling data collected over two academic quarters (Spring 2019 n1 = 253; Spring 2020 n2 = 147), and semi-structured interviews with 27 undergraduate students. There were no marked changes in mean levels of depressive symptoms, anxiety, stress, or loneliness between 2019 and 2020, or over the course of the Spring 2020 term. Students in both the 2019 and 2020 cohorts who indicated psychosocial vulnerability at the initial assessment showed worse psychosocial functioning throughout the entire Spring term relative to other students. However, rates of distress increased faster in 2020 than in 2019 for these individuals. Across individuals, homogeneity of variance tests and multi-level models revealed significant heterogeneity, suggesting the need to examine not just means but the variations in individuals’ experiences. Thematic analysis of interviews characterizes these varied experiences, describing the contexts for students' challenges and strategies. This analysis highlights the interweaving of psychosocial and academic distress: Challenges such as isolation from peers, lack of interactivity with instructors, and difficulty adjusting to family needs had both an emotional and academic toll. Strategies for adjusting to this new context included initiating remote study and hangout sessions with peers, as well as self-learning. In these and other strategies, students used technologies in different ways and for different purposes than they had previously. Supporting qualitative insight about adaptive responses were quantitative findings that students who used more problem-focused forms of coping reported fewer mental health symptoms over the course of the pandemic, even though they perceived their stress as more severe. These findings underline the need for interventions oriented towards problem-focused coping and suggest opportunities for peer role modeling.
“Why wouldn’t someone think of democracy as a target?”: Security practices & challenges of people involved with U.S. political campaigns
Patrick Gage Kelley
Tara Matthews
Lee Carosi Dunn
Proceedings of the USENIX Security Symposium (2021)
People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them—and democracy—vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on their own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.
Designing Toxic Content Classification for a Diversity of Perspectives
Deepak Kumar
Patrick Gage Kelley
Joshua Mason
Zakir Durumeric
Michael Bailey
(2021)
In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at-risk of harassment—such as people who identify as LGBTQ+ or young adults—are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.