
Rebecca Umbach
Originally trained as a criminologist, I now work on topics related to sexual exploitation and tech-facilitated abuse, including image-based abuse such as non-consensual intimate imagery (NCII) and CSAM. I am currently a staff UXR on the T&S Global Engagements Team. Before joining Google, I was a T32 postdoctoral scholar working with Dr. Nim Tottenham at Columbia University.
Authored Publications
There is a growing trend of legislation, regulation, and court rulings mandating the delisting of content from intermediary platforms. However, few, if any, studies have evaluated user reactions to edge cases involving the delisting of content of public interest. We administered a vignette-based online survey experiment to a representative sample of over 20,000 participants in five countries. We sought to understand user perceptions of delisting content from search engine results and the factors that influence them. While leaving information accessible in search engine results generally leads to warmer feelings towards those search engines, we find that contextual elements also impact this resulting warmth. In addition, we analyze respondents' knowledge and attitudes about the "Right to be Forgotten" (RTBF), perhaps the most well-known legislation on delisting. We find that respondents in countries with active RTBF legislation are more likely to support delisting, know more about RTBF, and support RTBF, and that RTBF knowledge and attitudes affect respondents' answers to our experiment. These results indicate a complex tension around delisting public-interest content from search engines' results. Experts sensitive to local context should perform reviews to ensure that delisting requests are handled in a way that meets users' expectations.
Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse
Tara Matthews
Miranda Wei
Patrick Gage Kelley
Sarah Meiklejohn
(2024)
Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people's digital safety. Attacks include unwanted solicitations for sexually explicit images, extorting people under threat of leaking their images, or purposefully leaking images to enact revenge or exert control. In this paper, we explore how people experiencing IBSA seek and receive help from social media. Specifically, we identify over 100,000 Reddit posts that engage relationship and advice communities for help related to IBSA. We draw on a stratified sample of these posts to qualitatively examine how various types of IBSA unfold, the support needs of victim-survivors experiencing IBSA, and how communities help victim-survivors navigate their abuse through technical, emotional, and relationship advice. In the process, we highlight how gender, relationship dynamics, and the threat landscape influence the design space of sociotechnical solutions. We also highlight gaps that remain in connecting victim-survivors with important care, regardless of whom they turn to for help.
The growing threat of sexual extortion ("sextortion") has garnered significant attention in the news and by law enforcement agencies around the world. Foundational knowledge of prevalence and risk factors, however, is still nascent. The present study surveyed 16,693 respondents, distributed equally across 10 different countries, to assess prevalence of victimization and perpetration of threatening to disseminate intimate images. Weighted by gender, age, region, and population, 14.5% of respondents indicated at least one experience of victimization, while 4.8% of respondents indicated perpetration of the same. Demographic risk factors for perpetration and victimization were also assessed. Consistent with findings from other studies, men (15.7%) were 1.17 times more likely to report being victimized compared to women (13.2%), and 1.43 times more likely to report perpetration. LGBTQ+ respondents were 2.07 times more likely to report victimization compared to non-LGBTQ+ respondents, and 2.51 times more likely to report offending behaviors. Age was significantly associated, with younger participants more likely to report both victimization and perpetration experiences. The most common type of perpetrator, as reported by victims, was a former or current partner. Despite the strong likelihood of under-reporting given the topic area, the study found that experiencing threats to distribute intimate content is a relatively commonplace occurrence, impacting 1 in 7 adults. Implications for potential mitigation are discussed.
Deepfake technology tools have become ubiquitous, democratizing the ability to manipulate images and videos. One popular use of such technology is the creation of sexually explicit content, which is then often posted and shared widely on the internet. This article examines attitudes and behaviors related to non-consensual synthetic intimate imagery (NSII) across over 16,000 respondents in 10 countries. Despite nascent societal awareness of NSII, it was universally deemed harmful, particularly by women. 2.2% of respondents indicated personal victimization, and 1.8% of respondents indicated perpetration behaviors. Men reported both more victimization and more perpetration. Respondents from countries with relevant legislation also reported perpetration and victimization experiences, suggesting legislative action alone is not a sufficient solution to deter perpetration. Technical considerations to reduce harms may include how individuals can better monitor their presence online, as well as enforced platform policies that ban NSII content or allow for its removal.
Seeking in Cycles: How Users Leverage Personal Information Ecosystems to Find Mental Health Information
Ashlee Milton
Fernando Maestre
Stevie Chancellor
Proceedings of the CHI Conference on Human Factors in Computing Systems (2024)
Information is crucial to how people understand their mental health and well-being, and many turn to online sources found through search engines and social media. We present the findings from an interview study (n = 17) of participants who use online platforms to seek information about their mental illnesses. We found that participants leveraged multiple platforms in a cyclical process for finding information from their personal information ecosystems, driven by the adoption of new information and uncertainty surrounding the credibility of information. Concerns about privacy, fueled by perceptions of stigma and platform design, also influenced their information-seeking decisions. Our work proposes theoretical implications for social computing and information retrieval on information seeking in users' personal information ecosystems. We also offer design implications to support users in navigating their personal information ecosystems to find mental health information.
Beyond Binary: Towards Embracing Complexities in Cyberbullying Detection & Intervention - A Position Paper
Kanishk Verma
Kolawole Adebayo
Joachim Wagner
Megan Reynolds
Tijana Milosevic
Brian Davis
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (2024)
Cyberbullying (CB) has become a prevalent issue among children in the digital age. Social science research on CB indicates that these behaviours can manifest as early as primary school age and can have harmful and long-lasting effects, including an increased risk of self-harm. Drawing on insights from psychology, social sciences, and computational linguistics, this position paper highlights the complexity of CB incidents. These incidents are not limited to bullies and victims, but include bystanders with various roles, resulting in numerous sub-categories and variations of online harm. Despite the growing recognition of the complexities inherent in CB, existing computational approaches tend to oversimplify it as a binary classification task. They often rely on training datasets that may not comprehensively capture the full spectrum of CB behaviours. In addition to scrutinising the diversity of CB policies on online platforms and revealing inconsistencies in the definitions and categorisation of CB-related online harms, this article also draws attention to the ethical concerns that arise when CB research involves children in role-playing CB incidents to curate datasets. Through multi-disciplinary collaboration, this paper presents our position on strategies to consider when training or testing CB detection systems. Furthermore, it presents our standpoint on leveraging large language models (LLMs) such as Claude-2 and Llama2-Chat as an alternative means of generating CB-related role-played datasets. By elucidating the current research gaps and presenting our standpoint, we aim to aid researchers, policymakers, and online platforms in making informed decisions regarding the automation of CB incident detection and intervention. By addressing these complexities, our research contributes to a more nuanced and effective approach to combating CB, especially among young people.