Kurt Thomas
Research Areas
Authored Publications
Who is At-Risk? Surveying the Prevalence of Risk Factors and Tech-Facilitated Attacks in the United States
Sharon Heung
Claire Weizenegger
Mo Houtti
Tara Matthews
Ashley Walker
2026
A growing body of qualitative research has identified contextual risk factors that elevate people’s chances of experiencing digital-safety attacks. However, the lack of quantitative data on the population-level distribution of these risk factors prevents policymakers and tech companies from developing targeted, evidence-based interventions to improve digital safety. To address this gap, we surveyed 5,001 adults in the United States to analyze: (1) the frequency of and relationship between digital-safety attacks (e.g., scams, harassment, account hacking), and (2) how these attacks align with 10 contextual risk factors. Nearly half of our respondents identify as resource-constrained, which significantly correlates with a higher likelihood of experiencing four common attacks. We also present qualitative insights that expand our understanding of the factors beyond the existing literature (e.g., “prominence” included high-visibility roles in local communities). This study provides the first large-scale quantitative analysis correlating digital-safety attacks with contextual risk factors and demographics.
"It didn’t feel right but I needed a job so desperately": Understanding People's Emotions & Help Needs During Financial Scams
G. Jake Chanenson
Tara Matthews
Jessica McClearn
Sarah Meiklejohn
Mia Hassoun
2026
Online financial scams represent a long-standing and serious threat for which people seek help. We present a study to understand people’s in situ motivations for engaging with scams and the help needs they express before, during, and after encountering a scam. We identify the main emotions scammers exploited (e.g., fear, hope) and characterize how they did so. We examine factors—such as financial insecurity and legal precarity—which elevate people’s risk of engaging with specific scams and experiencing harm. We indicate when people sought help and describe their help-seeking needs and emotions at different stages of the scam. We discuss how these needs could be met through the design of contextually-specific prevention, diagnostic, mitigation, and recovery interventions.
Supporting the Digital Safety of At-Risk Users: Lessons Learned from 9+ Years of Research and Training
Tara Matthews
Lea Kissner
Andreas Kramm
Andrew Oplinger
Andy Schou
Stephan Somogyi
Dalila Szostak
Jill Woelfer
Lawrence You
Izzie Zahorian
ACM Transactions on Computer-Human Interaction, 32(3) (2025), pp. 1-39
Creating information technologies intended for broad use that allow everyone to participate safely online—which we refer to as inclusive digital safety—requires understanding and addressing the digital-safety needs of a diverse range of users who face elevated risk of technology-facilitated attacks or disproportionate harm from such attacks—i.e., at-risk users. This article draws from more than 9 years of our work at Google to understand and support the digital safety of at-risk users—including survivors of intimate partner abuse, people involved with political campaigns, content creators, youth, and more—in technology intended for broad use. Among our learnings is that designing for inclusive digital safety across widely varied user needs and dynamic contexts is a wicked problem with no “correct” solution. Given this, we describe frameworks and design principles we have developed to help make at-risk research findings practically applicable to technologies intended for broad use and lessons we have learned about communicating them to practitioners.
Evaluating the Robustness of a Production Malware Detection System to Transferable Adversarial Attacks
Milad Nasr
Yanick Fratantonio
Ange Albertini
Loua Farah
Alexandre Petit-Bianco
Andreas Terzis
Nicholas Carlini
ACM Conference on Computer and Communications Security (CCS) (2025)
As deep learning models become widely deployed as components within larger production systems, their individual shortcomings can create system-level vulnerabilities with real-world impact. This paper studies how adversarial attacks targeting an ML component can degrade or bypass an entire production-grade malware detection system, performing a case study analysis of Gmail’s pipeline, where file-type identification relies on an ML model. The malware detection pipeline in use by Gmail contains a machine learning model that routes each potential malware sample to a specialized malware classifier to improve accuracy and performance. This model, called Magika, has been open sourced. By designing adversarial examples that fool Magika, we can cause the production malware service to incorrectly route malware to an unsuitable malware detector, thereby increasing our chance of evading detection. Specifically, by changing just 13 bytes of a malware sample, we can evade Magika in 90% of cases, allowing us to send malware files over Gmail. We then turn our attention to defenses and develop an approach to mitigate the severity of these types of attacks. For our defended production model, a highly resourced adversary requires 50 bytes to achieve just a 20% attack success rate. We implement this defense and, thanks to a collaboration with Google engineers, it has already been deployed in production for the Gmail classifier.
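The routing attack the abstract describes, flipping a file-type model's decision by editing a handful of bytes, can be illustrated with a toy sketch. Everything below is hypothetical: `toy_detector` is a stand-in heuristic, not Magika, and the greedy search is a simplified illustration of byte-budget attacks, not the paper's actual method.

```python
def toy_detector(data: bytes) -> str:
    """Toy stand-in for a file-type model: label a sample 'text'
    if most of its bytes are printable, otherwise 'binary'."""
    printable = sum(1 for b in data if 32 <= b < 127 or b in (9, 10, 13))
    return "text" if printable / max(len(data), 1) > 0.7 else "binary"

def greedy_byte_attack(data: bytes, target: str, budget: int):
    """Overwrite bytes one position at a time until the detector
    reports `target` or the byte budget is exhausted."""
    sample = bytearray(data)
    changed = 0
    for pos in range(len(sample)):
        if toy_detector(bytes(sample)) == target or changed >= budget:
            break
        sample[pos] = 0x41  # 'A', a printable byte pushing toward "text"
        changed += 1
    return bytes(sample), changed

# A 20-byte all-zero "malware" sample is routed as binary; a small
# number of byte edits flips the toy detector's routing decision.
adv, changed = greedy_byte_attack(bytes(20), "text", budget=20)
```

The defense discussed in the abstract corresponds, in this toy setting, to raising the number of bytes an attacker must change before the label flips.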
DroidCCT: Cryptographic Compliance Test via Trillion-Scale Measurement
Rémi Audebert
Pedro Barbosa
Borbala Benko
Alex (Mac) Mihai
László Siroki
Catherine Vlasov
Annual Computer Security Applications Conference (ACSAC) (2025) (to appear)
Help-seeking and Coping Strategies for Technology-facilitated Abuse Experienced by Youth
Diana Freed
Dan Cosley
Ender Ricart
Natalie Bazarova
2025
Technology provides youth (ages 10–17) with near-constant opportunities for learning, communication, and self-expression. It can also expose them to technology-facilitated abuse: harassment, coercion, fraud, and more. The ability of youth to navigate such abuse is crucial for their well-being and development. A recent advisory by the U.S. Surgeon General called for better support of youth, including that youth should “reach out for help.” However, little is known about how youth seek help or otherwise cope with technology-facilitated abuse. Through a qualitative study in the U.S., we examine how youth engage in self-reliance, seek help from others, and how others seek help on a youth’s behalf. We discuss these strategies and outline opportunities for how the HCI community can better support youth who experience technology-facilitated abuse.
Give and Take: An End-To-End Investigation of Giveaway Scams
Eric Mugnier
Enze Liu
Stefan Savage
Geoffrey M. Voelker
David Tao
George Kappos
Sarah Meiklejohn
2024
Scams — fraudulent narratives designed to extract money or items of value from victims — have existed as long as recorded history. However, the Internet’s combination of low communication cost, global reach, and functional anonymity has allowed scam volumes to reach their historic zenith. Designing effective interventions against such activities requires first understanding the context in which they thrive: how scammers advertise to potential victims, the proceeds they can expect in response, and how they ultimately monetize their illicit activities. In this paper, we focus on such questions in the specific context of a giveaway scam, in which scammers offer to give away cryptocurrency to users who send them coins first (often promising to send them back double whatever they sent). In particular, our work aims to understand how such giveaway scams are advertised both on textual social media (Twitter) and via video livestreams (YouTube and Twitch), the extent to which such efforts are effective in attracting victims, and the scope and nature of the payments received in such fraudulent transactions.
The task of content-type detection, which entails determining the data type encoded by byte streams, has a long history within the realm of computing and nowadays it is a key primitive for critical automated pipelines. The first program ever developed to perform this task is "file", which shipped with Bell Labs UNIX over five decades ago. Since then, a number of additional tools have been developed, but, despite their importance, to date it is not clear how well these approaches perform, and whether modern techniques can improve over the state of the art.
This paper sheds light on this overlooked area. We collect a dataset of more than 26M samples, and we perform the first large-scale evaluation of existing content type tools. Then, we introduce Magika, a new content type detection tool based on deep learning. Magika is designed to be fast (5ms inference time), even on a single CPU, thus making it a viable replacement for existing command line tools and suitable for large-scale automated pipelines.
Magika achieves 99%+ average precision and recall, which is a double-digit accuracy improvement (in absolute percentage points) over the state of the art.
As a testament to its real-world utility, we are working with a large email provider and with Visual Studio Code developers on integrating Magika to be their reference content-type detector. To ease reproducibility, we release all our artifacts, including the tool, the model, the training pipeline, the dataset collection codebase, and details about our dataset.
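As background for the content-type detection task this abstract describes, a rule-based sniffer in the spirit of the classic `file` tool can be sketched with a magic-byte prefix table. This is a minimal hypothetical example of the traditional approach Magika improves upon, not Magika's deep-learning model:

```python
# Minimal magic-byte prefix table; real tools such as `file` ship
# thousands of far richer rules, and Magika replaces them with a
# learned model over the byte stream.
MAGIC_PREFIXES = [
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"%PDF-", "pdf"),
    (b"PK\x03\x04", "zip"),
    (b"\x7fELF", "elf"),
]

def detect_content_type(stream: bytes) -> str:
    """Return a content-type label for the leading bytes of a stream."""
    for prefix, label in MAGIC_PREFIXES:
        if stream.startswith(prefix):
            return label
    return "unknown"
```

Prefix rules like these fail on types without fixed magic numbers (e.g., plain text, CSV, source code), which is the regime where a learned classifier pays off.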
Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse
Tara Matthews
Miranda Wei
Sarah Meiklejohn
(2024)
Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people's digital safety. Attacks include unwanted solicitations for sexually explicit images, extorting people under threat of leaking their images, or purposefully leaking images to enact revenge or exert control. In this paper, we explore how people experiencing IBSA seek and receive help from social media. Specifically, we identify over 100,000 Reddit posts that engage relationship and advice communities for help related to IBSA. We draw on a stratified sample of these posts to qualitatively examine how various types of IBSA unfold, the support needs of victim-survivors experiencing IBSA, and how communities help victim-survivors navigate their abuse through technical, emotional, and relationship advice. In the process, we highlight how gender, relationship dynamics, and the threat landscape influence the design space of sociotechnical solutions. We also highlight gaps that remain in connecting victim-survivors with important care, regardless of whom they turn to for help.
Understanding Digital-Safety Experiences of Youth in the U.S.
Diana Freed
Natalie N. Bazarova
Eunice Han
Dan Cosley
The ACM CHI Conference on Human Factors in Computing Systems, ACM (2023)
The seamless integration of technology into the lives of youth has raised concerns about their digital safety. While prior work has explored youth experiences with physical, sexual, and emotional threats—such as bullying and trafficking—a comprehensive and in-depth understanding of the myriad threats that youth experience is needed. By synthesizing the perspectives of 36 youth and 65 adult participants from the U.S., we provide an overview of today’s complex digital-safety landscape. We describe attacks youth experienced, how these moved across platforms and into the physical world, and the resulting harms. We also describe protective practices the youth and the adults who support them took to prevent, mitigate, and recover from attacks, and key barriers to doing this effectively. Our findings provide a broad perspective to help improve digital safety for youth and set directions for future work.