Security, privacy and abuse

Security, privacy, and abuse researchers at Google study every part of keeping users safe online, from the infrastructure that serves their data to the people who rely on it.

About the team

Our team brings together experts from systems, networking, cryptography, machine learning, human-computer interaction, and user experience to advance the state of security, privacy, and abuse research. Our mission is straightforward:

Make the world's information trustworthy and safe. We strive to keep all information on the Internet free of deceptive, fraudulent, unwanted, and malicious content. Users should never be at risk when sharing personal data, accessing content, or conducting business on the Internet.

Defend users, everywhere. Internet users face incredibly diverse threats, from government surveillance to cybercrime and sexual abuse. We approach research, design, and product development with the goal of protecting every user, no matter their needs, from the start.

Advance the state of the art. Making the Internet safer requires support from the public, academia, and industry. We foster initiatives that shape the direction of security, privacy, and abuse research for the next generation.

Build privacy for everyone. It's a responsibility that comes with creating products and services that are free and accessible for all. This is especially important as technology progresses and privacy needs evolve. We look to these principles to guide our products, our processes, and our people in keeping our users' data private, safe, and secure.

Team focus summaries

Malicious and deceptive software

Our crawlers and analysis engines represent the state of the art for detecting malware, unwanted software, deception, and targeted attacks. These threats span the web, binaries, extensions, and mobile applications.

Spam and harmful interactions

We continue to develop one of the most sophisticated machine learning and reputation systems to protect users from scams, hate and harassment, and sexual abuse.

Trustworthy infrastructure

We are pioneering advancements in browser, mobile, and cloud security including sandboxing, auto-updates, fuzzing, program analysis, formal verification, and vulnerability research.
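One of the techniques above, fuzzing, can be illustrated with a minimal sketch: repeatedly mutate a seed input and record which variants crash a target. Everything here is illustrative (the `fragile_parser` target and mutation strategy are toy stand-ins, not our production tooling):

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of the seed input."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        i = rng.randrange(len(buf))
        buf[i] ^= rng.randrange(1, 256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 1000, rng_seed: int = 0):
    """Run `target` on mutated inputs, collecting every crashing input."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

def fragile_parser(data: bytes) -> None:
    # Toy target: crashes on any input containing a zero byte.
    if 0 in data:
        raise ValueError("unexpected NUL byte")

found = fuzz(fragile_parser, b"hello world")
```

Production fuzzers add coverage feedback, corpus management, and sanitizers, but the mutate-run-observe loop is the same.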

Fraud and automation prevention

Our team of warrior scientists actively protects users and businesses from payment fraud, invalid traffic, denial of service, and bot automation. We fight all things fake—installs, clicks, likes, views, and accounts.


Informed decision making

We are redefining how users make informed security and privacy decisions. Our work spans warnings, notifications, and advice, and we ground our designs in the privacy attitudes, personas, and segments of Internet users.

Protecting user data

We are developing the next generation of mechanisms for encryption (e.g., end-to-end and post-quantum), authentication, and privacy-preserving analytics. This includes advanced modeling to detect account hijacking.
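A classic building block for privacy-preserving analytics is randomized response: each user reports a noisy bit, so no individual answer is trustworthy, yet the population rate is still recoverable. A minimal sketch (the 50/50 coin scheme and the toy population are illustrative, not a description of any Google system):

```python
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    """Report the true bit half the time; otherwise report a fair coin flip.
    Each individual response carries plausible deniability."""
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5

def estimate_rate(reports) -> float:
    """Invert the noise: if p is the observed 'yes' rate,
    the true rate under the scheme above is 2p - 0.5."""
    p = sum(reports) / len(reports)
    return 2 * p - 0.5

rng = random.Random(42)
true_population = [i < 300 for i in range(1000)]  # true rate: 30%
reports = [randomized_response(t, rng) for t in true_population]
est = estimate_rate(reports)  # close to 0.30, despite per-user noise
```

Real deployments tune the coin biases to hit a target differential-privacy budget; the inversion formula changes accordingly.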

Featured publications

SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
Devdatta Akhawe
Michael Bailey
Dan Boneh
Nicola Dell
Zakir Durumeric
Patrick Gage Kelley
Deepak Kumar
Damon McCoy
Sarah Meiklejohn
Thomas Ristenpart
Gianluca Stringhini
Abstract: We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks---such as toxic content and surveillance---that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.
" Quiet!" Reducing the Unwanted Interruptions of Notification Permission Prompts on Chrome
Balazs Engedy
Jud Porter
Kamila Hasanbega
Andrew Paseltiner
Hwi Lee
Edward Jung
PJ McLachlan
Jason James
30th USENIX Security Symposium (USENIX Security 21), USENIX Association, Vancouver, B.C. (2021)
Abstract: Push notifications are an extremely useful feature. In web browsers, they allow users to receive timely updates even if the website is not currently open. On Chrome, the feature has become extremely popular since its inception in 2015, but it is also the least likely to be accepted by users. Our telemetry shows that, although 74% of all permission prompts are about notifications, they are also the least likely to be granted, with only a 10% grant rate on desktop and a 21% grant rate on Android. In order to preserve the feature's utility for websites and to reduce unwanted interruptions for users, we designed and tested a new UI for the notification permission prompt on Chrome. In this paper, we conduct two large-scale studies of Chrome users' interactions with the notification permission prompt in the wild, in order to understand how users interact with such prompts and to evaluate a novel design that we introduced in Chrome version 80 in February 2020. Our main goals for the redesigned UI are to reduce the unwanted interruptions caused by notification permission prompts and the frequency at which users have to suppress them, and to make it easier to change a previously made choice. Our results, based on an A/B test using behavioral data from more than 40 million users who interacted with more than 100 million prompts on more than 70 thousand websites, show that the new UI is very effective at reducing unwanted interruptions and their frequency (up to 30% fewer unnecessary actions on the prompts), with a minimal impact (less than 5%) on grant rates, across all types of users and websites. We achieve these results thanks to a novel adaptive activation mechanism coupled with a block list of interrupting websites, which is derived from crowd-sourced telemetry from Chrome clients.
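The adaptive activation idea in the abstract above can be sketched as a small decision function: show the quieter prompt when crowd-sourced telemetry says a site rarely earns grants, or when this user keeps dismissing prompts. The signals and thresholds below are illustrative assumptions, not Chrome's actual logic:

```python
def should_use_quiet_prompt(site_grant_rate: float,
                            site_on_blocklist: bool,
                            recent_user_denials: int,
                            denial_threshold: int = 3,
                            grant_rate_threshold: float = 0.10) -> bool:
    """Use the unobtrusive prompt when the site is on the crowd-sourced
    block list, rarely earns grants, or the user habitually denies."""
    if site_on_blocklist:
        return True
    if site_grant_rate < grant_rate_threshold:
        return True
    return recent_user_denials >= denial_threshold

# A reputable site and an undecided user still get the full prompt:
full_ui = should_use_quiet_prompt(0.45, False, 0)       # False
# A low-grant-rate site gets the quiet prompt for everyone:
quiet_ui = should_use_quiet_prompt(0.02, False, 0)      # True
```

Combining a per-user signal with a per-site signal is what lets the mechanism cut interruptions without suppressing prompts on sites users actually want notifications from.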
Tracking Ransomware End-to-end
Danny Y. Huang
Maxwell Matthaios Aliapoulios
Vector Guo Li
Kylie McRoberts
Jonathan Levin
Kirill Levchenko
Alex C. Snoeren
Damon McCoy
IEEE Symposium on Security and Privacy (2018)
Abstract: Ransomware is a type of malware that encrypts the files of infected hosts and demands payment, often in a cryptocurrency such as bitcoin. In this paper, we create a measurement framework that we use to perform a large-scale, two-year, end-to-end measurement of ransomware payments, victims, and operators. By combining an array of data sources, including ransomware binaries, seed ransom payments, victim telemetry from infections, and a large database of bitcoin addresses annotated with their owners, we sketch the outlines of this burgeoning ecosystem and associated third-party infrastructure. In particular, we trace the financial transactions, from the moment victims acquire bitcoins, to when ransomware operators cash them out. We find that many ransomware operators cashed out using BTC-e, a now-defunct bitcoin exchange. In total we are able to track over $16 million in likely ransom payments made by 19,750 potential victims during a two-year period. While our study focuses on ransomware, our methods are potentially applicable to other cybercriminal operations that have similarly adopted bitcoin as their payment channel.
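At its core, tracing payments from seed ransom addresses to cash-out points means following outgoing edges in a transaction graph. A toy sketch (the addresses, amounts, and flat dictionary graph are all hypothetical; the paper does this over the full annotated blockchain):

```python
from collections import deque

# Toy transaction graph: address -> list of (next_address, amount_btc).
tx_graph = {
    "victim_1": [("ransom_wallet", 0.5)],
    "victim_2": [("ransom_wallet", 1.2)],
    "ransom_wallet": [("mixer_hop", 1.7)],
    "mixer_hop": [("exchange_deposit", 1.7)],
}

def trace_flows(graph, start):
    """Breadth-first walk over outgoing transactions from `start`,
    recording each (from, to, amount) hop exactly once per address."""
    hops, seen = [], {start}
    queue = deque([start])
    while queue:
        addr = queue.popleft()
        for nxt, amount in graph.get(addr, []):
            hops.append((addr, nxt, amount))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hops

path = trace_flows(tx_graph, "ransom_wallet")
# Follows the funds through the mixer hop to the exchange deposit.
```

Real tracing must additionally cluster addresses by common ownership and handle mixing heuristics, which is where the annotated address database comes in.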
Measuring HTTPS adoption on the web
Richard Barnes
April King
Chris Palmer
Chris Bentzel
USENIX Security (2017)
Abstract: HTTPS ensures that the Web has a base level of privacy and integrity. Security engineers, researchers, and browser vendors have long worked to spread HTTPS to as much of the Web as possible via outreach efforts, developer tools, and browser changes. How much progress have we made toward this goal of widespread HTTPS adoption? We gather metrics to benchmark the status and progress of HTTPS adoption on the Web in 2017. To evaluate HTTPS adoption from a user perspective, we collect large-scale, aggregate user metrics from two major browsers (Google Chrome and Mozilla Firefox). To measure HTTPS adoption from a Web developer perspective, we survey server support for HTTPS among top and long-tail websites. We draw on these metrics to gain insight into the current state of the HTTPS ecosystem.
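The user-perspective metric above, the fraction of page loads served over HTTPS, is a simple aggregate over telemetry. A minimal stand-in (the sample URLs are hypothetical; real data is aggregated browser telemetry, not a URL list):

```python
def https_adoption(page_loads) -> float:
    """Fraction of page loads whose URL scheme is HTTPS."""
    total = https = 0
    for url in page_loads:
        total += 1
        if url.startswith("https://"):
            https += 1
    return https / total if total else 0.0

sample = [
    "https://example.com/",
    "https://mail.example.com/inbox",
    "http://legacy.example.net/",
    "https://shop.example.org/cart",
]
rate = https_adoption(sample)  # 3 of 4 loads -> 0.75
```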
Abstract: In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March 2016--March 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.
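Checking whether exposed passwords match live accounts can be sketched as hashing normalized (email, password) pairs and intersecting the sets. This is a deliberately simplified illustration (the records are hypothetical, and real systems add salting, private set intersection, and strict handling of plaintext breach data):

```python
import hashlib

def normalize(password: str) -> str:
    """Case-fold and strip whitespace so trivial variants still match."""
    return password.strip().lower()

def hash_credential(email: str, password: str) -> str:
    """Hash the (email, normalized password) pair so comparisons
    never touch plaintext directly."""
    material = f"{email.lower()}:{normalize(password)}".encode()
    return hashlib.sha256(material).hexdigest()

def count_matches(breach_records, account_records) -> int:
    """How many breached credentials match a live account credential."""
    breached = {hash_credential(e, p) for e, p in breach_records}
    return sum(hash_credential(e, p) in breached for e, p in account_records)

breach = [("alice@example.com", "hunter2"), ("bob@example.com", "pa55w0rd")]
accounts = [("alice@example.com", "Hunter2 "), ("bob@example.com", "different")]
matches = count_matches(breach, accounts)  # alice's normalized password matches -> 1
```

The normalization step matters: the paper-style 7--25% match rates depend on how loosely "match" is defined.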
Scalable Private Learning with PATE
Ilya Mironov
Ananth Raghunathan
Kunal Talwar
Úlfar Erlingsson
International Conference on Learning Representations (ICLR) (2018)
Abstract: The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a "student" model the knowledge of an ensemble of "teacher" models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers' answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur lower differential-privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy (ε < 1.0).
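The core PATE aggregation step, noisy-max voting over teacher predictions, can be sketched as follows. The noise scale and toy vote counts are illustrative; the paper's selective mechanisms additionally withhold an answer when the noisy consensus is weak:

```python
import random
from collections import Counter

def laplace(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) as a random-sign exponential."""
    return rng.choice([-1.0, 1.0]) * rng.expovariate(1.0 / scale)

def noisy_aggregate(teacher_predictions, num_classes, noise_scale, rng):
    """Noisy-max vote: count teacher votes per class, perturb each count
    with Laplace noise, and return the class with the highest noisy count."""
    votes = Counter(teacher_predictions)
    noisy = [votes.get(c, 0) + laplace(noise_scale, rng)
             for c in range(num_classes)]
    return max(range(num_classes), key=lambda c: noisy[c])

rng = random.Random(0)
# 100 teachers unanimously predict class 3; strong consensus survives noise.
label = noisy_aggregate([3] * 100, num_classes=10, noise_scale=1.0, rng=rng)
```

The privacy intuition is visible here: any single teacher (hence any single training example) can shift a vote count by at most one, which the Laplace noise masks.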
Abstract: A great deal of research on the management of user data on smartphones via permission systems has revealed significant levels of user discomfort, lack of understanding, and lack of attention. The majority of these studies were conducted on Android devices before runtime permission dialogs were widely deployed. In this paper we explore how users make decisions with runtime dialogs on smartphones with Android 6.0 or higher. We employ an experience sampling methodology in order to ask users the reasons influencing their decisions immediately after they decide. We conducted a longitudinal survey with 157 participants over a 6 week period. We explore the grant and denial rates of permissions, overall and on a per permission type basis. Overall, our participants accepted 84% of the permission requests. We observe differences in the denial rates across permission types; these vary from 23% (for microphone) to 10% (for calendar). We find that one of the main reasons for granting or denying a permission request depends on users' expectations of whether or not an app should need a permission. A common reason for denying permissions is that users know they can change them later. Among the permissions granted, our participants said they were comfortable with 90% of those decisions, indicating that for 10% of grant decisions users may be consenting reluctantly. Interestingly, we found that women deny permissions twice as often as men.
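Per-permission denial rates like those reported above are simple aggregates over decision records. A minimal sketch with hypothetical data:

```python
def denial_rates(decisions):
    """Per-permission denial rate from (permission, granted) records."""
    totals, denials = {}, {}
    for permission, granted in decisions:
        totals[permission] = totals.get(permission, 0) + 1
        if not granted:
            denials[permission] = denials.get(permission, 0) + 1
    return {p: denials.get(p, 0) / totals[p] for p in totals}

log = [
    ("microphone", False), ("microphone", False), ("microphone", True),
    ("calendar", True), ("calendar", True),
]
rates = denial_rates(log)  # microphone: 2/3 denied, calendar: 0.0
```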
Stories from survivors: Privacy & security practices when coping with intimate partner abuse
Tara Matthews
Jill Palzkill Woelfer
Martin Shelton
Cori Manthorne
Elizabeth F. Churchill
CHI '17 Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA (2017), pp. 2189-2201
Abstract: We present a qualitative study of the digital privacy and security motivations, practices, and challenges of survivors of intimate partner abuse (IPA). This paper provides a framework for organizing survivors' technology practices and challenges into three phases: physical control, escape, and life apart. This three-phase framework combines technology practices with three phases of abuse to provide an empirically sound method for technology creators to consider how survivors of IPA can leverage new and existing technologies. Overall, our results suggest that the usability of and control over privacy and security functions should be or continue to be high priorities for technology creators seeking ways to better support survivors of IPA.
Abstract: Web browser warnings should help protect people from malware, phishing, and network attacks. Adhering to warnings keeps people safer online. Recent improvements in warning design have raised adherence rates, but they could still be higher. And prior work suggests many people still do not understand them. Thus, two challenges remain: increasing both comprehension and adherence rates. To dig deeper into user decision making and comprehension of warnings, we performed an experience sampling study of web browser security warnings, which involved surveying over 6,000 Chrome and Firefox users in situ to gather reasons for adhering or not to real warnings. We find these reasons are many and vary with context. Contrary to older prior work, we do not find a single dominant failure in modern warning design---like habituation---that prevents effective decisions. We conclude that further improvements to warnings will require solving a range of smaller contextual misunderstandings.

Highlighted work