Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
50 Shades of Support: A Device-Centric Analysis of Android Security Updates
Abbas Acar
Esteban Luques
Harun Oz
Ahmet Aris
Selcuk Uluagac
Network and Distributed System Security (NDSS) Symposium (2024)
Abstract
Android is by far the most popular OS, with over three billion active mobile devices. As in any software, uncovering vulnerabilities on Android devices and applying timely patches are both critical. The Android Open Source Project (AOSP) has initiated efforts to improve the traceability of security updates through Security Patch Levels (SPLs) assigned to devices. While this initiative provided better traceability for vulnerabilities, it has not entirely resolved the issues related to the timeliness and availability of security updates for end users. Recent studies on Android security updates have focused on the issue of delay during the security update roll-out, largely attributing it to factors related to fragmentation. However, these studies fail to capture the entire Android ecosystem, as they primarily examine flagship devices or do not paint a comprehensive picture of the Android device lifecycle because their datasets span only a short timeframe. To address this gap in the literature, we use a device-centric approach to analyze the security update behavior of Android devices. Our approach aims to understand the security update distribution behavior of OEMs (e.g., Samsung) by using a representative set of devices from each OEM, and to characterize the complete lifecycle of an average Android device. We obtained 367K official security update records from public sources, spanning from 2014 to 2023. Our dataset contains 599 unique devices from four major OEMs, used in 97 countries and associated with 109 carriers. We identify significant differences in the roll-out of security updates across OEMs, device models/types, and geographical regions. Our findings show that the reasons for delays in the roll-out of security updates are not limited to fragmentation but also involve OEM-specific factors. Our analysis also uncovers key issues that can be readily addressed, as well as exemplary practices that OEMs can adopt immediately.
Wear's my Data? Understanding the Cross-Device Runtime Permission Model in Wearables
Doguhan Yeke
Muhammad Ibrahim
Habiba Farukh
Abdullah Imran
Antonio Bianchi
Z. Berkay Celik
IEEE Security and Privacy (2024) (to appear)
Abstract
Wearable devices are becoming increasingly important, helping us stay healthy and connected. There are a variety of app-based wearable platforms that can be used to manage these devices. The apps on wearable devices often work with a companion app on users’ smartphones. The wearable device and the smartphone typically use two separate permission models that work synchronously to protect sensitive data. However, this design creates an opaque view of the management of permission-protected data, resulting in over-privileged data access without the user’s explicit consent. In this paper, we performed the first systematic analysis of the interaction between the Android and Wear OS permission models. Our analysis is two-fold. First, through taint analysis, we showed that cross-device flows of permission-protected data happen in the wild, demonstrating that 28 apps (out of the 150 we studied) on Google Play have sensitive data flows between the wearable app and its companion app. We found that these data flows occur without the users’ explicit consent, introducing the risk of violating user expectations. Second, we conducted an in-lab user study to assess users’ understanding of permissions when subject to cross-device communication (n = 63). We found that 66.7% of the users are unaware of the possibility of cross-device sensitive data flows, which impairs their understanding of permissions in the context of wearable devices and puts their sensitive data at risk. We also showed that users are vulnerable to a new class of attacks that we call cross-device permission phishing attacks on wearable devices. Lastly, we performed a preliminary study on other watch platforms (i.e., Apple’s watchOS, Fitbit, Garmin OS) and found that all these platforms suffer from similar privacy issues. As countermeasures for the potential privacy violations in cross-device apps, we suggest improvements in the system prompts and the permission model to enable users to make better-informed decisions, as well as on app markets to identify malicious cross-device data flows.
FP-Fed: Privacy-Preserving Federated Detection of Browser Fingerprinting
Meenatchi Sundaram Muthu Selva Annamalai
Emiliano De Cristofaro
Network and Distributed System Security (NDSS) Symposium (2024)
Abstract
Browser fingerprinting often provides an attractive alternative to third-party cookies for tracking users across the web. In fact, the increasing restrictions on third-party cookies placed by common web browsers and recent regulations like the GDPR may accelerate the transition. To counter browser fingerprinting, previous work proposed several techniques to detect its prevalence and severity. However, these rely on 1) centralized web crawls and/or 2) computationally intensive operations to extract and process signals (e.g., information-flow and static analysis).
To address these limitations, we present FP-Fed, the first distributed system for browser fingerprinting detection. Using FP-Fed, users can collaboratively train on-device models based on their real browsing patterns, without sharing their training data with a central entity, by relying on Differentially Private Federated Learning (DP-FL). To demonstrate its feasibility and effectiveness, we evaluate FP-Fed’s performance on a set of 18.3k popular websites with different privacy levels, numbers of participants, and features extracted from the scripts. Our experiments show that FP-Fed achieves reasonably high detection performance and can perform both training and inference efficiently, on-device, by only relying on runtime signals extracted from the execution trace, without requiring any resource-intensive operation.
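The abstract above leans on Differentially Private Federated Learning as its core building block. Purely as a hedged illustration of that building block (not FP-Fed's actual model, features, or protocol), a single DP federated round for a linear on-device classifier might be shaped like the sketch below; the clipping norm, noise multiplier, and logistic model are illustrative assumptions.

```python
import numpy as np

def dp_federated_round(global_w, client_datasets, clip_norm=1.0,
                       noise_multiplier=1.0, lr=0.1):
    """One round of differentially private federated averaging for a linear
    fingerprinting classifier (illustrative sketch, not the FP-Fed system)."""
    deltas = []
    for X, y in client_datasets:                        # each client's local features/labels
        w = global_w.copy()
        preds = 1.0 / (1.0 + np.exp(-X @ w))            # logistic predictions
        grad = X.T @ (preds - y) / len(y)               # local gradient
        w -= lr * grad                                  # one local SGD step
        delta = w - global_w
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip_norm / (norm + 1e-12))  # clip for bounded sensitivity
        deltas.append(delta)
    # Server side: average the clipped updates and add Gaussian noise for central DP.
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(deltas), size=global_w.shape)
    return global_w + np.mean(deltas, axis=0) + noise
```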
Abstract
Rust is a general-purpose programming language designed for performance and safety. Unrecoverable errors (e.g., Divide by Zero) in Rust programs are critical, as they signal bad program states and terminate programs abruptly. Previous work has contributed to utilizing KLEE, a symbolic execution engine, to verify that a program would not panic. However, it is difficult for engineers who lack domain expertise to write test code correctly. Moreover, the effectiveness of KLEE in finding panics in production Rust code has not been evaluated. We created an approach, called PanicCheck, to hide the complexity of verifying Rust programs with KLEE. Using PanicCheck, engineers only need to annotate the function-to-verify with #[panic_check]. The annotation guides PanicCheck to generate test code, compile the function together with tests, and execute KLEE for verification. After applying PanicCheck to 21 open-source and 2 closed-source projects, we found 61 test inputs that triggered panics; 60 of the 61 panics have been addressed by developers so far. Our research shows promising verification results by KLEE, while revealing technical challenges in using KLEE. Our experience will shed light on future practice and research in program verification.
AI-powered patching: the future of automated vulnerability fixes
Jan Keller
Jan Nowakowski
Google Security Engineering Technical Report (2024) (to appear)
Abstract
As AI continues to advance at rapid speed, so has its ability to unearth hidden security vulnerabilities in all types of software. Every bug uncovered is an opportunity to patch and strengthen code—but as detection continues to improve, we need to be prepared with new automated solutions that bolster our ability to fix those bugs. That’s why our Secure AI Framework (SAIF) includes a fundamental pillar addressing the need to “automate defenses to keep pace with new and existing threats.”
This paper shares lessons from our experience leveraging AI to scale our ability to fix bugs, specifically those found by sanitizers in C/C++, Java, and Go code. By automating a pipeline to prompt Large Language Models (LLMs) to generate code fixes for human review, we have harnessed our Gemini model to successfully fix 15% of sanitizer bugs discovered during unit tests, resulting in hundreds of bugs patched. Given the large number of sanitizer bugs found each year, this seemingly modest success rate will, over time, save significant engineering effort. We expect this success rate to continually improve and anticipate that LLMs can be used to fix bugs in various languages across the software development lifecycle.
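The report does not reproduce the pipeline itself; purely as a hedged outline of the idea (prompt an LLM with a sanitizer finding, keep only candidate patches that still build and pass tests, and route those to human review), a sketch might look like the following, where `llm_generate` and `run_tests` are hypothetical hooks rather than real APIs.

```python
def build_prompt(sanitizer_report: str, source_snippet: str) -> str:
    """Assemble an LLM prompt from a sanitizer finding and the affected code
    (illustrative; the paper's actual prompt templates are not reproduced here)."""
    return (
        "The following sanitizer error was reported:\n"
        f"{sanitizer_report}\n\n"
        "Affected code:\n"
        f"{source_snippet}\n\n"
        "Propose a minimal patch that fixes the error without changing behavior."
    )

def propose_fixes(bugs, llm_generate, run_tests):
    """For each sanitizer bug, ask the model for a patch and keep only candidates
    that build and pass unit tests; everything kept still goes to human review."""
    candidates = []
    for bug in bugs:
        prompt = build_prompt(bug["report"], bug["snippet"])
        patch = llm_generate(prompt)          # hypothetical model call
        if run_tests(bug["target"], patch):   # hypothetical build-and-test hook
            candidates.append((bug, patch))
    return candidates
```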
Developer Ecosystems for Software Safety
ACM Queue, 22 (2024), 73–99
Abstract
This paper reflects on work at Google over the past decade to address common types of software safety and security defects. Our experience has shown that software safety is an emergent property of the software and tooling ecosystem it is developed in and the production environment into which it is deployed. Thus, to effectively prevent common weaknesses at scale, we need to shift-left the responsibility for ensuring safety and security invariants to the end-to-end developer ecosystem, that is, programming languages, software libraries, application frameworks, build and deployment tooling, the production platform and its configuration surfaces, and so forth.
Doing so is practical and cost effective when developer ecosystems are designed with application archetypes in mind, such as web or mobile apps: The design of the developer ecosystem can address threat model aspects that apply commonly to all applications of the respective archetype, and investments to ensure safety invariants at the ecosystem level amortize across many applications.
Applying secure-by-design principles to developer ecosystems at Google has achieved drastic reduction and in some cases near-zero residual rates of common classes of defects, across hundreds of applications being developed by thousands of developers.
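One concrete instance of moving a safety invariant into the developer ecosystem is a library type that can only hold safely constructed markup, so injection bugs become unrepresentable at the sink. The sketch below is a minimal Python illustration of that pattern under stated assumptions; it is not Google's actual safe-HTML libraries.

```python
import html

_BUILDER_TOKEN = object()  # only trusted builders in this module hold the token

class SafeHtml:
    """A value that is safe to render: it can only be created by escaping
    untrusted text (or by other trusted builders added to this module)."""
    def __init__(self, markup: str, _token=None):
        if _token is not _BUILDER_TOKEN:
            raise TypeError("use SafeHtml.from_text() instead of the constructor")
        self._markup = markup

    @classmethod
    def from_text(cls, text: str) -> "SafeHtml":
        return cls(html.escape(text), _token=_BUILDER_TOKEN)

    def __str__(self) -> str:
        return self._markup

def render_comment(comment: str) -> str:
    # The sink interpolates a SafeHtml value, so attacker-controlled text is
    # escaped by construction rather than by ad-hoc checks at every call site.
    return f"<div class='comment'>{SafeHtml.from_text(comment)}</div>"
```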
Secure by Design at Google
Google Security Engineering (2024)
Abstract
This whitepaper provides an overview of Google's approach to secure design.
Abstract
2022 marked the 50th anniversary of memory safety vulnerabilities, first reported by Anderson et al. Half a century later, we are still dealing with memory safety bugs despite substantial investments to improve memory unsafe languages.
Like others', Google’s data and internal vulnerability research show that memory safety bugs are widespread and one of the leading causes of vulnerabilities in memory-unsafe codebases. Those vulnerabilities endanger end users, our industry, and the broader society.
At Google, we have decades of experience addressing, at scale, large classes of vulnerabilities that were once as prevalent as memory safety issues. Based on this experience, we expect that high-assurance memory safety can only be achieved via a Secure-by-Design approach centered around comprehensive adoption of languages with rigorous memory safety guarantees.
We see no realistic path for an evolution of C++ into a language with rigorous memory safety guarantees that include temporal safety. As a consequence, we are considering a gradual transition of C++ code at Google towards other languages that are memory safe.
Given the large volume of pre-existing C++, we believe it is nonetheless necessary to improve the safety of C++ to the extent practicable. We are considering transitioning to a safer C++ subset, augmented with hardware security features like MTE.
Website Data Transparency in the Browser
Sebastian Zimmeck
Daniel Goldelman
Owen Kaplan
Logan Brown
Justin Casler
Judeley Jean-Charles
Joe Champeau
24th Privacy Enhancing Technologies Symposium (PETS 2024) (to appear)
Abstract
Data collection by websites and their integrated third parties is often not transparent. We design privacy interfaces for the browser to help people understand who is collecting which data from them. In a proof-of-concept browser extension, Privacy Pioneer, we implement a privacy popup, a privacy history interface, and a watchlist to notify people when their data is collected. For detecting location data collection, we develop a machine learning model based on TinyBERT, which reaches an average F1 score of 0.94. We supplement our model with deterministic methods to detect trackers, collection of personal data, and other monetization techniques. In a usability study with 100 participants, 82% found Privacy Pioneer easy to understand and 90% found it useful, indicating the value of privacy interfaces directly integrated into the browser.
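As a rough sense of how the deterministic side of such detection can complement a learned classifier (the TinyBERT model itself is not reproduced here), the sketch below combines a simple coordinate-pattern check with a placeholder ML score; the regex, threshold, and function names are illustrative assumptions, not Privacy Pioneer's code.

```python
import re

# Deterministic signal: latitude/longitude pairs appearing in an outgoing request body.
COORD_RE = re.compile(r"-?\d{1,2}\.\d{3,}\s*,\s*-?\d{1,3}\.\d{3,}")

def flag_location_collection(request_body: str, ml_score: float,
                             threshold: float = 0.5) -> bool:
    """Flag a request as collecting location data if either the deterministic
    pattern matches or the ML classifier is confident. `ml_score` stands in for
    the output of a TinyBERT-style text classifier; wiring up the real model is
    out of scope for this sketch."""
    return bool(COORD_RE.search(request_body)) or ml_score >= threshold
```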
(In)Security of File Uploads in Node.js
Harun Oz
Abbas Acar
Ahmet Aris
Amin Kharraz
Selcuk Uluagac
The Web Conference (WWW) (2024)
Abstract
File upload is a critical feature incorporated by a myriad of web applications to enable users to share and manage their files conveniently. It has been used in many useful services such as file sharing and social media. While file upload is an essential component of web applications, the lack of rigorous checks on the file name, type, and content of the uploaded files can result in security issues, often referred to as Unrestricted File Upload (UFU). In this study, we analyze the (in)security of popular file upload libraries and real-world applications in the Node.js ecosystem. To automate our analysis, we propose NodeSec, a tool designed to analyze file upload insecurities in Node.js applications and libraries. NodeSec generates unique payloads and thoroughly evaluates the application’s file upload security against 13 distinct UFU-type attacks. Utilizing NodeSec, we analyze the most popular file upload libraries and real-world applications in the Node.js ecosystem. Our results reveal that some real-world web applications are vulnerable to UFU attacks, and we disclose serious security bugs in file upload libraries. As of this writing, we have received 19 CVEs and two US-CERT cases for the security issues that we reported. Our findings provide strong evidence that the dynamic features of Node.js applications introduce security shortcomings and that web developers should be cautious when implementing file upload features in their applications.
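The UFU issues above stem from missing checks on file name, type, and content. As a language-agnostic illustration of those checks (sketched in Python for consistency with the other examples on this page; the paper itself targets Node.js libraries, and these are not the exact checks NodeSec exercises):

```python
import os
import secrets

# Allowed types, keyed by extension, with the magic bytes their content must start with.
ALLOWED = {
    ".png": (b"\x89PNG\r\n\x1a\n",),
    ".jpg": (b"\xff\xd8\xff",),
    ".pdf": (b"%PDF-",),
}

def store_upload(client_filename: str, content: bytes, upload_dir: str) -> str:
    """Illustrative upload validation: restrict extensions, check content magic
    bytes against the claimed type, ignore the client-supplied path, and write
    under a server-chosen random name."""
    ext = os.path.splitext(os.path.basename(client_filename))[1].lower()
    if ext not in ALLOWED:
        raise ValueError("file type not allowed")                   # name/extension check
    if not any(content.startswith(magic) for magic in ALLOWED[ext]):
        raise ValueError("content does not match claimed type")     # content check
    safe_name = secrets.token_hex(16) + ext                         # never reuse the client name
    dest = os.path.join(upload_dir, safe_name)
    with open(dest, "wb") as f:
        f.write(content)
    return dest
```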
Securing the AI Software Supply Chain
Isaac Hepworth
Kara Olive
Kingshuk Dasgupta
Michael Le
Mark Lodato
Mihai Maruseac
Sarah Meiklejohn
Shamik Chaudhuri
Tehila Minkus
Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043 (2024)
Abstract
As AI-powered features gain traction in software applications, we see many of the same problems we’ve faced with traditional software—but at an accelerated pace. The threat landscape continues to expand as AI is further integrated into everyday products, so we can expect more attacks. Given the expense of building models, there is a clear need for supply chain solutions.
This paper explains our approach to securing our AI supply chain using provenance information and provides guidance for other organizations. Although there are differences between traditional and AI development processes and risks, we can build on our work over the past decade using Binary Authorization for Borg (BAB), Supply-chain Levels for Software Artifacts (SLSA), and next-generation cryptographic signing solutions via Sigstore, and adapt these to the AI supply chain without reinventing the wheel. Depending on internal processes and platforms, each organization’s approach to AI supply chain security will look different, but the focus should be on areas where it can be improved in a relatively short time.
Readers should note that the first part of this paper provides a broad overview of “Development lifecycles for traditional and AI software”. Then we delve specifically into AI supply chain risks, and explain our approach to securing our AI supply chain using provenance information. More advanced practitioners may prefer to go directly to the sections on “AI supply chain risks,” “Controls for AI supply chain security,” or even the “Guidance for practitioners” section at the end of the paper, which can be adapted to the needs of any organization.
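As a small, hedged illustration of the provenance-checking idea the paper builds on (the real controls rely on SLSA attestation formats and Sigstore signature verification, neither of which is reproduced here), a consumer-side check before loading a model artifact could be shaped like the sketch below, with a deliberately simplified attestation layout.

```python
import hashlib
import json

def verify_model_provenance(model_path: str, attestation_path: str,
                            trusted_builders: set) -> bool:
    """Check that a model artifact matches the digest recorded in its provenance
    and that the provenance names a trusted builder. Signature verification
    (e.g., via Sigstore) is assumed to have happened before this step; the
    attestation fields here are simplified, not the full SLSA schema."""
    with open(attestation_path) as f:
        att = json.load(f)
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    recorded = att.get("subject_sha256")   # simplified field name, not the real schema
    builder = att.get("builder_id")
    return digest == recorded and builder in trusted_builders
```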
Abstract
In this paper, we explore the challenges of ensuring security and privacy for users from diverse demographic backgrounds. We propose a threat modeling approach to identify potential risks and countermeasures for product inclusion in security and privacy. We discuss various factors that can affect a user's ability to achieve a high level of security and privacy, including low-income demographics, poor connectivity, shared device usage, ML fairness, etc. We present results from a global security and privacy user experience survey and discuss the implications for product developers. Our work highlights the need for a more inclusive approach to security and privacy and provides a framework for researchers and practitioners to consider when designing products and services for a diverse range of users.
Assessing Web Fingerprinting Risk
Robert Busa-Fekete
Antonio Sartori
Proceedings of the ACM Web Conference (WWW 2024)
Abstract
Modern Web APIs allow developers to provide extensively customized experiences for website visitors, but the richness of the device information they provide also makes them vulnerable to being abused by malign actors to construct browser fingerprints: device-specific identifiers that enable covert tracking of users even when cookies are disabled.
Previous research has established entropy, a measure of information, as the key metric for quantifying fingerprinting risk. Earlier studies that estimated the entropy of Web APIs were based on data from a single website or were limited to an extremely small sample of clients. They also analyzed each Web API separately and then summed their entropies to quantify overall fingerprinting risk, an approach that can lead to gross overestimates.
We provide the first study of browser fingerprinting which addresses the limitations of prior work. Our study is based on actual visited pages and Web API function calls reported by tens of millions of real Chrome browsers in-the-wild. We accounted for the dependencies and correlations among Web APIs, which is crucial for obtaining more realistic entropy estimates. We also developed a novel experimental design that accurately estimates entropy while never observing too much information from any single user. Our results provide an understanding of the distribution of entropy for different website categories, confirm the utility of entropy as a fingerprinting proxy, and offer a method for evaluating browser enhancements which are intended to mitigate fingerprinting.
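The overestimation point is easy to see with a toy calculation: when two APIs are correlated, the sum of their marginal entropies exceeds the joint entropy that actually bounds the identifying information. The sketch below uses synthetic values and plain empirical entropy, not the paper's estimator.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two perfectly correlated "API signals" (e.g., screen width and a derived value):
api_a = [1920, 1920, 1366, 1366, 2560, 1920, 1366, 2560]
api_b = [w // 2 for w in api_a]               # fully determined by api_a

summed = entropy(api_a) + entropy(api_b)      # naive per-API sum
joint = entropy(list(zip(api_a, api_b)))      # accounts for the correlation

print(f"sum of marginal entropies: {summed:.2f} bits")   # ~3.1 bits
print(f"joint entropy:             {joint:.2f} bits")    # ~1.6 bits
```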
Improved Communication-Privacy Trade-offs in L2 Mean Estimation under Streaming Differential Privacy
Wei-Ning Chen
Albert No
Sewoong Oh
International Conference on Machine Learning (ICML) (2024)
Abstract
We study $L_2$ mean estimation under central differential privacy and communication constraints, and address two key challenges: firstly, existing mean estimation schemes that simultaneously handle both constraints are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin's representation to adapt to $L_2$ geometry, resulting in suboptimal leading constants in mean square errors (MSEs); secondly, schemes achieving order-optimal communication-privacy trade-offs do not extend seamlessly to streaming differential privacy (DP) settings (e.g., tree aggregation or matrix factorization), rendering them incompatible with DP-FTRL type optimizers.
In this work, we tackle these issues by introducing a novel privacy accounting method for the sparsified Gaussian mechanism that incorporates the randomness inherent in sparsification into the DP noise. Unlike previous approaches, our accounting algorithm directly operates in $L_2$ geometry, yielding MSEs that quickly converge to those of the uncompressed Gaussian mechanism. Additionally, we extend the sparsification scheme to the matrix factorization framework under streaming DP and provide a precise accountant tailored for DP-FTRL type optimizers. Empirically, our method demonstrates at least a 100x compression improvement for DP-SGD across various FL tasks.
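To fix intuition for what a sparsified Gaussian mechanism looks like (the paper's contribution, its privacy accounting in $L_2$ geometry and its integration with streaming DP and DP-FTRL, is not reproduced here), a minimal sketch of the estimator's shape is below; the keep probability, clipping norm, and noise scale are illustrative.

```python
import numpy as np

def sparsified_gaussian_mean(vectors, keep_prob=0.1, clip_norm=1.0,
                             sigma=1.0, rng=None):
    """Mean estimate where each client keeps a random subset of coordinates
    (rescaled to stay unbiased) and the server adds Gaussian noise. Sketch of
    the mechanism's shape only; not the paper's accounting or MF extension."""
    rng = rng or np.random.default_rng()
    d = vectors[0].shape[0]
    total = np.zeros(d)
    for v in vectors:
        v = v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12))  # L2 clipping
        mask = rng.random(d) < keep_prob                           # random sparsification
        total += np.where(mask, v / keep_prob, 0.0)                # unbiased rescaling
    noise = rng.normal(0.0, sigma * clip_norm, size=d)             # Gaussian noise on the sum
    return (total + noise) / len(vectors)
```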
Abstract
Service providers of large language model (LLM) applications collect user instructions in the wild and use them in further aligning LLMs with users’ intentions. These instructions, which potentially contain sensitive information, are annotated by human workers in the process. This poses a new privacy risk not addressed by the typical private optimization. To this end, we propose using synthetic instructions to replace real instructions in data annotation and model fine-tuning. Formal differential privacy is guaranteed by generating those synthetic instructions using privately fine-tuned generators. Crucial in achieving the desired utility is our novel filtering algorithm that matches the distribution of the synthetic instructions to that of the real ones. In both supervised fine-tuning and reinforcement learning from human feedback, our extensive experiments demonstrate the high utility of the final set of synthetic instructions by showing comparable results to real instructions. In supervised fine-tuning, models trained with private synthetic instructions outperform leading open-source models such as Vicuna.
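The filtering step is what makes the synthetic set usable; purely as an illustration of distribution matching (not the paper's algorithm), one could resample synthetic instructions so their cluster histogram tracks a target histogram computed, with DP, over the real instructions, assuming cluster assignments from some embedding model are already available.

```python
import random
from collections import defaultdict

def resample_to_match(synthetic, synthetic_clusters, real_cluster_hist, k):
    """Pick k synthetic instructions so their cluster histogram matches a target
    histogram computed (with DP, elsewhere) over real instructions.
    Illustrative resampling only; not the paper's filtering algorithm."""
    by_cluster = defaultdict(list)
    for text, c in zip(synthetic, synthetic_clusters):
        by_cluster[c].append(text)
    total = sum(real_cluster_hist.values())
    selected = []
    for c, count in real_cluster_hist.items():
        quota = round(k * count / total)                  # proportional quota per cluster
        pool = by_cluster.get(c, [])
        selected.extend(random.sample(pool, min(quota, len(pool))))
    return selected
```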