Daniel Ramage
Daniel has been at Google since 2011, focusing on federated learning and privacy technologies. Additional publications are available on Google Scholar.
Building privacy-preserving systems for machine learning and data science on decentralized data.

Authored Publications
Context-Aware Local Differential Privacy
Jayadev Acharya
Ziteng Sun
International Conference on Machine Learning (ICML) (2020)
Local differential privacy (LDP) is a strong notion of privacy for individual users that often comes at the expense of a significant drop in utility. The classical definition of LDP assumes that all elements in the data domain are equally sensitive. However, in many applications, some symbols are more sensitive than others. This work proposes a context-aware framework of local differential privacy that allows a privacy designer to incorporate the application's context into the privacy definition. For binary data domains, we provide a universally optimal privatization scheme and highlight its connections to Warner's randomized response (RR) and Mangat's improved response. Motivated by geolocation and web search applications, for k-ary data domains, we consider two special cases of context-aware LDP: block-structured LDP and high-low LDP. We study discrete distribution estimation and provide communication-efficient, sample-optimal schemes and information-theoretic lower bounds for both models. We show that using contextual information can require fewer samples than classical LDP to achieve the same accuracy.
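For intuition on the binary case, here is a minimal runnable sketch (our illustration, not the paper's scheme or code) contrasting Warner's randomized response with a Mangat-style variant in which only the symbol 1 is sensitive; the function names, epsilon value, and simulation setup are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def warner_rr(x, eps):
    """Classical eps-LDP randomized response: report the true bit with
    probability e^eps / (1 + e^eps), otherwise flip it."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    return x if rng.random() < keep else 1 - x

def mangat_rr(x, eps):
    """Context-aware variant where only x = 1 is sensitive: users with
    x = 1 always report 1; users with x = 0 report 1 with probability
    q = e^-eps. Then P(1 | x=1) / P(1 | x=0) = e^eps, so the sensitive
    symbol keeps plausible deniability, while a report of 0 only ever
    reveals the non-sensitive symbol."""
    if x == 1:
        return 1
    return 1 if rng.random() < np.exp(-eps) else 0

def estimate_pi_mangat(reports, eps):
    """Unbiased estimate of pi = P(x=1): P(report=1) = pi + (1-pi)*q."""
    q = np.exp(-eps)
    return (reports.mean() - q) / (1.0 - q)

def estimate_pi_warner(reports, eps):
    """Unbiased estimate of pi under Warner's scheme."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    return (reports.mean() - (1.0 - keep)) / (2.0 * keep - 1.0)

n, pi, eps = 100_000, 0.3, 1.0
data = (rng.random(n) < pi).astype(int)
warner = np.array([warner_rr(x, eps) for x in data])
mangat = np.array([mangat_rr(x, eps) for x in data])
print("warner estimate:", estimate_pi_warner(warner, eps))
print("mangat estimate:", estimate_pi_mangat(mangat, eps))
```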
Generative Models for Effective ML on Private, Decentralized Datasets
8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020, OpenReview.net
To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact. Manual inspection of raw data—of representative samples, of outliers, of misclassifications—is an essential tool in a) identifying and fixing problems in the data, b) generating new modeling hypotheses, and c) assigning or refining human-provided labels. However, manual data inspection is risky for privacy-sensitive datasets, such as those representing the behavior of real-world individuals. Furthermore, manual data inspection is impossible in the increasingly important setting of federated learning, where raw examples are stored at the edge and the modeler may only access aggregated outputs such as metrics or model parameters. This paper demonstrates that generative models—trained using federated methods and with formal differential privacy guarantees—can be used effectively to debug data issues even when the data cannot be directly inspected. We explore these methods in applications to text with differentially private federated RNNs and to images using a novel algorithm for differentially private federated GANs.
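To make the debugging workflow concrete, a minimal sketch follows; the generator is a canned stub standing in for a model trained with differentially private federated methods, and the bug signature (un-decoded HTML entities) and sample strings are hypothetical.

```python
import random
import re
from collections import Counter

random.seed(0)

# Stand-in for sampling from a generative model trained with DP
# federated methods; real code would call the trained model's sampler.
_CANNED = [
    "see you tomorrow at the game",
    "thanks &amp; see you soon",       # un-decoded entity: pipeline bug?
    "what time is the meeting",
    "tom &amp; jerry marathon tonight",
]

def sample_sentence():
    return random.choice(_CANNED)

# The modeler never sees raw user text, so instead draw many synthetic
# samples and count suspected preprocessing artifacts.
entity = re.compile(r"&\w+;")
hits = Counter()
for _ in range(10_000):
    for match in entity.findall(sample_sentence()):
        hits[match] += 1
print(hits.most_common())  # a spike in '&amp;' would flag an escaping bug
```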
Advances and Open Problems in Federated Learning
Brendan Avent
Aurélien Bellet
Mehdi Bennis
Arjun Nitin Bhagoji
Graham Cormode
Rachel Cummings
Rafael G.L. D'Oliveira
Salim El Rouayheb
David Evans
Josh Gardner
Adrià Gascón
Phillip B. Gibbons
Marco Gruteser
Zaid Harchaoui
Chaoyang He
Lie He
Zhouyuan Huo
Justin Hsu
Martin Jaggi
Tara Javidi
Gauri Joshi
Mikhail Khodak
Jakub Konečný
Aleksandra Korolova
Farinaz Koushanfar
Sanmi Koyejo
Tancrède Lepoint
Yang Liu
Prateek Mittal
Richard Nock
Ayfer Özgür
Rasmus Pagh
Ramesh Raskar
Dawn Song
Weikang Song
Sebastian U. Stich
Ziteng Sun
Florian Tramèr
Praneeth Vepakomma
Jianyu Wang
Li Xiong
Qiang Yang
Felix X. Yu
Han Yu
arXiv (2019)
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and mitigates many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents a comprehensive list of open problems and challenges.
Federated Learning for Mobile Keyboard Prediction
Chloé M Kiddon
Hubert Eichner
(2019)
We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices.
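For readers unfamiliar with Federated Averaging, here is a minimal sketch of one round on a toy linear model; the model, synthetic data, and hyperparameters are stand-ins for the paper's on-device language model, not a reproduction of it.

```python
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=1, batch_size=32, seed=0):
    """Local SGD on one client's examples (linear model, squared loss),
    starting from the current global weights."""
    w = w.copy()
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            b = order[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w, len(X)

def fedavg_round(global_w, clients):
    """One FederatedAveraging round: clients train locally, the server
    averages the returned models weighted by local example counts."""
    results = [client_update(global_w, X, y) for X, y in clients]
    total = sum(n for _, n in results)
    return sum(n * w for w, n in results) / total

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w + 0.1 * rng.normal(size=200))
           for X in (rng.normal(size=(200, 2)) for _ in range(10))]

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # converges toward true_w without pooling client data
```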
Federated Evaluation of On-device Personalization
Chloé M Kiddon
Hubert Eichner
Kangkang Wang
(2019)
Federated learning is a distributed, on-device computation framework that enables training global models without exporting sensitive user data to servers. In this work, we describe methods to extend the federation framework to evaluate strategies for personalization of global models. We present tools to analyze the effects of personalization and evaluate conditions under which personalization yields desirable models. We report on our experiments personalizing a language model for a virtual keyboard for smartphones with a population of tens of millions of users. We show that a significant fraction of users benefit from personalization.
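A minimal sketch of this kind of federated personalization evaluation, under our own simplifying assumptions (a linear model and a per-client tune/eval split standing in for the paper's keyboard language model metrics):

```python
import numpy as np

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def personalize(global_w, X, y, lr=0.05, steps=100):
    """Fine-tune a copy of the global model on one client's local data."""
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(X)
    return w

def personalization_eval(global_w, clients):
    """Each client tunes on half its data, evaluates both models on the
    held-out half, and reports only the metric delta; the server sees
    aggregate statistics, never raw examples."""
    deltas = []
    for X, y in clients:
        h = len(X) // 2
        w_p = personalize(global_w, X[:h], y[:h])
        deltas.append(mse(global_w, X[h:], y[h:]) - mse(w_p, X[h:], y[h:]))
    d = np.array(deltas)
    return {"mean_improvement": float(d.mean()),
            "frac_improved": float((d > 0).mean())}

# Heterogeneous clients: each has a slightly different "true" model, so
# personalization should help most of them.
rng = np.random.default_rng(2)
clients = []
for _ in range(50):
    w_i = np.array([2.0, -1.0]) + 0.5 * rng.normal(size=2)
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ w_i + 0.1 * rng.normal(size=100)))

print(personalization_eval(np.array([2.0, -1.0]), clients))
```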
Towards Federated Learning at Scale: System Design
Hubert Eichner
Wolfgang Grieskamp
Dzmitry Huba
Vladimir Ivanov
Chloé M Kiddon
Jakub Konečný
Stefano Mazzocchi
Timon Van Overveldt
David Petrou
Jason Roselander
SysML 2019
Federated Learning is a distributed machine learning approach that enables training on a large corpus of data that never needs to leave user devices. Over the last two years we have built a scalable production system for FL. In this paper, we report on the resulting high-level design, sketch the challenges and their solutions, and touch on open problems and future directions.
Learning Differentially Private Recurrent Language Models
Kunal Talwar
Li Zhang
International Conference on Learning Representations (ICLR) (2018)
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
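A sketch in the spirit of that approach, simplified and with the privacy accounting omitted; the clip norm and noise multiplier below are illustrative values, not the paper's settings.

```python
import numpy as np

def dp_fedavg_round(global_w, user_updates, clip_norm=1.0,
                    noise_multiplier=1.0, rng=None):
    """One 'large step' update with user-level protection: clip each
    user's model delta to bound any single user's influence, average,
    then add Gaussian noise scaled to the clip norm. Tracking the
    cumulative (epsilon, delta) spend is left to a privacy accountant."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = [d * min(1.0, clip_norm / max(np.linalg.norm(d), 1e-12))
               for d in user_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(user_updates)
    return global_w + mean + rng.normal(scale=sigma, size=mean.shape)

# Usage: aggregate fake per-user deltas of widely varying magnitude;
# clipping caps the influence of the outlier users.
rng = np.random.default_rng(3)
updates = [rng.normal(scale=s, size=4) for s in (0.1, 0.5, 3.0, 10.0)]
print(dp_fedavg_round(np.zeros(4), updates))
```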
Applied Federated Learning: Improving Google Keyboard Query Suggestions
Galen Andrew
Hubert Eichner
Haicheng Sun
Wei Li
Nicholas Kong
(2018)
Federated learning is a distributed form of machine learning where both the training data and model training are decentralized. In this paper, we use federated learning in a commercial, global-scale setting to train, evaluate, and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data. We describe our observations in federated training, compare metrics to live deployments, and present resulting quality increases. In all, we demonstrate how federated learning can be applied end-to-end to both improve user experiences and enhance user privacy.