Google Research

User-Level Private Learning via Correlated Sampling


Most works in learning with differential privacy (DP) have focused on the case where each user has a single sample. In this work, we consider the setting where each user receives $m$ samples and the privacy protection is enforced at the level of each user's data. We show that, in this setting, we may learn with a much smaller number of users. Specifically, we show that, as long as each user receives sufficiently many samples, we can learn any privately learnable class via an $(\eps, \delta)$-DP algorithm using only $O(\log(1/\delta)/\eps)$ users. For $\eps$-DP algorithms, we show that we can learn using only $O_{\eps}(d)$ users even in the local model, where $d$ is the probabilistic representation dimension. In both cases, we show a nearly-matching lower bound on the number of users required.

A crucial component of our results is a generalization of \emph{global stability}~\cite{BunLM20} that allows the use of public randomness. Under this relaxed notion, we employ a correlated sampling strategy to show that global stability can be boosted to be arbitrarily close to one, at a polynomial expense in the number of samples.
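To give intuition for the correlated sampling step, the following is a minimal illustrative sketch (not the paper's exact construction) of the classic rejection-based correlated sampling scheme over a finite domain. Two parties holding close distributions $p$ and $q$ scan the same public stream of (candidate, threshold) pairs and each accepts the first pair whose threshold falls below its own distribution's probability mass on that candidate; close distributions then produce the same output with high probability. All function names and parameters here are hypothetical.

```python
import random

def shared_stream(domain, n, seed):
    """Public randomness: a seeded stream of (candidate, threshold) pairs."""
    rng = random.Random(seed)
    return [(rng.choice(domain), rng.random()) for _ in range(n)]

def correlated_sample(dist, stream):
    """Accept the first shared pair (x, u) with u < dist[x].

    Every party scanning the same stream applies the same rule to its own
    distribution, so parties with similar distributions usually accept the
    same pair and hence output the same value.
    """
    for x, u in stream:
        if u < dist.get(x, 0.0):
            return x
    raise RuntimeError("shared stream exhausted; increase n")

# Two distributions at total variation distance 0.1.
domain = ["a", "b", "c"]
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.4, "c": 0.2}

# The same party (same distribution, same stream) is fully reproducible.
stream = shared_stream(domain, 500, seed=0)
assert correlated_sample(p, stream) == correlated_sample(p, stream)

# Different parties agree on most shared streams when p and q are close.
agree = sum(
    correlated_sample(p, shared_stream(domain, 500, seed=s))
    == correlated_sample(q, shared_stream(domain, 500, seed=s))
    for s in range(1000)
)
print(agree / 1000)  # empirically well above 0.8 for TV distance 0.1
```

The guarantee behind this sketch is that the agreement probability is at least $(1-\delta)/(1+\delta)$ when the two distributions are at total variation distance $\delta$, which is what makes the boosting of (public-randomness) global stability possible.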
