Shuffle Private Stochastic Convex Optimization
Abstract
In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy. Prior work in this model has largely focused on protocols that use a single round of communication to compute algorithmic primitives like means, histograms, and counts. In this work, we present interactive shuffle protocols for stochastic convex optimization. Our optimization protocols rely on a new noninteractive protocol for summing vectors of bounded $\ell_2$ norm. By combining this sum subroutine with accelerated gradient descent, Nesterov's smoothing method, and a careful alignment of noise level and mini-batch size, we obtain loss guarantees that significantly improve on those of the local model and sometimes match those of the central model.
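To make the high-level recipe concrete, here is a minimal, hypothetical sketch of how a bounded-norm vector-sum primitive can drive private mini-batch optimization. It is not the paper's protocol: the shuffle sum is modeled simply as the true sum of clipped vectors plus Gaussian noise (rather than shuffled randomized messages), and plain mini-batch SGD stands in for the accelerated, smoothed method; all names and parameter values are illustrative.

```python
import numpy as np

def clip_l2(v, c):
    """Project a vector onto the l2 ball of radius c (bounded-norm assumption)."""
    norm = np.linalg.norm(v)
    return v if norm <= c else v * (c / norm)

def noisy_vector_sum(vectors, clip_norm, noise_scale, rng):
    """Stand-in for a shuffle vector-sum subroutine.

    Simplification: the protocol's output is modeled as the exact sum of
    clipped vectors plus Gaussian noise; the real shuffle protocol instead
    has each user send randomized messages through a shuffler.
    """
    total = sum(clip_l2(v, clip_norm) for v in vectors)
    return total + rng.normal(scale=noise_scale * clip_norm, size=total.shape)

def private_minibatch_sgd(grad_fn, data, dim, steps=100, batch_size=64,
                          lr=0.1, clip_norm=1.0, noise_scale=1.0, seed=0):
    """Toy mini-batch SGD driven by the noisy sum; the batch size and noise
    scale here are illustrative, not the settings analyzed in the paper."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    n = len(data)
    for _ in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        grads = [grad_fn(theta, data[i]) for i in idx]
        avg_grad = noisy_vector_sum(grads, clip_norm, noise_scale, rng) / batch_size
        theta -= lr * avg_grad
    return theta
```

The intent of the sketch is only to show where the vector-sum primitive enters the optimization loop: each iteration aggregates per-user gradients of bounded l2 norm through the (noisy) sum, so the noise level and batch size jointly determine the accuracy of each gradient step.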