A Field Guide to Federated Optimization

Suhas Diggavi
Chaoyang He
Mahdi Soltanolkotabi
Maruan Al-Shedivat
Chen Zhu
Peter Richtarik
Honglin Yuan
Ameet Talwalkar
Sebastian Stich
Sanmi Koyejo
Hongyi Wang
Deepesh Data
Blake Woodworth
Filip Hanzely
A. Salman Avestimehr
Tian Li
Jianyu Wang
Samuel Horvath
Antonious M. Girgis
Mi Zhang
Advait Gadhikar
Martin Jaggi
Gauri Joshi
Tara Javidi
Virginia Smith
Sai Praneeth Karimireddy
Karan Singhal
Jakub Konečný
Manzil Zaheer
Satyen Chandrakant Kale
Chunxiang (Jake) Zheng
Weikang Song
Galen Andrew
Katharine Daly
Tong Zhang
Hubert Eichner
arXiv (2021)

Abstract

Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection. The distributed learning process can be formulated as solving federated optimization problems, which emphasize communication efficiency, data heterogeneity, compatibility with privacy and system requirements, and other constraints that are not primary considerations in other problem settings. This paper provides recommendations and guidelines on formulating, designing, evaluating, and analyzing federated optimization algorithms through concrete examples and practical implementations, with a focus on conducting effective simulations to infer real-world performance. The goal of this work is not to survey the current literature, but to inspire researchers and practitioners to design federated learning algorithms that can be used in various practical applications.
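For reference, the federated optimization problem mentioned above is commonly written as minimizing a weighted average of per-client objectives; the notation below (number of clients $M$, client weights $p_i$, local objectives $F_i$, local data distributions $\mathcal{D}_i$) follows the standard convention in the literature rather than a definition stated in this abstract:

$$
\min_{x \in \mathbb{R}^d} \; F(x) := \sum_{i=1}^{M} p_i F_i(x),
\qquad
F_i(x) := \mathbb{E}_{\xi \sim \mathcal{D}_i}\!\left[ f_i(x; \xi) \right],
$$

where $p_i \ge 0$ and $\sum_{i=1}^{M} p_i = 1$. Communication efficiency and data heterogeneity enter through how often clients exchange updates about $x$ and how much the distributions $\mathcal{D}_i$ differ across clients.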