- Jianyu Wang
- Zachary Burr Charles
- Zheng Xu
- Gauri Joshi
- Brendan McMahan
- Blaise Hilary Aguera-Arcas
- Maruan Al-Shedivat
- Galen Andrew
- A. Salman Avestimehr
- Katharine Daly
- Deepesh Data
- Suhas Diggavi
- Hubert Eichner
- Advait Gadhikar
- Zachary Garrett
- Antonious M. Girgis
- Filip Hanzely
- Andrew Hard
- Chaoyang He
- Samuel Horvath
- Zhouyuan Huo
- Alex Ingerman
- Martin Jaggi
- Tara Javidi
- Peter Kairouz
- Satyen Chandrakant Kale
- Sai Praneeth Karimireddy
- Jakub Konečný
- Sanmi Koyejo
- Tian Li
- Luyang Liu
- Mehryar Mohri
- Hang Qi
- Sashank Reddi
- Peter Richtarik
- Karan Singhal
- Virginia Smith
- Mahdi Soltanolkotabi
- Weikang Song
- Ananda Theertha Suresh
- Sebastian Stich
- Ameet Talwalkar
- Hongyi Wang
- Blake Woodworth
- Shanshan Wu
- Felix Yu
- Honglin Yuan
- Manzil Zaheer
- Mi Zhang
- Tong Zhang
- Chunxiang (Jake) Zheng
- Chen Zhu
- Wennan Zhu
Abstract
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection. The distributed learning process can be formulated as solving federated optimization problems, which emphasize communication efficiency, data heterogeneity, compatibility with privacy and system requirements, and other constraints that are not primary considerations in other problem settings. This paper provides recommendations and guidelines on formulating, designing, evaluating, and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus on conducting effective simulations to infer real-world performance. The goal of this work is not to survey the current literature, but to inspire researchers and practitioners to design federated learning algorithms that can be used in various practical applications.
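To make the federated optimization formulation mentioned in the abstract concrete, the sketch below simulates a FedAvg-style training loop on the standard weighted objective F(x) = sum_k p_k F_k(x), where p_k is client k's data fraction. This is a minimal illustration, not the paper's reference implementation: the quadratic client losses, client sizes, cohort size, and hyperparameters are all invented for demonstration.

```python
# Minimal, illustrative FedAvg-style simulation (assumed setup, not the
# paper's code). Clients hold hypothetical quadratic losses
# F_k(x) = 0.5 * ||x - c_k||^2, and the server optimizes the weighted
# federated objective F(x) = sum_k p_k F_k(x) with p_k = n_k / n.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heterogeneous clients: each has its own loss center and
# a different number of local examples.
num_clients = 10
dim = 5
client_centers = rng.normal(size=(num_clients, dim))
client_sizes = rng.integers(10, 100, size=num_clients)
weights = client_sizes / client_sizes.sum()  # p_k = n_k / n


def local_update(x_global, center, lr=0.1, local_steps=5):
    """Run a few local gradient steps on F_k and return the model delta."""
    x = x_global.copy()
    for _ in range(local_steps):
        grad = x - center          # gradient of 0.5 * ||x - center||^2
        x -= lr * grad
    return x - x_global            # client's model delta


x = np.zeros(dim)                  # global model
for rnd in range(50):
    # Partial participation: sample a cohort of clients each round.
    cohort = rng.choice(num_clients, size=5, replace=False)
    deltas = [local_update(x, client_centers[k]) for k in cohort]
    cohort_weights = weights[cohort] / weights[cohort].sum()
    # Server step: weighted average of client deltas (FedAvg-style aggregation).
    x += np.average(deltas, axis=0, weights=cohort_weights)

global_loss = float(np.sum(weights * 0.5 * np.sum((x - client_centers) ** 2, axis=1)))
print(f"global loss after 50 rounds: {global_loss:.4f}")
```

In a real deployment, local_update would run on-device over private data and only the model deltas would be communicated and aggregated, which is where the communication-efficiency and privacy constraints discussed in the paper enter.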