
Sashank Reddi
Authored Publications
Efficient Training of Language Models using Few-Shot Learning
Shankar Krishnan
Satyen Kale
Seungyeon Kim
International Conference on Machine Learning (ICML) (2023)
On Emergence of Activation Sparsity in Trained Transformers
Zonglin Li
Chong You
Daliang Li
Ke Ye
International Conference on Learning Representations (ICLR) (2023)
In defense of dual-encoders for neural ranking
Aditya Krishna Menon
Sadeep Jayasumana
Seungyeon Kim
International Conference on Machine Learning (ICML) (2022)
RankDistil: Distillation for Ranking
Aditya Krishna Menon
Seungyeon Kim
Artificial Intelligence and Statistics (AISTATS) (2021)
Efficient Training of Retrieval Models using Negative Cache
Erik Lindgren
Neural Information Processing Systems (NeurIPS) (2021)
A Field Guide to Federated Optimization
Jianyu Wang
Gauri Joshi
Maruan Al-Shedivat
Galen Andrew
A. Salman Avestimehr
Katharine Daly
Deepesh Data
Suhas Diggavi
Hubert Eichner
Advait Gadhikar
Antonious M. Girgis
Filip Hanzely
Chaoyang He
Samuel Horvath
Martin Jaggi
Tara Javidi
Satyen Chandrakant Kale
Sai Praneeth Karimireddy
Jakub Konečný
Sanmi Koyejo
Tian Li
Peter Richtarik
Karan Singhal
Virginia Smith
Mahdi Soltanolkotabi
Weikang Song
Sebastian Stich
Ameet Talwalkar
Hongyi Wang
Blake Woodworth
Honglin Yuan
Manzil Zaheer
Mi Zhang
Tong Zhang
Chunxiang (Jake) Zheng
Chen Zhu
arXiv (2021)
Adaptive Federated Optimization
Manzil Zaheer
Jakub Konečný
(2021)
A statistical perspective on distillation
Aditya Krishna Menon
Seungyeon Kim
International Conference on Machine Learning (ICML) (2021)
Disentangling sampling and labeling bias for learning in large-output spaces
Aditya Krishna Menon
Sadeep Jayasumana
International Conference on Machine Learning (ICML) (2021)
Are Transformers universal approximators of sequence-to-sequence functions?
Chulhee Yun
International Conference on Learning Representations (ICLR) (2020)