
Rishabh Agarwal
I am a staff research scientist at Google DeepMind in Montréal. My research interests mainly revolve around Reinforcement Learning (RL), often with the goal of making RL methods suitable for real-world problems; this work includes an outstanding paper award at NeurIPS. My personal website is at agarwl.github.io.
Research Areas
Authored Publications
DistillSpec: Improving speculative decoding via knowledge distillation
Yongchao Zhou
Kaifeng Lyu
Aditya Menon
Jean-François Kagy
International Conference on Learning Representations (ICLR) (2024)
Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks
Jesse Farebrother
Joshua Greaves
Charline Le Lan
Marc Bellemare
International Conference on Learning Representations (ICLR) (2023)
Bigger, Better, Faster: Human-level Atari with human-level efficiency
Max Schwarzer
Johan Obando Ceron
Aaron Courville
Marc Bellemare
International Conference on Machine Learning (ICML) (2023)
Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress
Max Allen Schwarzer
Aaron Courville
Marc G. Bellemare
Neural Information Processing Systems (NeurIPS) (2022)
Neural Additive Models: Interpretable Machine Learning with Neural Networks
Levi Melnick
Nicholas Frosst
Ben Lengerich
Xuezhou Zhang
Rich Caruana
Geoffrey Everest Hinton
Neural Information Processing Systems (NeurIPS) (2021)
Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning
Aviral Kumar*
Dibya Ghosh
Sergey Levine
International Conference on Learning Representations (ICLR) (2021)