Rishabh Agarwal

I am a staff research scientist on the Google DeepMind team in Montréal. My research centers on Reinforcement Learning (RL), often with the goal of making RL methods suitable for real-world problems; this work includes a paper that received an outstanding paper award at NeurIPS. My personal website is at agarwl.github.io.

Authored Publications
DistillSpec: Improving speculative decoding via knowledge distillation
Yongchao Zhou
Kaifeng Lyu
Aditya Menon
Jean-François Kagy
International Conference on Learning Representations (ICLR) (2024)
Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks
Jesse Farebrother
Joshua Greaves
Charline Le Lan
Marc Bellemare
International Conference on Learning Representations (ICLR) (2023)
Revisiting Bellman Errors for Offline Model Selection
Joshua P. Zitovsky
Daniel de Marchi
Michael R. Kosorok
ICML (2023)
Bigger, Better, Faster: Human-level Atari with human-level efficiency
Max Schwarzer
Johan Obando Ceron
Aaron Courville
Marc Bellemare
ICML (2023)
On the Generalization of Representations in Reinforcement Learning
Charline Le Lan
Stephen Tu
Adam Oberman
Marc G. Bellemare
AISTATS (2022)
Neural Additive Models: Interpretable Machine Learning with Neural Networks
Levi Melnick
Nicholas Frosst
Ben Lengerich
Xuezhou Zhang
Rich Caruana
Geoffrey Everest Hinton
NeurIPS (2021)
Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning
Aviral Kumar*
Dibya Ghosh
Sergey Levine
International Conference on Learning Representations (ICLR) (2021)