Revisiting Rainbow: More inclusive deep reinforcement learning research

Johan Samir Obando-Cerón
Proceedings of the 38th International Conference on Machine Learning, PMLR (2021)
Abstract

Since the introduction of DQN by \cite{mnih2015humanlevel}, the vast majority of reinforcement learning research has focused on methods that use deep neural networks. New methods are typically evaluated on a set of environments that have now become standard, such as the Arcade Learning Environment (ALE) \citep{bellemare2012ale}. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources and those without. In this work we argue that, despite the community's emphasis on large-scale environments, traditional ``small-scale'' environments can still yield valuable scientific insights and can help reduce the barriers to entry for newcomers from underserved communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm \citep{hessel18rainbow} and present some new insights into the algorithms used by Rainbow.