Contingency-Aware Exploration in Reinforcement Learning
Abstract
This paper investigates whether learning contingency-awareness and controllable
aspects of an environment can lead to better exploration in reinforcement learning.
To investigate this question, we consider an instantiation of this hypothesis evaluated on the Arcade Learning Environment (ALE). In this study, we develop an attentive
dynamics model (ADM) that discovers controllable elements of the observations,
which are often associated with the location of the character in Atari games. The
ADM is trained in a self-supervised fashion to predict the actions taken by the agent.
The learned contingency information is used as a part of the state representation for
exploration purposes. We demonstrate that combining an actor-critic algorithm with
count-based exploration using our representation achieves impressive results on a
set of Atari games that are notoriously challenging due to sparse rewards. For example,
we report a state-of-the-art score of >11,000 points on MONTEZUMA’S REVENGE
without using expert demonstrations, explicit high-level information (e.g., RAM
states), or supervisory data. Our experiments confirm that contingency-awareness
is indeed an extremely powerful concept for tackling exploration problems in
reinforcement learning and opens up interesting research questions for further
investigation.
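To make the mechanism concrete, the sketch below illustrates one way an ADM-style module could work: a self-supervised inverse-dynamics network that predicts the agent's action from two consecutive frames, with a spatial attention map whose peak can be read off as the location of the controllable element (the character). This is a minimal illustration under assumed details, not the paper's exact architecture; the `AttentiveDynamicsModel` class, the layer sizes, and the 84x84 grayscale input are all assumptions made for this sketch.

```python
# Minimal sketch of an ADM-style inverse dynamics model (illustrative only;
# the architecture and hyperparameters below are assumptions, not the
# paper's exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveDynamicsModel(nn.Module):
    def __init__(self, num_actions: int, feat_dim: int = 32):
        super().__init__()
        # Shared conv encoder: maps an 84x84 grayscale frame to a 9x9
        # spatial grid of features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=4, stride=2), nn.ReLU(),
        )
        # Per-cell action logits from concatenated consecutive-frame features.
        self.action_head = nn.Conv2d(2 * feat_dim, num_actions, kernel_size=1)
        # Per-cell attention logit: how informative each cell is about the action.
        self.attention_head = nn.Conv2d(2 * feat_dim, 1, kernel_size=1)

    def forward(self, obs_t, obs_tp1):
        # obs_t, obs_tp1: (B, 1, 84, 84) consecutive grayscale frames.
        f_t = self.encoder(obs_t)
        f_tp1 = self.encoder(obs_tp1)
        pair = torch.cat([f_t, f_tp1], dim=1)       # (B, 2*feat_dim, H, W)
        logits = self.action_head(pair)             # (B, A, H, W)
        attn = self.attention_head(pair)            # (B, 1, H, W)
        B, A, H, W = logits.shape
        # Softmax the attention over all spatial cells; its argmax serves as
        # the predicted controllable (character) location.
        alpha = F.softmax(attn.view(B, -1), dim=1).view(B, 1, H, W)
        # Attention-weighted combination of per-cell action predictions.
        action_logits = (alpha * logits).sum(dim=(2, 3))  # (B, A)
        return action_logits, alpha


def adm_loss(model, obs_t, obs_tp1, actions):
    # Self-supervised objective: predict the agent's own action a_t from
    # (s_t, s_{t+1}); no supervision beyond the agent's behavior is needed.
    action_logits, _ = model(obs_t, obs_tp1)
    return F.cross_entropy(action_logits, actions)
```

One plausible way to connect this to the count-based exploration mentioned above: take the argmax of the attention map as an (x, y) estimate of the character location, concatenate it with a coarsely discretized observation to form a state abstraction ψ(s), and grant an exploration bonus of the form β/√N(ψ(s)), where N counts visits to each abstract state. The bonus form is hedged here as a common choice in this line of work rather than a verbatim reproduction of the paper's formula.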