Data-Efficient Hierarchical Reinforcement Learning

Ofir Nachum
Shane Gu
Honglak Lee
Sergey Levine
NeurIPS (2018)

Abstract

Hierarchical reinforcement learning (HRL) is a promising approach for extending traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them impractical in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme in which lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. For efficiency, we propose a principled method for using off-policy experience for both higher- and lower-level training, despite the fact that each level interacts with a non-stationary partner. This allows us to take advantage of recent advances in off-policy RL and to train hierarchical policies with far fewer environment interactions than generic on-policy methods. We find that the resulting HRL agent is generally applicable and highly sample-efficient. Our experiments show that our method can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, using 10M samples. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.
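As a concrete illustration of the scheme summarized above, the following Python/NumPy sketch shows one way a goal-conditioned lower-level reward and an off-policy goal-relabeling step could look. The specific distance-based reward, the `goal_transition` convention, the candidate-goal search, and all function names are assumptions made for illustration; the abstract itself only states that lower-level controllers are supervised with automatically proposed goals and that off-policy experience is corrected for both levels.

```python
import numpy as np


def intrinsic_reward(state, goal, next_state):
    """Lower-level reward sketch (assumed form): negative distance between the
    goal-shifted state and the next state, so the lower level is rewarded for
    moving toward the higher-level's proposed goal."""
    return -np.linalg.norm(state + goal - next_state)


def goal_transition(state, goal, next_state):
    """Assumed convention: re-express the goal relative to the new state so the
    same relative target is pursued across consecutive lower-level steps."""
    return state + goal - next_state


def relabel_goal(candidate_goals, low_level_policy, states, actions):
    """Off-policy correction sketch (assumed interface): choose the candidate
    goal under which the logged lower-level actions are most probable, so a
    stored higher-level transition stays consistent with the current lower
    level. A negative squared error between the policy's action and the logged
    action stands in for a Gaussian log-likelihood here."""
    def log_prob(goal):
        g, total = goal, 0.0
        for s, a, s_next in zip(states, actions, states[1:]):
            total += -np.sum((low_level_policy(s, g) - a) ** 2)
            g = goal_transition(s, g, s_next)
        return total

    return max(candidate_goals, key=log_prob)
```

In this sketch, `low_level_policy(state, goal)` is any callable returning the lower-level action for a goal-augmented observation; the relabeled goal would replace the originally proposed goal when the higher-level transition is replayed from the off-policy buffer.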
