Offline Primitive Discovery for Accelerating Data-Driven Reinforcement Learning

Anurag Ajay
Aviral Kumar
Pulkit Agrawal
Sergey Levine
Ofir Nachum
ICLR 2021 (to appear)

Abstract

Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent’s ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this practical setting, and our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which mitigates compounding error propagation during reinforcement learning in theory, and leads to improved offline RL in practice. In addition to accelerating offline policy optimization, we show that primitives learned offline in this way can also be leveraged to improve few-shot imitation learning, as well as exploration and transfer in online RL, on a variety of challenging benchmark domains.
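To make the idea of extracting a continuous primitive space concrete, below is a minimal sketch of one plausible instantiation: an auto-encoding objective over fixed-length sub-trajectories, where a sequence encoder maps each sub-trajectory to a continuous latent primitive and a latent-conditioned policy reconstructs its actions. All module names (e.g., `PrimitiveEncoder`, `PrimitiveDecoder`), dimensions, and the specific VAE-style loss are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Sketch: learning a continuous space of temporally extended primitives
# from offline sub-trajectories. Assumed, illustrative design choices
# throughout; not the paper's implementation.
import torch
import torch.nn as nn


class PrimitiveEncoder(nn.Module):
    """Maps a length-c sub-trajectory (states, actions) to a latent primitive z."""
    def __init__(self, state_dim, action_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden_dim, batch_first=True)
        self.mean = nn.Linear(hidden_dim, latent_dim)
        self.log_std = nn.Linear(hidden_dim, latent_dim)

    def forward(self, states, actions):
        # states: (B, c, state_dim), actions: (B, c, action_dim)
        _, h = self.rnn(torch.cat([states, actions], dim=-1))
        h = h.squeeze(0)
        return self.mean(h), self.log_std(h)


class PrimitiveDecoder(nn.Module):
    """Latent-conditioned policy that reconstructs actions from states and z."""
    def __init__(self, state_dim, latent_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, states, z):
        # Broadcast z across all c steps of the sub-trajectory.
        z = z.unsqueeze(1).expand(-1, states.shape[1], -1)
        return self.net(torch.cat([states, z], dim=-1))


def primitive_loss(encoder, decoder, states, actions, kl_weight=0.1):
    """Action reconstruction + KL regularization on a batch of sub-trajectories."""
    mean, log_std = encoder(states, actions)
    z = mean + log_std.exp() * torch.randn_like(mean)  # reparameterization trick
    recon_loss = ((decoder(states, z) - actions) ** 2).mean()
    kl = (-0.5 * (1 + 2 * log_std - mean ** 2 - (2 * log_std).exp())).mean()
    return recon_loss + kl_weight * kl
```

Under this kind of setup, the downstream agent (offline or online) would act in the latent primitive space, selecting a new z every c steps while the frozen decoder executes it, which is one way the temporal abstraction and data-support properties described in the abstract could be realized.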