
Jump-Start Reinforcement Learning

Ike Uchendu
Yao Lu
Banghua Zhu
Mengyuan Yan
Joséphine Simon
Matt Bennice
Chuyuan Kelly Fu
Cong Ma
Jiantao Jiao
Sergey Levine
Karol Hausman
NeurIPS 2021 Robot Learning Workshop, RSS 2022 Scaling Robot Learning Workshop

Abstract

Reinforcement learning (RL) provides a theoretical framework for continuously improving an agent's behavior via trial and error. However, efficiently learning policies from scratch can be very difficult, particularly for tasks that present exploration challenges. In such settings, it might be desirable to initialize RL with an existing policy, offline data, or demonstrations. However, naively performing such initialization in RL often works poorly, especially for value-based methods. In this paper, we present a meta-algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy, and is compatible with any RL approach. In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide-policy and an exploration-policy. By using the guide-policy to form a curriculum of starting states for the exploration-policy, we are able to efficiently improve performance on a set of simulated robotic tasks. In addition, we provide an upper bound on the sample complexity of JSRL and show that it is able to significantly outperform existing imitation and reinforcement learning algorithms, particularly in the small-data regime.
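To make the roll-in idea in the abstract concrete, the following is a minimal, illustrative sketch of the JSRL training loop structure, not the authors' implementation. It assumes a Gym-style environment with reset()/step(), two policies given as plain callables (guide_policy, exploration_policy), and placeholder functions (update_fn, eval_fn, threshold) standing in for whatever RL algorithm and evaluation criterion are plugged in; all of these names are hypothetical.

```python
# Illustrative JSRL-style sketch (hypothetical names, Gym-style env assumed).

def jsrl_rollout(env, guide_policy, exploration_policy, guide_horizon):
    """Collect one episode: the guide-policy acts for the first
    `guide_horizon` steps, then the exploration-policy takes over."""
    transitions = []
    obs = env.reset()
    done = False
    t = 0
    while not done:
        policy = guide_policy if t < guide_horizon else exploration_policy
        action = policy(obs)
        next_obs, reward, done, info = env.step(action)
        transitions.append((obs, action, reward, next_obs, done))
        obs = next_obs
        t += 1
    return transitions


def jsrl_train(env, guide_policy, exploration_policy, update_fn,
               max_horizon, eval_fn, threshold, iters_per_stage=100):
    """Outer curriculum: start with a long guide horizon (the guide-policy
    handles most of the episode) and shrink it whenever the combined
    policy's evaluation return exceeds `threshold`, so the
    exploration-policy gradually takes over from progressively earlier
    starting states.  `update_fn` stands in for any off-the-shelf RL
    update applied to the collected transitions."""
    guide_horizon = max_horizon
    while guide_horizon >= 0:
        for _ in range(iters_per_stage):
            transitions = jsrl_rollout(env, guide_policy,
                                       exploration_policy, guide_horizon)
            update_fn(transitions)
        if eval_fn(exploration_policy, guide_horizon) >= threshold:
            guide_horizon -= 1  # hand more of the episode to the learner
    return exploration_policy
```

The key design choice captured here is the curriculum over starting states: the guide-policy rolls in for a fixed number of steps, leaving the exploration-policy to act from states the guide can already reach, and that roll-in horizon is shortened only once the current combined policy performs well enough.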
