
Model-Based Offline Planning

Arthur Argenson, Gabriel Dulac-Arnold
ICLR 2021

Abstract

Offline learning is a key part of making reinforcement learning (RL) usable in real systems. Offline RL looks at scenarios where there is data from a system's operation, but no direct access to the system when learning a policy. Recent work on training RL policies from offline data has shown results where a model-free policy is learned either directly from the data or from a modelled representation of the data. Model-free policies tend to be more performant, but are more opaque, harder to command externally, and harder to integrate into larger systems. We propose an offline learner that generates a model that can be used to control the system directly through planning. This allows us to have easily controllable policies directly from data, without ever interacting with the system. We show the performance of our algorithm, Model-Based Offline Planning (MBOP), on a series of robotics-inspired tasks, and demonstrate its ability to leverage planning to respect environmental constraints. We are able to create goal-conditioned policies for certain simulated systems from as little as 100 seconds of real-time system interaction.
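To make the idea of controlling a system "directly through planning" with an offline-learned model concrete, the sketch below shows a simple receding-horizon (MPC) planner built on a learned dynamics model and reward model. This is only an illustrative sketch, not the paper's method: MBOP additionally uses a behavior-cloned action prior to guide sampling, a learned value function to estimate returns beyond the planning horizon, and trajectory averaging. All names here (`dynamics_model`, `reward_model`, `plan_action`) are hypothetical placeholders.

```python
import numpy as np

def plan_action(state, dynamics_model, reward_model, action_dim,
                horizon=10, num_candidates=256,
                action_low=-1.0, action_high=1.0, rng=None):
    """Pick the first action of the best sampled action sequence.

    Both `dynamics_model(state, action) -> next_state` and
    `reward_model(state, action) -> reward` are assumed to have been
    fit offline from logged system data.
    """
    rng = rng or np.random.default_rng()
    # Sample candidate open-loop action sequences uniformly at random.
    candidates = rng.uniform(action_low, action_high,
                             size=(num_candidates, horizon, action_dim))
    returns = np.zeros(num_candidates)
    for i, actions in enumerate(candidates):
        s = state
        for a in actions:
            # Roll the learned model forward and accumulate predicted reward.
            returns[i] += reward_model(s, a)
            s = dynamics_model(s, a)
    # Receding-horizon control: execute only the first action of the
    # best-scoring sequence, then replan at the next step.
    best = np.argmax(returns)
    return candidates[best, 0]
```

Because the policy is expressed as a planning loop over a learned model rather than a fixed model-free policy, constraints or goal conditioning can be imposed at planning time, e.g. by penalizing constraint-violating rollouts in the scoring step, without retraining.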

Research Areas