
Imitation Learning via Off-Policy Distribution Matching

Ilya Kostrikov
Ofir Nachum
NeurIPS workshop and ICLR conference submission (2020)

Abstract

When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one typically alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can consistently outperform state-of-the-art methods.
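To make the idea concrete, below is a minimal sketch (not the authors' code) of the kind of off-policy distribution-matching objective the abstract describes, following the published ValueDICE saddle-point formulation: a critic-like function nu is trained on expert transitions and initial states, and the policy is learned directly by ascending the same objective, with no explicit reward. All network shapes, names, and hyperparameters here are illustrative assumptions, and the sketch omits the replay-regularization and optimization details of the full algorithm.

```python
# Sketch of the ValueDICE-style objective (illustrative, not the authors' implementation):
#   max_pi min_nu  log E_expert[exp(nu(s, a) - gamma * nu(s', pi(s')))]
#                  - (1 - gamma) * E_{s0}[nu(s0, pi(s0))]
# Requires only expert transitions and initial states, i.e. it is fully off-policy.
import jax
import jax.numpy as jnp


def mlp(params, x):
    # Tiny MLP; `params` is a list of (W, b) pairs, last layer is linear.
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b


def value_dice_loss(nu_params, pi_params, batch, gamma=0.99):
    """Saddle-point objective: minimized w.r.t. nu, maximized w.r.t. pi."""
    s, a, s_next, s0 = batch                      # expert transitions + initial states
    a_next = jnp.tanh(mlp(pi_params, s_next))     # deterministic policy for brevity
    a0 = jnp.tanh(mlp(pi_params, s0))
    nu = mlp(nu_params, jnp.concatenate([s, a], axis=-1)).squeeze(-1)
    nu_next = mlp(nu_params, jnp.concatenate([s_next, a_next], axis=-1)).squeeze(-1)
    nu_0 = mlp(nu_params, jnp.concatenate([s0, a0], axis=-1)).squeeze(-1)
    # Donsker-Varadhan term: log E_expert[exp(nu(s,a) - gamma * nu(s', pi(s')))]
    log_term = jax.scipy.special.logsumexp(nu - gamma * nu_next) - jnp.log(nu.shape[0])
    # Initial-state term: (1 - gamma) * E_{s0}[nu(s0, pi(s0))]
    linear_term = (1.0 - gamma) * nu_0.mean()
    return log_term - linear_term


# Example usage with random parameters (all sizes are made up for illustration).
def init(sizes, key):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((0.1 * jax.random.normal(sub, (m, n)), jnp.zeros(n)))
    return params


s_dim, a_dim, batch_size = 3, 2, 8
nu_params = init([s_dim + a_dim, 64, 1], jax.random.PRNGKey(0))
pi_params = init([s_dim, 64, a_dim], jax.random.PRNGKey(1))
batch = (jnp.ones((batch_size, s_dim)), jnp.ones((batch_size, a_dim)),
         jnp.ones((batch_size, s_dim)), jnp.ones((batch_size, s_dim)))
loss = value_dice_loss(nu_params, pi_params, batch)
```

In a training loop, nu would take gradient-descent steps on this loss while the policy takes gradient-ascent steps on it (for example via `jax.grad` with respect to each parameter set), which is how the objective replaces the usual alternation between ratio estimation and a separate RL optimization.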
