Cross-domain Imitation from Observations
Abstract
Imitation learning seeks to circumvent the difficulty of designing proper reward functions for
training agents by utilizing expert behavior. With environments modeled as Markov Decision Processes (MDPs), most existing imitation algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitation policy is to be learned.
In this paper, we study the problem of how to imitate tasks when there exist discrepancies between the expert and agent MDPs. These discrepancies across domains could include differing dynamics, viewpoints, or morphologies; we present a novel framework to learn correspondences across
such domains. Importantly, in contrast to prior works, we use unpaired and unaligned trajectories containing only states in the expert domain to learn this correspondence. To do so, we utilize a cycle-consistency constraint on both the state space and a domain-agnostic latent space. In addition, we enforce consistency on the temporal positions of states via a normalized position estimator function, aligning the trajectories across the two domains. Once this correspondence is found,
we can directly transfer demonstrations from one domain to the other and use them for imitation.
Experiments across a wide variety of challenging domains demonstrate the efficacy of our approach.
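To make the two constraints from the abstract concrete, the following is a minimal sketch of a state-space cycle-consistency loss and a temporal-alignment loss built on a normalized position estimator. All names, network architectures, and state dimensions here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical cross-domain mappings (architectures and dimensions assumed):
#   f_xy: expert-domain state -> agent-domain state
#   f_yx: agent-domain state  -> expert-domain state
#   pos_y: normalized position estimator, predicting a state's relative
#          temporal position in [0, 1] along its trajectory
f_xy = nn.Linear(4, 6)
f_yx = nn.Linear(6, 4)
pos_y = nn.Sequential(nn.Linear(6, 1), nn.Sigmoid())

def alignment_losses(s_x, t_x):
    """s_x: batch of expert-domain states; t_x: their normalized
    temporal positions in [0, 1] within their source trajectories."""
    s_y_hat = f_xy(s_x)
    # State-space cycle consistency: translating a state to the agent
    # domain and back should recover the original state.
    l_cycle = F.mse_loss(f_yx(s_y_hat), s_x)
    # Temporal alignment: the translated state should sit at the same
    # normalized position along its trajectory as the source state.
    l_pos = F.mse_loss(pos_y(s_y_hat), t_x)
    return l_cycle, l_pos

# Usage on unpaired, unaligned expert states (random data for illustration).
s_x = torch.randn(32, 4)
t_x = torch.rand(32, 1)
l_cycle, l_pos = alignment_losses(s_x, t_x)
```

Note that because the positions are normalized per trajectory, this alignment signal requires no paired or time-synchronized demonstrations across the two domains.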