Google Research

State Representation Learning with Robotic Priors for Partially Observable Environments

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2019)

Abstract

We introduce Recurrent State Representation Learning (RSRL) to tackle the problem of state representation learning in robotics for partially observable environments. To learn low-dimensional state representations, we combine a Long Short-Term Memory (LSTM) network with robotic priors. RSRL introduces new landmark-based priors and combines them with existing robotic priors from the literature to train the representations. To evaluate the quality of the learned state representations, we introduce validation networks that help us visualize and quantitatively analyze them. We show that the learned representations are low-dimensional, locally consistent, and can approximate the underlying true state for robot localization in simulated 3D maze environments. We use the learned representations for reinforcement learning and show that they achieve performance comparable to training with the true state. The learned representations are also robust to landmark misclassification errors.
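To give a flavor of what training with robotic priors involves, the sketch below implements two priors commonly used in the state representation learning literature (temporal coherence and proportionality, in the spirit of Jonschkowski and Brock's robotic priors) as loss terms over a sequence of learned low-dimensional states. This is a simplified illustration, not the paper's method: the paper's landmark priors and LSTM encoder are not reproduced here, and the function names are our own.

```python
import numpy as np

def temporal_coherence_loss(states):
    """Temporal coherence prior: states should change smoothly over time.

    states: array of shape (T, d), one learned state per time step.
    Returns the mean squared magnitude of consecutive state changes.
    """
    diffs = states[1:] - states[:-1]
    return np.mean(np.sum(diffs ** 2, axis=1))

def proportionality_loss(states, actions):
    """Proportionality prior: equal actions should produce state changes
    of equal magnitude, regardless of when they occur.

    states: array of shape (T, d); actions: length T-1 sequence of
    (discrete) action labels, one per transition.
    """
    deltas = np.linalg.norm(states[1:] - states[:-1], axis=1)
    loss, pairs = 0.0, 0
    for i in range(len(deltas)):
        for j in range(i + 1, len(deltas)):
            if actions[i] == actions[j]:
                loss += (deltas[i] - deltas[j]) ** 2
                pairs += 1
    return loss / max(pairs, 1)

# Toy example: three 2-D states from two transitions.
states = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]])
actions = [0, 0]  # the same action taken twice
print(temporal_coherence_loss(states))   # penalizes large jumps
print(proportionality_loss(states, actions))  # penalizes unequal step sizes
```

In a full training setup, losses like these would be summed (with weights) and backpropagated through the recurrent encoder so that the learned states satisfy the priors.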
