Google Research

Lyapunov-based Safe Policy Optimization for Continuous Control

International Conference on Machine Learning (ICML), 2019


In many reinforcement learning applications, it is crucial that the agent interact with the environment only through {\em safe} policies, i.e.,~policies that do not take the agent to undesirable situations. In this paper, we present safe policy optimization algorithms based on the Lyapunov approach to {\em constrained} Markov decision processes (CMDPs) with continuous actions. Our algorithms train a neural network policy with methods such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), while guaranteeing near-constraint satisfaction at every policy update by projecting either the policy parameters or the actions onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Unlike existing constrained policy gradient (PG) algorithms, ours are more data efficient because they can utilize both on-policy and off-policy data. Moreover, the action-projection version of our algorithms often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with state-of-the-art baselines on several simulated (MuJoCo) robot locomotion tasks, as well as a real-world indoor robot navigation problem.
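To illustrate the action-projection idea described above, here is a minimal sketch of a safety-layer-style projection for a single linearized constraint. All names (`project_action`, `g`, `c`, `eps`) are hypothetical, and the full method operates on state-dependent Lyapunov constraints within a neural network training loop; this sketch only shows the closed-form solution of the underlying one-constraint quadratic program, i.e., finding the closest action to the policy's proposal that satisfies a linearized constraint $g^\top a + c \le \epsilon$.

```python
import numpy as np

def project_action(a_pi, g, c, eps=0.0):
    """Project the policy's proposed action a_pi onto the half-space
    {a : g^T a + c <= eps} given by one linearized constraint.

    This is the closed-form solution of
        min_a ||a - a_pi||^2   s.t.   g^T a + c <= eps,
    a sketch of the action-projection step (hypothetical interface;
    the actual method learns g, c from a Lyapunov function and
    differentiates through this layer end-to-end).
    """
    violation = g @ a_pi + c - eps
    if violation <= 0.0:
        # Proposed action already satisfies the constraint: keep it.
        return a_pi
    # Otherwise shift along -g just enough to reach the boundary.
    return a_pi - (violation / (g @ g)) * g
```

For example, with `g = [1, 0]`, `c = 0`, and budget `eps = 0.5`, the proposal `[1, 0]` violates the constraint and is projected to `[0.5, 0]`, which lies exactly on the constraint boundary, while a feasible proposal such as `[-1, 0]` passes through unchanged. Because the projection has a closed form, it can be implemented as a differentiable layer, which is what enables the end-to-end PG integration mentioned in the abstract.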
