Google Research

Understanding the impact of entropy on policy optimization

Abstract

Entropy regularization is commonly used to improve policy optimization in reinforcement learning. It is believed to help exploration by encouraging a more stochastic policy. In this work, we analyze this claim and, through new visualizations of the optimization landscape, observe that its effect matches that of a regularizer. We show that, even with access to the exact gradient, policy optimization is difficult due to the geometry of the objective function. We qualitatively show that, in some environments, entropy regularization can make the optimization landscape smoother, thereby connecting local optima and enabling the use of larger learning rates. This work provides tools for understanding the underlying optimization landscape and highlights the challenge of designing general-purpose optimization algorithms in reinforcement learning.
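
To make the objective concrete, the sketch below illustrates one common form of entropy regularization for a discrete-action policy: the standard policy-gradient term plus an entropy bonus weighted by a coefficient tau, so that tau = 0 recovers the unregularized loss. The loss form, function names, and coefficient value are illustrative assumptions, not the exact formulation analyzed in the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits.
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def entropy_regularized_loss(logits, action, advantage, tau=0.01):
    """Negative of a per-step entropy-regularized policy-gradient objective.

    logits:    unnormalized action preferences for the current state
    action:    index of the action that was taken
    advantage: estimated advantage of that action
    tau:       entropy coefficient (tau = 0 gives the unregularized loss)
    """
    probs = softmax(logits)
    log_probs = np.log(probs + 1e-12)
    pg_term = log_probs[action] * advantage        # standard policy-gradient term
    entropy = -(probs * log_probs).sum()           # entropy bonus favors a more stochastic policy
    return -(pg_term + tau * entropy)

# Example: a larger tau penalizes near-deterministic action distributions.
logits = np.array([2.0, 0.1, -1.0])
print(entropy_regularized_loss(logits, action=0, advantage=1.5, tau=0.1))
```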
