Understanding the impact of entropy on policy optimization

Nicolas Le Roux
Mohammad Norouzi
ICML (2019)

Abstract

Entropy regularization is commonly used to improve policy optimization in reinforcement learning. It is believed to help exploration by encouraging a more stochastic policy. In this work, we analyze that claim and, through new visualizations of the optimization landscape, observe that its effect matches that of a regularizer. We show that even with access to the exact gradient, policy optimization is difficult due to the geometry of the objective function. We qualitatively show that, in some environments, entropy regularization can make the optimization landscape smoother, thereby connecting local optima and enabling the use of larger learning rates. This work provides tools for understanding the underlying optimization landscape and highlights the challenge of designing general-purpose optimization algorithms in reinforcement learning.
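
To make the objective concrete, the sketch below shows one common form of the entropy-regularized policy-gradient loss (REINFORCE-style) in NumPy. It is a minimal illustration, not the authors' implementation: the function name, the entropy coefficient tau, and the array shapes are assumptions.

```python
import numpy as np

def entropy_regularized_loss(logits, actions, returns, tau=0.01):
    """REINFORCE-style loss with an entropy bonus (illustrative sketch).

    logits:  (T, A) unnormalized action scores from the policy
    actions: (T,)   integer actions taken at each step
    returns: (T,)   empirical returns following each step
    tau:     entropy coefficient (assumed hyperparameter)
    """
    # Softmax policy probabilities, computed stably.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # log pi(a_t | s_t) for the actions actually taken.
    log_probs = np.log(probs[np.arange(len(actions)), actions])

    # Per-step policy entropy: H(pi) = -sum_a pi(a) log pi(a).
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

    # Maximize expected return plus tau * entropy; return the negative as a loss.
    return -(log_probs * returns + tau * entropy).mean()

# Tiny usage example with random data.
rng = np.random.default_rng(0)
loss = entropy_regularized_loss(
    logits=rng.normal(size=(5, 3)),
    actions=np.array([0, 2, 1, 0, 2]),
    returns=np.ones(5),
)
```

Raising tau pushes the policy toward higher entropy (more stochastic behavior); the paper's analysis suggests that in some environments this also smooths the optimization landscape enough to permit larger learning rates.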