Qualitatively Characterizing Neural Network Optimization Problems

Ian Goodfellow
Oriol Vinyals
Andrew Saxe
International Conference on Learning Representations (2015)

Abstract

Training neural networks involves solving large-scale non-convex optimization
problems. This task has long been believed to be extremely difficult, with fear of
local minima and other obstacles motivating a variety of schemes to improve optimization,
such as unsupervised pretraining. However, modern neural networks are
able to achieve negligible training error on complex tasks, using only direct training
with stochastic gradient descent. We introduce a simple analysis technique to
look for evidence that such networks are overcoming local optima. We find that,
in fact, on a straight path from initialization to solution, a variety of state-of-the-art
neural networks never encounter any significant obstacles.
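
The analysis technique the abstract refers to is linear interpolation in parameter space: evaluate the training loss at theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final for alpha in [0, 1], and check whether the resulting one-dimensional loss curve contains bumps or barriers between initialization and solution. Below is a minimal sketch of that experiment, assuming a PyTorch model on toy regression data; the function name interpolation_curve and the toy setup are illustrative assumptions, not taken from the paper.

    import copy
    import torch

    def interpolation_curve(model_init, model_final, loss_fn, data, targets,
                            num_points=25):
        """Loss along theta(alpha) = (1 - alpha)*theta_init + alpha*theta_final."""
        probe = copy.deepcopy(model_final)  # container whose weights we overwrite
        init_params = list(model_init.parameters())
        final_params = list(model_final.parameters())
        curve = []
        with torch.no_grad():
            for alpha in torch.linspace(0.0, 1.0, num_points):
                # Set the probe's weights to the interpolated point.
                for p, p0, p1 in zip(probe.parameters(), init_params, final_params):
                    p.copy_((1.0 - alpha) * p0 + alpha * p1)
                curve.append((alpha.item(), loss_fn(probe(data), targets).item()))
        return curve

    if __name__ == "__main__":
        torch.manual_seed(0)
        data, targets = torch.randn(256, 10), torch.randn(256, 1)
        model_init = torch.nn.Sequential(
            torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
        model_final = copy.deepcopy(model_init)

        # Train the copy with plain SGD so the two endpoints differ.
        loss_fn = torch.nn.MSELoss()
        opt = torch.optim.SGD(model_final.parameters(), lr=0.1)
        for _ in range(200):
            opt.zero_grad()
            loss_fn(model_final(data), targets).backward()
            opt.step()

        for alpha, loss in interpolation_curve(model_init, model_final,
                                               loss_fn, data, targets):
            print(f"alpha={alpha:.2f}  loss={loss:.4f}")

If the printed losses decrease smoothly as alpha goes from 0 to 1, the straight path is obstacle-free in the sense the abstract describes; a pronounced bump between the endpoints would instead be evidence of a barrier that stochastic gradient descent had to navigate around.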
