When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?

Niladri Chatterji
Peter Bartlett
COLT (2021)
Abstract

We prove that gradient descent applied to fixed-width deep networks with the logistic loss converges, and we prove bounds on the rate of convergence. Our analysis applies to smoothed approximations of the ReLU proposed in previous applied work, such as the Swish and the Huberized ReLU. We provide two sufficient conditions for convergence. The first is simply a bound on the loss at initialization. The second is a data separation condition used in prior analyses.
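For concreteness, below is a minimal sketch of the two smoothed ReLU variants mentioned in the abstract. The specific parameterization (the Swish temperature `beta` and the smoothing width `h`) is an illustrative assumption here, not necessarily the exact definition used in the paper.

```python
import numpy as np

def swish(z, beta=1.0):
    # Swish: z * sigmoid(beta * z), a smooth approximation to the ReLU.
    return z / (1.0 + np.exp(-beta * z))

def huberized_relu(z, h=1.0):
    # Huberized ReLU (illustrative form): zero for z <= 0, quadratic on (0, h],
    # linear beyond h, so the activation is continuously differentiable.
    return np.where(
        z <= 0.0,
        0.0,
        np.where(z <= h, z ** 2 / (2.0 * h), z - h / 2.0),
    )

# Both activations track the ReLU away from zero but have Lipschitz gradients,
# which is the kind of smoothness the convergence analysis relies on.
zs = np.linspace(-2.0, 2.0, 9)
print(swish(zs))
print(huberized_relu(zs))
```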