- Niladri Chatterji
- Peter Bartlett
- Phil Long
COLT (2021)
We prove that gradient descent applied to fixed-width deep networks with the logistic loss converges, and we bound the rate of convergence. Our analysis applies to smoothed approximations of the ReLU proposed in previous applied work, such as the Swish and the Huberized ReLU. We provide two sufficient conditions for convergence: the first is simply a bound on the loss at initialization; the second is a data separation condition used in prior analyses.
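For reference, both activations named above are smooth surrogates for the ReLU. The Swish is $\mathrm{swish}(z) = z\,\sigma(z)$, where $\sigma$ is the logistic sigmoid; one common parameterization of the Huberized ReLU with smoothing parameter $h > 0$ (the exact form used in the paper may differ) is
$$
\phi_h(z) =
\begin{cases}
0 & z \le 0,\\
z^2 / (2h) & 0 < z \le h,\\
z - h/2 & z > h.
\end{cases}
$$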