- Niladri Chatterji
- Peter Bartlett
- Phil Long
JMLR, vol. 22(1) (2021), 1–48
We study the training of finite-width two-layer (smoothed) ReLU networks for binary classification using the logistic loss. We show that gradient descent drives the training loss to zero if the initial loss is small enough. When the data satisfies certain cluster and separation conditions and the network is wide enough, we show that one step of gradient descent reduces the loss sufficiently that the first result applies. (In contrast, all past analyses of fixed-width networks that we know of do not guarantee that the training loss goes to zero.)
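A minimal, illustrative sketch of the setting the abstract describes: a two-layer network with a smoothed (softplus-style) ReLU activation, trained on the logistic loss by full-batch gradient descent. The smoothing, width, synthetic data, and step size below are assumptions made for this sketch, not the paper's exact construction or conditions.

```python
# Sketch of the training setup: two-layer network with a smoothed ReLU
# (softplus) activation, logistic loss, full-batch gradient descent.
# Width, smoothing sharpness, data, and step size are assumed values.
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)
n, d, m = 100, 10, 512            # samples, input dimension, hidden width (assumed)

# Synthetic binary-classification data with labels in {-1, +1}.
X = rng.standard_normal((n, d))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# Train the first layer; fix random second-layer signs (a common
# simplification in two-layer analyses, assumed here).
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)
beta = 10.0                        # softplus approaches ReLU as beta grows

def forward(W):
    H = X @ W.T                                        # pre-activations, shape (n, m)
    return (np.logaddexp(0.0, beta * H) / beta) @ a    # f(x_i) = sum_j a_j * sigma(w_j . x_i)

def logistic_loss(W):
    return np.mean(np.logaddexp(0.0, -y * forward(W)))

eta = 0.5                          # step size (assumed)
for step in range(500):
    H = X @ W.T
    out = (np.logaddexp(0.0, beta * H) / beta) @ a
    g_out = -y * expit(-y * out) / n                   # dL/df_i for the logistic loss
    # dL/dw_j = sum_i g_out[i] * a[j] * sigma'(H[i, j]) * x_i, with sigma'(z) = sigmoid(beta z)
    G = (g_out[:, None] * expit(beta * H) * a[None, :]).T @ X
    W -= eta * G                                       # gradient descent step

print("final training loss:", logistic_loss(W))
```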