Samuel L. Smith
Authored Publications
Abstract
For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However, moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon, we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small, the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small.
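Written out explicitly, the modified loss adds the mean squared norm of the minibatch gradients to the original objective, scaled by the learning rate. The JAX sketch below illustrates this for a toy linear model; the model, the data layout, and the epsilon/4 coefficient are assumptions made for illustration rather than the paper's experimental setup.

```python
import jax
import jax.numpy as jnp

def minibatch_loss(params, x, y):
    # Hypothetical model for illustration: a linear regressor with squared error.
    preds = x @ params["w"] + params["b"]
    return jnp.mean((preds - y) ** 2)

def modified_loss(params, minibatches, epsilon):
    """Original loss plus an explicit penalty on minibatch gradient norms.

    `minibatches` is a sequence of (x, y) arrays and `epsilon` is the learning
    rate; the epsilon / 4 coefficient is an assumption of this sketch.
    """
    losses, sq_norms = [], []
    for xb, yb in minibatches:
        loss, grads = jax.value_and_grad(minibatch_loss)(params, xb, yb)
        sq_norm = sum(jnp.sum(g ** 2) for g in jax.tree_util.tree_leaves(grads))
        losses.append(loss)
        sq_norms.append(sq_norm)
    # Full-batch loss (mean over minibatches) plus the explicit regularizer.
    return jnp.mean(jnp.stack(losses)) + (epsilon / 4.0) * jnp.mean(jnp.stack(sq_norms))
```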
Cold Posteriors and Aleatoric Uncertainty
Ben Adlam
ICML workshop on Uncertainty and Robustness in Deep Learning (2020)
Abstract
Recent work has observed that one can outperform exact inference in Bayesian neural networks by tuning the "temperature" of the posterior on a validation set (the "cold posterior" effect). To help interpret this phenomenon, we argue that commonly used priors in Bayesian neural networks can significantly overestimate the aleatoric uncertainty in the labels on many classification datasets. This problem is particularly pronounced in academic benchmarks like MNIST or CIFAR, for which the quality of the labels is high. For the special case of Gaussian process regression, any positive temperature corresponds to a valid posterior under a modified prior, and tuning this temperature is directly analogous to empirical Bayes. On classification tasks, there is no direct equivalence between modifying the prior and tuning the temperature; however, reducing the temperature can lead to models which better reflect our belief that one gains little information by relabeling existing examples in the training set. Therefore, although cold posteriors do not always correspond to an exact inference procedure, we believe they may often better reflect our true prior beliefs.
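For concreteness, the "temperature" here rescales the log-posterior used for sampling or optimization. The sketch below, assuming a Gaussian prior and a user-supplied log-likelihood (both illustrative assumptions, not the paper's setup), shows the tempered energy whose T = 1 case is the ordinary Bayesian posterior and whose T < 1 case is a cold posterior.

```python
import jax
import jax.numpy as jnp

def gaussian_log_prior(params, prior_scale=1.0):
    # Isotropic Gaussian prior over all parameters (a common, illustrative choice).
    leaves = jax.tree_util.tree_leaves(params)
    return sum(jnp.sum(jax.scipy.stats.norm.logpdf(p, scale=prior_scale)) for p in leaves)

def tempered_energy(params, log_likelihood_fn, data, temperature=1.0):
    """Energy of the tempered posterior p_T(params) proportional to exp(-U(params) / T).

    temperature = 1 recovers exact Bayesian inference; temperature < 1 gives a
    "cold" posterior. `log_likelihood_fn` and `data` are placeholders for the
    model's log-likelihood over the training set.
    """
    return -(log_likelihood_fn(params, data) + gaussian_log_prior(params)) / temperature
```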
Abstract
We investigate how the behavior of stochastic gradient descent is influenced by model size. By studying families of models obtained by increasing the number of channels in a base network, we examine how the optimal hyperparameters---the batch size and learning rate at which the test error is minimized---correlate with the network width. We find that the optimal "normalized noise scale," which we define to be a function of the batch size, learning rate and the initialization conditions, is proportional to the number of channels (in the absence of batch normalization). This conclusion holds for MLPs, ConvNets and ResNets. A surprising consequence is that if we wish to maintain optimal performance as the network width increases, we must use increasingly small batch sizes. Based on our experiments, we also conjecture that there may be a critical width, beyond which the optimal performance of networks trained with constant SGD ceases to improve unless additional regularization is introduced.
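The experimental protocol described above amounts to a grid search over batch size and learning rate for each width in a model family. The sketch below illustrates that protocol only; `train_and_eval` is a hypothetical user-supplied routine (not from the paper or any library) that trains a model of the given width and returns its test error.

```python
import itertools

def optimal_hyperparameters(train_and_eval, widths, batch_sizes, learning_rates):
    """For each width, return the (batch_size, learning_rate) pair that minimizes
    test error under a simple grid search.

    `train_and_eval(width, batch_size, learning_rate) -> test_error` is an
    assumed, user-supplied training routine.
    """
    best = {}
    for width in widths:
        errors = {
            (b, lr): train_and_eval(width, b, lr)
            for b, lr in itertools.product(batch_sizes, learning_rates)
        }
        best[width] = min(errors, key=errors.get)
    return best
```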
Abstract
Zhang et al. (2016) argued that understanding deep learning requires rethinking generalization. To justify this claim, they showed that deep networks can easily memorize randomly labeled training data, despite generalizing well when shown real labels of the same inputs. We show here that the same phenomenon occurs in small linear models with fewer than a thousand parameters; however, there is no need to rethink anything, since our observations are explained by evaluating the Bayesian evidence in favor of each model. This Bayesian evidence penalizes sharp minima. We also explore the “generalization gap” observed between small and large batch training, identifying an optimum batch size which scales linearly with both the learning rate and the size of the training set. Surprisingly, in our experiments the generalization gap was closed by regularizing the model.
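The linear scaling of the optimum batch size can be read as keeping the SGD noise scale (often written g ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size) fixed at its optimal value. The helper below is a minimal sketch of that rule; the proportionality constant is absorbed into the base configuration.

```python
def rescaled_batch_size(base_batch_size, base_lr, base_train_size, new_lr, new_train_size):
    """Rescale the batch size so that (learning_rate * train_size) / batch_size is
    preserved, reflecting the linear scaling of the optimal batch size with both
    the learning rate and the training set size."""
    return int(round(base_batch_size * (new_lr / base_lr) * (new_train_size / base_train_size)))
```

For example, doubling the learning rate at fixed dataset size doubles the suggested batch size, while halving the dataset halves it.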
Abstract
Recent work has argued that stochastic gradient descent can approximate the Bayesian uncertainty in model parameters near local minima. In this work we develop a similar correspondence for minibatch natural gradient descent (NGD). We prove that for sufficiently small learning rates, if the model predictions on the training set approach the true conditional distribution of labels given inputs, the stationary distribution of minibatch NGD approaches a Bayesian posterior near local minima. The temperature $T = \epsilon N / (2B)$ is controlled by the learning rate $\epsilon$, training set size $N$ and batch size $B$. However, minibatch NGD is not parameterisation invariant, and it does not sample a valid posterior away from local minima. We therefore propose a novel optimiser, “stochastic NGD”, which introduces the additional correction terms required to preserve both properties.
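For reference, the temperature is simple arithmetic in the three quantities above. The sketch below is only that bookkeeping (not the proposed optimiser itself):

```python
def ngd_temperature(learning_rate, train_size, batch_size):
    """Temperature of the stationary distribution, T = epsilon * N / (2 * B)."""
    return learning_rate * train_size / (2.0 * batch_size)

def batch_size_for_temperature(learning_rate, train_size, target_temperature=1.0):
    """Batch size giving a target temperature; T = 1 corresponds to sampling the
    Bayesian posterior in this analysis. A plain rearrangement of the formula."""
    return learning_rate * train_size / (2.0 * target_temperature)
```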
Abstract
It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$. Finally, one can increase the momentum coefficient $m$ and scale $B \propto 1/(1-m)$, although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to $77\%$ validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.
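A minimal sketch of the schedule swap: wherever a step schedule would divide the learning rate by some factor, multiply the batch size by that factor instead, keeping the ratio of learning rate to batch size (and hence the SGD noise) unchanged. The particular schedule and cap below are illustrative assumptions, not the paper's exact training configuration.

```python
def batch_size_schedule(base_batch_size, base_lr, lr_schedule, max_batch_size=None):
    """Convert a step learning-rate decay into an equivalent batch-size increase.

    `lr_schedule(epoch)` is the original decayed learning rate; the returned
    schedule keeps the learning rate fixed at `base_lr` and grows the batch size
    by the same factor the learning rate would have shrunk. `max_batch_size`
    optionally caps the growth (beyond it one would fall back to decaying the
    learning rate).
    """
    def schedule(epoch):
        factor = base_lr / lr_schedule(epoch)
        batch = int(round(base_batch_size * factor))
        if max_batch_size is not None:
            batch = min(batch, max_batch_size)
        return batch
    return schedule

# Example: a 10x decay at epochs 30 and 60 becomes a 10x batch-size increase.
step_lr = lambda epoch: 0.1 * (0.1 ** sum(epoch >= m for m in (30, 60)))
batch_size_at = batch_size_schedule(base_batch_size=128, base_lr=0.1, lr_schedule=step_lr)
# batch_size_at(0) == 128, batch_size_at(30) == 1280, batch_size_at(60) == 12800
```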