Stochastic natural gradient descent draws posterior samples in function space
Abstract
Recent work has argued that stochastic gradient descent can approximate the
Bayesian uncertainty in model parameters near local minima. In this work we
develop a similar correspondence for minibatch natural gradient descent (NGD).
We prove that for sufficiently small learning rates, if the model predictions on
the training set approach the true conditional distribution of labels given inputs,
the stationary distribution of minibatch NGD approaches a Bayesian posterior
near local minima. The temperature T = εN/(2B) is controlled by the learning
rate ε, training set size N and batch size B. However, minibatch NGD is not
parameterisation invariant and it does not sample a valid posterior away from
local minima. We therefore propose a novel optimiser, “stochastic NGD”, which
introduces the additional correction terms required to preserve both properties.
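For concreteness, the relation between the hyperparameters and the sampling temperature, together with a single preconditioned parameter update, can be sketched as follows. This is a minimal NumPy illustration only; the function names, the damped Fisher inverse, and the damping constant are placeholders of ours, not the optimiser defined in the paper.

```python
import numpy as np

def sampling_temperature(lr, train_size, batch_size):
    # Temperature of the implied posterior, T = eps * N / (2B), as stated in the abstract.
    return lr * train_size / (2.0 * batch_size)

def minibatch_ngd_step(theta, minibatch_grad, fisher, lr, damping=1e-4):
    # One minibatch natural gradient step: theta <- theta - lr * F^{-1} g.
    # `fisher` is any positive semi-definite Fisher estimate; the damping term is an
    # illustrative regulariser for numerical stability, not part of the paper's algorithm.
    precond_grad = np.linalg.solve(fisher + damping * np.eye(len(theta)), minibatch_grad)
    return theta - lr * precond_grad

# Example: lr = 0.1, N = 50,000 training points, B = 128 gives T = 0.1 * 50000 / 256 ≈ 19.5.
print(sampling_temperature(0.1, 50_000, 128))
```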