Neural GPUs Learn Algorithms

Lukasz Kaiser
Ilya Sutskever
International Conference on Learning Representations (2016)

Abstract

Learning an algorithm from examples is a fundamental problem that has been
widely studied. It has been addressed using neural networks too, in particular by
Neural Turing Machines (NTMs). These are fully differentiable computers that
use backpropagation to learn their own programming. Despite their appeal, NTMs
have a weakness caused by their sequential nature: they are not parallel and
are hard to train due to their large depth when unfolded.
We present a neural network architecture to address this problem: the Neural
GPU. It is based on a type of convolutional gated recurrent unit and, like the
NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly
parallel which makes it easier to train and efficient to run.
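As a rough illustration of the cell the Neural GPU is built from, the sketch below implements one convolutional gated recurrent unit (CGRU) step in plain NumPy. The state layout (a length-by-channels "tape"), the helper names (conv1d, cgru_step), and the odd kernel width are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def conv1d(s, kernel, bias):
        # Same-padded 1D convolution over the length axis.
        # s: (length, m) state "tape" of m-dimensional cells
        # kernel: (kw, m, m) with odd kernel width kw; bias: (m,)
        kw, m, _ = kernel.shape
        pad = kw // 2
        padded = np.pad(s, ((pad, pad), (0, 0)))
        out = np.zeros_like(s)
        for i in range(s.shape[0]):
            window = padded[i:i + kw]                       # (kw, m)
            out[i] = np.einsum('km,kmn->n', window, kernel) + bias
        return out

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cgru_step(s, params):
        # One gated recurrent update applied convolutionally:
        #   s' = u * s + (1 - u) * tanh(U * (r . s) + B)
        # where the update gate u and reset gate r are computed by convolutions.
        U, B, Up, Bp, Upp, Bpp = params
        u = sigmoid(conv1d(s, Up, Bp))    # update gate
        r = sigmoid(conv1d(s, Upp, Bpp))  # reset gate
        return u * s + (1.0 - u) * np.tanh(conv1d(r * s, U, B))

    # Tiny usage example: apply the same cell repeatedly to the whole state.
    rng = np.random.default_rng(0)
    length, m, kw = 8, 4, 3
    def conv_params():
        return rng.normal(scale=0.1, size=(kw, m, m)), np.zeros(m)
    params = (*conv_params(), *conv_params(), *conv_params())
    s = rng.normal(size=(length, m))
    for _ in range(length):
        s = cgru_step(s, params)
    print(s.shape)  # the state keeps its (length, m) shape at every step

Because each step updates all positions of the state with the same convolution, the computation is parallel across the sequence, in contrast to the step-by-step tape access of an NTM.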
An essential property of algorithms is their ability to handle inputs of arbitrary
size. We show that the Neural GPU can be trained on short instances of an algorithmic
task and successfully generalize to long instances. We verified this on a
number of tasks, including long addition and long multiplication of numbers represented
in binary. We trained the Neural GPU on numbers with up to 20 bits and
observed no errors whatsoever when testing it, even on much longer numbers.
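To make the train-on-short, test-on-long setup concrete, here is a minimal, hypothetical generator of binary-addition instances; the function name and the least-significant-bit-first list encoding are assumptions for illustration and do not reproduce the paper's exact input format.

    import random

    def random_addition_instance(n_bits):
        # Sample two n_bits-bit numbers and return their bits and the sum's bits,
        # least-significant bit first; the sum may need one extra bit.
        a = random.randrange(2 ** n_bits)
        b = random.randrange(2 ** n_bits)
        def to_bits(x, n):
            return [(x >> i) & 1 for i in range(n)]
        return to_bits(a, n_bits), to_bits(b, n_bits), to_bits(a + b, n_bits + 1)

    # Train on short instances (up to 20 bits), evaluate on much longer ones.
    train_set = [random_addition_instance(random.randint(1, 20)) for _ in range(10000)]
    test_set = [random_addition_instance(200) for _ in range(100)]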
To achieve these results, we introduce a technique for training deep recurrent networks:
parameter sharing relaxation. We also find that a small amount of dropout
and gradient noise has a large positive effect on learning and generalization.
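A sketch of these two training aids follows, under the assumption that parameter sharing relaxation means keeping several copies of the recurrent weights during training and pulling them toward their mean with a penalty whose strength is gradually increased until the copies coincide, and that gradient noise means adding annealed Gaussian noise to the gradients. The names, penalty form, and noise schedule below are illustrative, not the paper's exact formulation.

    import numpy as np

    def relaxation_penalty(copies, strength):
        # L2 pull of every parameter copy toward the copies' mean. During training,
        # `strength` is raised until the copies effectively coincide; they can then
        # be averaged into a single shared parameter for evaluation.
        mean = sum(copies) / len(copies)
        return strength * sum(np.sum((c - mean) ** 2) for c in copies)

    def noisy_gradient(grad, step, eta=0.01, gamma=0.55, rng=np.random.default_rng()):
        # Add Gaussian noise with variance eta / (1 + step)**gamma to a gradient.
        sigma = np.sqrt(eta / (1.0 + step) ** gamma)
        return grad + rng.normal(scale=sigma, size=grad.shape)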
