Neural Random Access Machines
Abstract
Deep Neural Networks (DNNs) have achieved great success in supervised learning
tasks, mainly owing to their “depth”, which allows them to represent functions whose
implementation requires some sequential computation; this capability has turned out to be
sufficient to solve many previously intractable problems.
Given that depth was a key ingredient in the success of DNNs, it is plausible
that much deeper neural models, namely models that are computationally universal,
would be able to solve much harder problems using less training data.
The Neural Turing Machine was the first neural network model of this kind, and it
has been able to learn to solve a number of algorithmic tasks from input-output
examples using backpropagation. Although the Neural Turing Machine is
computationally universal, its memory addressing mechanism does not allow
for pointers, which makes the implementation of a number of natural algorithms
cumbersome.
In this paper, we propose and investigate a computationally universal model that
can manipulate and dereference pointers. Pointer manipulation is a natural operation,
so it is interesting to determine whether it, too, can be learned with
backpropagation. We evaluate the new model on a number of simple algorithmic
tasks whose solution requires pointer manipulation and dereferencing. Our results
show that the proposed model can learn to solve algorithmic tasks of this type,
provided that they are not too difficult.