Mathematical Reasoning in Latent Space

Christian Szegedy
Dennis Lee
Kshitij Bansal
Markus Rabe
Sarah Loos
ICLR 2020 (to appear)

Abstract

We study the feasibility of performing reasoning steps in higher-order logic purely in a latent space of fixed dimension. We fix a class of rewrite rules in higher-order logic that can succeed or fail, and that, when they succeed, produce a new mathematical expression. We train graph neural networks to predict the success of rewrite rules and, for successful rewrites, the latent representation of the resulting mathematical formula. To study multi-step reasoning in latent space, we then feed the predicted latent representations back into the neural network and study how well it predicts the success of rewrite rules without seeing the actual mathematical formula that resulted from the rewrite. Our experimental findings are that neural networks can make non-trivial predictions about rewrite success even when they see only predicted latent representations for several steps. Since our corpus of mathematical formulas spans a wide variety of mathematical disciplines, this experiment is a strong indicator of the feasibility of deduction in latent space.
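To make the setup concrete, below is a minimal sketch of the two-headed model and the latent-space rollout described above. It is not the authors' code: it uses PyTorch, all module names and dimensions are hypothetical, and a plain MLP stands in for the paper's graph neural network encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentRewriteModel(nn.Module):
    """Sketch of the model described in the abstract: an encoder maps a
    formula to a fixed-dimension latent vector; one head predicts whether
    each rewrite rule succeeds, the other predicts the latent
    representation of the rewritten formula."""

    def __init__(self, input_dim=256, latent_dim=1024, num_rules=100):
        super().__init__()
        self.num_rules = num_rules
        # An MLP stands in for the paper's graph neural network encoder.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Head 1: per-rule logit for whether the rewrite succeeds.
        self.success_head = nn.Linear(latent_dim, num_rules)
        # Head 2: latent embedding of the formula after applying a rule.
        self.step_head = nn.Sequential(
            nn.Linear(latent_dim + num_rules, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def rollout(self, formula_features, rules):
        """Multi-step reasoning purely in latent space: the formula is
        encoded once; afterwards only predicted embeddings are fed back."""
        z = self.encoder(formula_features)
        success_probs = []
        for rule in rules:
            one_hot = F.one_hot(torch.tensor([rule]), self.num_rules).float()
            success_probs.append(torch.sigmoid(self.success_head(z))[0, rule])
            # Feed the predicted embedding back in; the actual rewritten
            # formula is never seen after the initial encoding.
            z = self.step_head(torch.cat([z, one_hot], dim=-1))
        return z, success_probs


model = LatentRewriteModel()
formula = torch.randn(1, 256)  # placeholder formula features
z_final, probs = model.rollout(formula, rules=[3, 17, 42])
```

The point mirrored here is that rollout re-enters the step head with its own predicted embedding, so every rewrite-success prediction after the first step is made without access to the actual formula.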
