Neural Logic Machines

Honghua Dong
Jiayuan Mao
Tian Lin
Chong Wang
Lihong Li
ICLR (2019)

Abstract

We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for
both inductive learning and logical reasoning. NLMs exploit the power of neural
networks as function approximators and of logic programming as a symbolic
processor for objects with properties, relations, logic connectives, and quantifiers.
After being trained on small-scale tasks (such as sorting short arrays), NLMs can
recover lifted rules, and generalize to large-scale tasks (such as sorting longer
arrays). In our experiments, NLMs achieve perfect generalization in a number of
tasks, from relational reasoning tasks on the family tree and general graphs, to
decision making tasks including sorting arrays, finding shortest paths, and playing
the blocks world. Most of these tasks are hard for neural networks or inductive
logic programming alone to accomplish.
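To make the layered neural-symbolic idea concrete, below is a minimal sketch of one NLM-style layer in Python with PyTorch. It is only an illustration based on the description in the abstract, not the authors' implementation: the class name NLMLayerSketch, the tensor shapes ([batch, n, C] for unary predicates, [batch, n, n, C] for binary ones), the max-based quantifier reduction, and all layer sizes are assumptions made for this example.

```python
# Illustrative sketch only (not the authors' code) of one NLM-style layer
# that exchanges information between unary and binary predicate tensors.
# Assumed shapes: unary predicates [batch, n, C1], binary [batch, n, n, C2].

import torch
import torch.nn as nn


class NLMLayerSketch(nn.Module):
    """One layer: move information across arities, then apply small MLPs."""

    def __init__(self, c1_in, c2_in, c1_out, c2_out, hidden=64):
        super().__init__()
        # Unary output sees the original unary features plus binary features
        # reduced over each object dimension (an "exists"-style quantifier).
        self.mlp1 = nn.Sequential(
            nn.Linear(c1_in + 2 * c2_in, hidden), nn.ReLU(),
            nn.Linear(hidden, c1_out), nn.Sigmoid())
        # Binary output sees the binary features in both argument orders plus
        # the unary features of each argument ("expansion" to higher arity).
        self.mlp2 = nn.Sequential(
            nn.Linear(2 * c2_in + 2 * c1_in, hidden), nn.ReLU(),
            nn.Linear(hidden, c2_out), nn.Sigmoid())

    def forward(self, unary, binary):
        n = unary.shape[1]
        # Reduce binary -> unary via max over one object index (~ "exists y").
        reduced = torch.cat([binary.max(dim=2).values,
                             binary.max(dim=1).values], dim=-1)
        unary_in = torch.cat([unary, reduced], dim=-1)
        # Expand unary -> binary by broadcasting each argument's features.
        exp_x = unary.unsqueeze(2).expand(-1, -1, n, -1)
        exp_y = unary.unsqueeze(1).expand(-1, n, -1, -1)
        # Include the transposed binary tensor to cover argument permutation.
        binary_in = torch.cat([binary, binary.transpose(1, 2),
                               exp_x, exp_y], dim=-1)
        return self.mlp1(unary_in), self.mlp2(binary_in)


if __name__ == "__main__":
    layer = NLMLayerSketch(c1_in=4, c2_in=3, c1_out=8, c2_out=8)
    unary = torch.rand(2, 5, 4)      # 2 scenes, 5 objects, 4 unary predicates
    binary = torch.rand(2, 5, 5, 3)  # 3 binary predicates over object pairs
    u_out, b_out = layer(unary, binary)
    print(u_out.shape, b_out.shape)  # (2, 5, 8) and (2, 5, 5, 8)
```

Stacking several such layers lets learned, soft predicates of different arities exchange information through quantifier-like reductions and expansions, which is one way to read the abstract's claim of combining neural function approximation with symbolic-style processing of relations and quantifiers.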

Research Areas