Google Research

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision

  • Chen Liang
  • Jonathan Berant
  • Quoc V. Le
  • Ken Forbus
  • Ni Lao
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Vancouver, Canada (2017), pp. 23-33

Abstract

Modern semantic parsers, which map natural language utterances to executable logical forms, have been successfully trained over large knowledge bases from weak supervision, but they require hand-crafted rules and substantial feature engineering. Recent attempts to train end-to-end neural networks for semantic parsing have either used strong supervision (full logical forms) or have relied on synthetic datasets and differentiable operations. In this work, we propose the Manager-Programmer-Computer framework to integrate neural network models with symbolic operations. Within this framework, we introduce Neural Symbolic Machines, in which a sequence-to-sequence neural network "programmer" controls a non-differentiable "computer" that executes Lisp programs (equivalent to logical forms) and provides code assistance. The interaction between the "programmer" and the "computer" dramatically reduces the search space, allowing the semantic parser to be learned effectively from weak supervision over a large knowledge base such as Freebase. Our model obtains new state-of-the-art performance on WebQuestionsSP, a challenging semantic parsing dataset.
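The programmer/computer interaction described in the abstract can be illustrated with a minimal sketch: a "computer" executes Lisp-style programs against a knowledge base and, as "code assistance", exposes only the tokens that lead to valid, non-empty results, pruning the "programmer"'s search space. The toy knowledge base and the single `hop` function below are illustrative assumptions, not the paper's actual interface.

```python
# Toy knowledge base (stand-in for Freebase): (subject, relation) -> objects.
KB = {
    ("usa", "capital"): {"washington_dc"},
    ("washington_dc", "population"): {"700000"},
}

def execute(program):
    """The 'computer': execute a Lisp-style expression such as
    ("hop", "usa", "capital"), returning the set of reachable entities."""
    op, subject, relation = program
    if op != "hop":
        raise ValueError(f"unknown operation: {op}")
    return KB.get((subject, relation), set())

def valid_relations(entity):
    """'Code assistance': offer only relations that produce non-empty
    results for this entity, so the programmer never scores dead-end
    tokens -- this is what shrinks the search space."""
    return sorted(rel for (subj, rel) in KB if subj == entity)

# A greedy 'programmer' restricted to the pruned token set:
choices = valid_relations("usa")
answer = execute(("hop", "usa", choices[0]))
print(answer)  # {'washington_dc'}
```

In the actual system the programmer is a trained sequence-to-sequence model and the computer is a full Lisp interpreter over Freebase, but the division of labor is the same: the neural model proposes program tokens, and the symbolic executor both runs the program and constrains which tokens are proposable.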
