Teaching Machines to Read and Comprehend

  • Karl Moritz Hermann
  • Tomas Kocisky
  • Edward Grefenstette
  • Lasse Espeholt
  • Will Kay
  • Mustafa Suleyman
  • Phil Blunsom
NIPS (2015) (to appear)

Abstract

Teaching machines to read natural language documents remains an elusive challenge. Such models can be tested on their ability to answer questions posed on the contents of the documents that they have seen, but until now large scale supervised training and test datasets have been missing for such tasks. In this work we introduce a new machine reading paradigm based on large scale supervised training datasets extracted from readily available online sources. We define models for this task based on both a traditional natural language processing pipeline, and on attention based recurrent neural networks. Our results demonstrate that neural network models are able to learn to read documents and answer complex questions with minimal prior knowledge of language structure.
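To make the attention-based reading idea concrete, below is a minimal sketch of a single attention step in the spirit of such a reader: document token encodings are scored against a query encoding, normalized into weights, and pooled into a summary vector. The dimensions, variable names, and random stand-in encodings are illustrative assumptions, not the paper's implementation (which uses LSTM encoders trained end to end).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes (assumptions, not the paper's settings).
doc_len, hidden = 7, 16

rng = np.random.default_rng(0)
# Stand-ins for encoder outputs: one vector per document token and a
# single vector for the query. In a trained reader these would come
# from recurrent encoders rather than random initialization.
y_doc = rng.normal(size=(doc_len, hidden))
u_query = rng.normal(size=(hidden,))

# Attention parameters (randomly initialized here for illustration).
W_ym = rng.normal(size=(hidden, hidden))
W_um = rng.normal(size=(hidden, hidden))
w_ms = rng.normal(size=(hidden,))

# Score each document token against the query, normalize the scores
# into attention weights, and pool the tokens into a summary vector.
m = np.tanh(y_doc @ W_ym.T + u_query @ W_um.T)   # (doc_len, hidden)
s = softmax(m @ w_ms)                            # weights over tokens
r = s @ y_doc                                    # attended document summary

print("attention weights:", np.round(s, 3))
```

In a full model the summary vector would be combined with the query encoding and scored against candidate answers; the sketch above only shows how attention lets the reader focus on the document tokens most relevant to the question.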
