Google Research

A Hybrid Retrieval-Generation Neural Conversation Model

  • Liu Yang
  • Junjie Hu
  • Minghui Qiu
  • Chen Qu
  • Jianfeng Gao
  • W. Bruce Croft
  • Xiaodong Liu
  • Yelong Shen
  • Jingjing Liu
Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM 2019), ACM (to appear)

Abstract

Intelligent personal assistant systems that are able to have multi-turn conversations with human users are becoming increasingly popular. Most previous research has focused on using either retrieval-based or generation-based methods to develop such systems. Retrieval-based methods have the advantage of returning fluent and informative responses with great diversity, but their performance is limited by the size of the response repository. On the other hand, generation-based methods can produce highly coherent responses on any topic, but the generated responses are often generic and uninformative due to the lack of grounding knowledge. In this paper, we propose a hybrid neural conversation model that combines the merits of both response retrieval and generation methods. Experimental results on Twitter and Foursquare data show that the proposed model outperforms both retrieval-based methods and generation-based methods (including a recently proposed knowledge-grounded neural conversation model) under both automatic evaluation metrics and human evaluation. We hope that the findings in this study provide new insights into how to integrate text retrieval and text generation models for building conversation systems.
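The abstract's high-level recipe (retrieve candidate responses, generate a grounded response, then pick among them) can be sketched as a toy pipeline. This is purely illustrative: the function names, the word-overlap retriever, and the length-based re-ranker below are assumptions for demonstration, not the paper's actual retrieval model, seq2seq generator, or hybrid ranking module.

```python
# Toy sketch of a hybrid retrieval-generation conversation pipeline.
# All components here are illustrative stand-ins, not the paper's method.

def retrieve_candidates(query, repository, k=2):
    """Toy retriever: rank stored (context, response) pairs by word overlap."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    ranked = sorted(repository, key=lambda pair: overlap(query, pair[0]),
                    reverse=True)
    return [response for _, response in ranked[:k]]

def generate_response(query, retrieved):
    """Stand-in for a seq2seq generator conditioned on retrieved responses."""
    facts = "; ".join(retrieved) if retrieved else "no grounding found"
    return f"(generated reply to '{query}', grounded on: {facts})"

def hybrid_respond(query, repository):
    """Retrieve candidates, generate a grounded reply, then re-rank the pool."""
    candidates = retrieve_candidates(query, repository)
    generated = generate_response(query, candidates)
    pool = candidates + [generated]
    # Toy re-ranker: prefer the longer (presumably more informative) response.
    return max(pool, key=len)

repo = [
    ("where to eat in boston", "Try the seafood at Quincy Market."),
    ("best coffee nearby", "The espresso bar on Main St is great."),
]
print(hybrid_respond("any good seafood places in boston?", repo))
```

The key structural point the sketch captures is that the generator sees the retrieved responses as extra input (grounding), and the final response is selected from the union of retrieved and generated candidates rather than from either source alone.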
