Google Research

Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features

(2021), pp. 704-718

Abstract

Knowledge-grounded dialogue systems are intended to convey information based exclusively on evidence provided in a source text. We discuss the challenges of training a generative neural dialogue model for such systems that is controlled to stay faithful to the evidence. Existing datasets contain a mix of conversational responses that are faithful to selected evidence as well as more subjective or chit-chat style responses. We propose evaluation measures that disentangle these styles of response by quantifying groundedness and objectivity. At training time, additional inputs derived from these measures are given to the dialogue model. At generation time, these inputs act as stylistic controls that encourage the model to generate responses faithful to the provided evidence. We also investigate additional controls at decoding time via resampling techniques. In addition to automatic metrics, we perform a human evaluation study in which raters judge the output of these controlled generation models to be generally more objective and faithful to the evidence than that of baseline dialogue systems.
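The abstract describes conditioning the model on stylistic control features at training time and fixing them to the desired values at generation time. A minimal sketch of this control-token pattern is below; the token names (`<objective>`, `<entailed>`, etc.) and the input layout are illustrative assumptions, not the paper's actual vocabulary.

```python
def add_control_tokens(evidence, history, objective=True, grounded=True):
    """Prefix the model input with control tokens indicating the desired
    response style. Token names here are hypothetical placeholders for
    whatever discrete labels the measures assign during training."""
    controls = [
        "<objective>" if objective else "<personal>",
        "<entailed>" if grounded else "<not-entailed>",
    ]
    # At training time, the controls reflect the style of the gold response;
    # at generation time, they are fixed to the faithful/objective setting.
    return " ".join(controls + [evidence, history])


model_input = add_control_tokens(
    "Paris is the capital of France.",
    "User: What is the capital of France?",
)
```

The same string-building step runs in both phases; only the source of the control values changes, which is what lets the controls steer style at inference without any change to the model architecture.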
