Google Research

Residual Energy-Based Models for End-to-End Speech Recognition

Submitted to Interspeech 2021 (2021)

Abstract

End-to-end models with auto-regressive decoders have shown impressive results for automatic speech recognition (ASR). These models formulate the sequence-level probability as a product of the conditional probabilities of all individual tokens given their histories. However, the performance of locally normalised models can be sub-optimal because of factors such as exposure bias, and consequently the model distribution differs from the underlying data distribution. In this paper, a residual energy-based model (R-EBM) is proposed to complement the auto-regressive ASR model and close the gap between the two distributions. In addition, R-EBMs can be regarded as utterance-level confidence estimators, which may benefit many downstream tasks. Experiments on the LibriSpeech dataset show that R-EBMs can reduce word error rates (WERs) by 8.2%/6.7% while improving areas under the precision-recall curves of confidence scores by 12.6%/28.4% on the test-clean/test-other sets. Furthermore, on a state-of-the-art self-supervised learning baseline, R-EBMs also significantly improve both recognition and confidence estimation performance.
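For context, a residual EBM combines the auto-regressive distribution with a learned energy term. A minimal sketch of the general residual form is given below, where P_theta is the locally normalised auto-regressive ASR model, E_phi is the residual energy network, and Z(x) is the normaliser; the notation is illustrative and may differ from the paper's exact parameterisation.

    P_{\theta,\phi}(y \mid x) = \frac{P_\theta(y \mid x)\, \exp(-E_\phi(x, y))}{Z_{\theta,\phi}(x)},
    \qquad
    Z_{\theta,\phi}(x) = \sum_{y'} P_\theta(y' \mid x)\, \exp(-E_\phi(x, y'))

Because Z(x) is shared by all hypotheses for a given utterance, it cancels when ranking candidates, so the combined model can rescore an N-best list without computing the normaliser.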

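As a minimal, hypothetical sketch of how such a model could be applied at inference time, the snippet below rescores an N-best list with the combined score log P(y|x) - E(x, y) and maps the winning hypothesis's negated energy to an utterance-level confidence via a sigmoid. The function names, the energy callable, and the confidence mapping are assumptions for illustration, not the paper's implementation.

    import math

    def rescore_nbest(hypotheses, energy_fn):
        # `hypotheses`: list of (tokens, log_prob) pairs from the
        # auto-regressive decoder's beam search for one utterance.
        # `energy_fn`: hypothetical callable returning the residual
        # energy E(x, y); the acoustics x are fixed for the list, so
        # only the hypothesis tokens are passed in.
        # Z(x) is shared by all hypotheses of the same utterance, so
        # ranking only needs log P(y|x) - E(x, y).
        scored = [(tokens, log_prob - energy_fn(tokens))
                  for tokens, log_prob in hypotheses]
        best_tokens, _ = max(scored, key=lambda pair: pair[1])
        # A low energy suggests a "real"-looking transcript, so the
        # negated energy can be squashed into a confidence score.
        confidence = 1.0 / (1.0 + math.exp(energy_fn(best_tokens)))
        return best_tokens, confidence

    # Toy usage with a stand-in energy function.
    nbest = [(("the", "cat", "sat"), -3.2),
             (("a", "cat", "sat"), -3.5)]
    tokens, conf = rescore_nbest(nbest, energy_fn=lambda y: 0.1 * len(y))

Note that only relative scores matter for picking the best hypothesis, which is why the intractable normaliser never appears in the sketch.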