Non-Attentive Tacotron: Robust and controllable neural TTS synthesis including unsupervised duration modeling

Jonathan Shen
Ye Jia
Mike Chrzanowski
Yu Zhang
Isaac Elias
(2020)

Abstract

This paper presents Non-Attentive Tacotron, a text-to-speech model based on Tacotron 2 in which the attention mechanism is replaced with an explicit duration predictor. This significantly improves robustness as measured by unaligned duration ratio and word deletion rate, two new metrics introduced in this paper for large-scale robustness evaluation using a pre-trained speech recognition model. With Gaussian upsampling, Non-Attentive Tacotron achieves a mean opinion score of 4.41 on a 5-point naturalness scale, slightly outperforming Tacotron 2. The duration predictor enables both utterance-wide and per-phoneme control of duration at inference time. When accurate target durations are scarce or unavailable, the duration predictor can still be trained in a semi-supervised or unsupervised manner, with results almost as good as supervised training.
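To make the Gaussian upsampling idea concrete, here is a minimal NumPy sketch of the general technique: each encoder state is spread over the output frames with Gaussian weights centered on the midpoint of its predicted duration interval. The function name, the use of a per-state spread `sigmas`, and the frame-midpoint convention are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_upsample(encoder_out, durations, sigmas):
    """Sketch of Gaussian upsampling (illustrative, not the paper's exact code).

    encoder_out: [num_states, hidden_dim] encoder outputs, one per phoneme.
    durations:   [num_states] predicted duration of each state, in frames.
    sigmas:      [num_states] assumed per-state Gaussian spread.
    Returns an upsampled sequence of shape [total_frames, hidden_dim].
    """
    ends = np.cumsum(durations)                # end frame of each segment
    centers = ends - durations / 2.0           # midpoint of each segment
    total_frames = int(round(ends[-1]))
    t = np.arange(total_frames) + 0.5          # frame midpoints on the time axis
    # Unnormalized Gaussian weight of state i at frame t
    w = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / sigmas[None, :]) ** 2)
    w = w / w.sum(axis=1, keepdims=True)       # normalize over encoder states
    return w @ encoder_out                     # weighted sum per output frame
```

Because every frame is a smooth mixture of nearby states rather than a hard repetition, the upsampled sequence varies continuously across segment boundaries, which is the property the paper credits for its naturalness gain over plain duration-based repetition.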