Neural Models of Text Normalization for Speech Applications

Felix Stahlberg
Ke Wu
Xiaochang Peng
Computational Linguistics, vol. 45(2) (2019) (to appear)

Abstract

Machine learning, including neural network techniques, has been applied to virtually every domain in natural language processing. One problem that has been somewhat resistant to effective machine learning solutions is text normalization for speech applications such as text-to-speech synthesis (TTS). In this application, one must decide, for example, that "123" is verbalized as "one hundred twenty three" in "123 pages" but as "one twenty three" in "123 King Ave". For this task, state-of-the-art industrial systems depend heavily on hand-written language-specific grammars. In this paper, we present neural network models that treat text normalization for TTS as a sequence-to-sequence problem, in which the input is a text token in context and the output is the verbalization of that token. We find that the most effective model, in terms of both efficiency and accuracy, is one in which the sentential context is computed once and the results of that computation are combined with the computation of each token in sequence to produce the verbalization. This model allows for a great deal of flexibility in how the context is represented, and also allows us to integrate tagging and segmentation into the process. The neural models perform very well overall, but they have one problem: occasionally they predict inappropriate verbalizations, such as reading "3cm" as "three kilometers". While rare, such verbalizations are a major issue for TTS applications. To deal with such cases, we develop an approach based on finite-state "covering grammars", which can be used to guide the neural models (either during both training and decoding, or during decoding alone) away from such "silly" verbalizations. These covering grammars can also largely be learned from data.
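
To make the context-conditioned architecture described above concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: all module names, dimensions, and the mean-pooled context summary are illustrative assumptions. The sentence is encoded once; each token to be normalized is then decoded into its verbalization conditioned on a summary of that shared context.

```python
# Hypothetical sketch (not the paper's code): a context-conditioned
# sequence-to-sequence text normalizer.
import torch
import torch.nn as nn

class ContextSeq2Seq(nn.Module):
    def __init__(self, in_vocab, out_vocab, dim=256):
        super().__init__()
        self.char_emb = nn.Embedding(in_vocab, dim)
        self.out_emb = nn.Embedding(out_vocab, dim)
        # Context encoder: run ONCE over the whole input sentence.
        self.context_enc = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        # Token encoder: run per token that needs verbalization.
        self.token_enc = nn.GRU(dim, dim, batch_first=True)
        # Decoder: emits the verbalization, fed the context summary at each step.
        self.decoder = nn.GRU(dim + 2 * dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, out_vocab)

    def forward(self, sentence_chars, token_chars, prev_words):
        # Encode sentential context once and pool it into a summary vector.
        ctx, _ = self.context_enc(self.char_emb(sentence_chars))
        ctx_summary = ctx.mean(dim=1, keepdim=True)            # (B, 1, 2*dim)
        # Encode the token to be normalized; its state initializes the decoder.
        _, tok_state = self.token_enc(self.char_emb(token_chars))
        # Decode, concatenating the context summary to every decoder input.
        steps = prev_words.size(1)
        dec_in = torch.cat(
            [self.out_emb(prev_words), ctx_summary.expand(-1, steps, -1)], dim=-1)
        out, _ = self.decoder(dec_in, tok_state)
        return self.proj(out)                                   # logits over output words
```

In the paper the context representation is deliberately flexible; the mean-pooled character summary here simply stands in for whichever context encoding is used.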
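The covering-grammar idea can likewise be illustrated with a toy filter. In the paper the constraints are finite-state covering grammars, largely learned from data; the sketch below substitutes a tiny hand-written rule for measure expressions and uses it to prune decoder hypotheses whose meaning disagrees with the input. The unit table, digit table, and function names are invented for illustration.

```python
# Hypothetical illustration of constrained decoding with a "covering grammar":
# enumerate acceptable verbalizations for covered tokens and discard
# hypotheses (e.g. "three kilometers" for "3cm") that fall outside that set.
import re

UNIT_NAMES = {"cm": "centimeters", "km": "kilometers", "kg": "kilograms"}
DIGITS = {"1": "one", "2": "two", "3": "three"}  # toy number grammar

def covering_set(token: str):
    """Return the set of permitted verbalizations, or None if uncovered."""
    m = re.fullmatch(r"(\d)(cm|km|kg)", token)
    if m is None:
        return None                      # grammar does not cover this token
    number, unit = m.groups()
    return {f"{DIGITS[number]} {UNIT_NAMES[unit]}"}

def prune(token: str, hypotheses: list) -> list:
    """Keep only decoder hypotheses licensed by the covering grammar."""
    allowed = covering_set(token)
    if allowed is None:
        return hypotheses                # unconstrained: keep everything
    return [h for h in hypotheses if h in allowed]

# Example: prune("3cm", ["three kilometers", "three centimeters"])
#          -> ["three centimeters"]
```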