Vincent Wan
Authored Publications
Training Text-To-Speech Systems From Synthetic Data: A Practical Approach For Accent Transfer Tasks
Lev Finkelstein
Norman Casagrande
Ye Jia
Alexey Petelin
Jonathan Shen
Yu Zhang
Interspeech (2022)
Abstract
Transfer tasks in text-to-speech (TTS) synthesis, in which one or more aspects of the speech of one set of speakers are transferred to another set of speakers that do not originally feature these aspects, remain challenging. One of the challenges is that models with high-quality transfer capabilities can suffer from stability issues, making them impractical for user-facing, critical tasks. This paper demonstrates that transfer can be obtained by training a robust TTS system on data generated by a less robust TTS system designed for a high-quality transfer task; in particular, a CHiVE-BERT monolingual TTS system is trained on the output of a Tacotron model designed for accent transfer. While some quality loss is inevitable with this approach, experimental results show that models trained on synthetic data in this way can produce high-quality audio displaying accent transfer while preserving speaker characteristics such as speaking style.
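As an illustration of the recipe described above, the sketch below (hypothetical function and type names, not the authors' code) shows the two-stage idea: a less robust accent-transfer model synthesizes a corpus, which then serves as training data for a robust TTS system.

```python
# Minimal sketch of the two-stage training recipe: a high-quality but less
# robust accent-transfer TTS generates a synthetic corpus, and a robust
# production-style TTS model is then trained on it. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Utterance:
    text: str
    speaker_id: str
    accent: str
    audio: bytes  # synthesized waveform (placeholder representation)

def build_synthetic_corpus(
    texts: List[str],
    speakers: List[str],
    target_accent: str,
    transfer_tts: Callable[[str, str, str], bytes],
) -> List[Utterance]:
    """Run the (less robust) accent-transfer TTS over a text corpus."""
    corpus = []
    for speaker in speakers:
        for text in texts:
            audio = transfer_tts(text, speaker, target_accent)
            corpus.append(Utterance(text, speaker, target_accent, audio))
    return corpus

# A robust system (e.g. a CHiVE-BERT-style model) would then be trained on
# `corpus` as if it were recorded data; that training loop is model-specific
# and not reproduced here.
```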
Abstract
Intonation is characterized by rises and falls in pitch and energy. In previous work, we explicitly modelled these prosodic features using Clockwork Hierarchical Variational Autoencoders (CHiVE) to show that we can generate multiple intonation contours for any text. However, recent advances in text-to-speech synthesis produce spectrograms which are inverted by neural vocoders to produce waveforms. Spectrograms encode intonation in a complex way; there is no simple, explicit representation analogous to pitch (fundamental frequency) and energy. In this paper, we extend CHiVE to model intonation within a spectrogram. Compared to the original model, the spectrogram extension gives better mean opinion scores in subjective listening tests. We show that the intonation in the generated spectrograms matches the intonation represented by the generated pitch curves.
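The sketch below illustrates, under assumed shapes and names rather than the paper's implementation, how a variational prosody layer can be sampled several times to decode distinct spectrogram variants for the same text.

```python
# Minimal sketch: sample the variational layer repeatedly and decode each
# sample into a mel spectrogram, yielding multiple intonation variants for
# one sentence. Shapes and the decoder are assumptions for illustration.

import numpy as np

def sample_prosody_embedding(mu: np.ndarray, log_var: np.ndarray,
                             rng: np.random.Generator) -> np.ndarray:
    """Reparameterised sample z = mu + sigma * eps from the approximate posterior."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def generate_variants(mu, log_var, decode_spectrogram, n_variants=3, seed=0):
    """Decode several sampled prosody embeddings into distinct spectrograms."""
    rng = np.random.default_rng(seed)
    return [decode_spectrogram(sample_prosody_embedding(mu, log_var, rng))
            for _ in range(n_variants)]
```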
CHiVE: Varying Prosody in Speech Synthesis with a Linguistically Driven Dynamic Hierarchical Conditional Variational Network
Jakub Vit
Proceedings of the 36th International Conference on Machine Learning (ICML 2019), PMLR, pp. 3331-3340
Abstract
The prosodic aspects of speech signals produced by current text-to-speech systems are typically averaged over training material, and as such lack the variety and liveliness found in natural speech. To avoid monotony and averaged prosody contours, it is desirable to have a way of modeling the variation in the prosodic aspects of speech, so audio signals can be synthesized in multiple ways for a given text. We present a new, hierarchically structured conditional variational autoencoder to generate prosodic features (fundamental frequency, energy and duration) suitable for use with a vocoder or a generative model like WaveNet. At inference time, an embedding representing the prosody of a sentence may be sampled from the variational layer to allow for prosodic variation. To efficiently capture the hierarchical nature of the linguistic input (words, syllables and phones), both the encoder and decoder parts of the autoencoder are hierarchical, in line with the linguistic structure, with layers being clocked dynamically at the respective rates. We show in our experiments that our dynamic hierarchical network outperforms a non-hierarchical state-of-the-art baseline, and, additionally, that prosody transfer across sentences is possible by employing the prosody embedding of one sentence to generate the speech signal of another.
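The following sketch illustrates the dynamic clocking idea in simplified form (hypothetical function names, not the CHiVE implementation): a phone-rate recurrence updates at every step, while syllable- and word-rate recurrences update only at their respective linguistic boundaries.

```python
# Illustrative sketch of "dynamic clocking": the phone-rate cell ticks every
# step, the syllable- and word-rate cells only tick at the corresponding
# linguistic boundaries. Cell functions are passed in and are assumptions.

from typing import Callable, List, Tuple
import numpy as np

def clocked_encode(
    phone_feats: List[np.ndarray],      # one feature vector per phone
    syllable_boundaries: List[bool],    # True where a syllable ends
    word_boundaries: List[bool],        # True where a word ends
    phone_cell: Callable, syll_cell: Callable, word_cell: Callable,
    h_phone: np.ndarray, h_syll: np.ndarray, h_word: np.ndarray,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    for x, syll_end, word_end in zip(phone_feats, syllable_boundaries, word_boundaries):
        h_phone = phone_cell(x, h_phone)          # ticks every phone
        if syll_end:
            h_syll = syll_cell(h_phone, h_syll)   # ticks once per syllable
        if word_end:
            h_word = word_cell(h_syll, h_word)    # ticks once per word
    return h_phone, h_syll, h_word
```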
Abstract
A neural network model that significantly improves unit-selection-based text-to-speech (TTS) synthesis is presented. The model employs a sequence-to-sequence LSTM-based autoencoder that compresses the acoustic and linguistic features of each unit into a fixed-size vector referred to as an embedding. Unit selection is facilitated by formulating the target cost as an L2 distance in the embedding space. In open-domain speech synthesis the method achieves a 0.2 improvement in MOS, while for limited-domain synthesis it reaches the cap of 4.5 MOS. Furthermore, the new TTS system halves the gap between the previous unit-selection system and WaveNet in terms of quality, while retaining low computational cost and latency.
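A minimal sketch of the target-cost formulation described above, using assumed array shapes rather than the paper's actual pipeline: each candidate unit's target cost is its L2 distance to a predicted embedding, and the lowest-cost unit is selected.

```python
# Minimal sketch: L2 distance in embedding space as the unit-selection target
# cost. Array shapes are assumptions for illustration.

import numpy as np

def target_costs(target_embedding: np.ndarray, unit_embeddings: np.ndarray) -> np.ndarray:
    """L2 distance between the predicted target embedding and each candidate unit.

    target_embedding: (d,) embedding predicted for the desired unit.
    unit_embeddings:  (n, d) fixed-size embeddings of the candidate units.
    """
    return np.linalg.norm(unit_embeddings - target_embedding, axis=1)

def best_unit(target_embedding: np.ndarray, unit_embeddings: np.ndarray) -> int:
    """Index of the candidate unit with the lowest target cost."""
    return int(np.argmin(target_costs(target_embedding, unit_embeddings)))
```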