Conformer Parrotron: a Faster and Stronger End-to-end Speech Conversion and Recognition Model for Atypical Speech

Zhehuai Chen
Xia Zhang
Youzheng Chen
Liyang Jiang
Andrea Chu
Rohan Doshi
Interspeech 2021 (2021)

Abstract

Parrotron is an end-to-end, personalizable model that performs many-to-one voice conversion and Automatic Speech Recognition (ASR) simultaneously for atypical speech. In this work, we present the next-generation Parrotron model, with improvements in overall performance and in training and inference speed. The proposed architecture builds on the recently popularized Conformer encoder used in ASR, which comprises convolution- and attention-based blocks. We introduce architectural modifications that sub-sample encoder activations along the time axis to speed up training and inference, and we show that jointly improving ASR and voice-conversion quality requires a corresponding up-sampling in the decoder network. We provide an in-depth analysis of how the proposed approach maximizes the efficiency of a speech-to-speech conversion model in the context of atypical speech. Experiments on both many-to-one and one-to-one dysarthric speech conversion tasks show up to a 7x speedup and a 35% relative reduction in WER over the previous best Transformer-based Parrotron model. We also show that these techniques are general enough to provide similar wins for the Transformer-based Parrotron model.
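To make the time-reduction idea concrete, below is a minimal sketch of sub-sampling encoder activations and up-sampling them again before the decoder, so the attention blocks run over a shorter sequence while the spectrogram decoder still sees one activation per output frame. It uses PyTorch with a stock Transformer layer standing in for the Conformer blocks; the class name, the 4x reduction factor, and the dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (PyTorch) of the time-reduction idea described in the
# abstract: sub-sample encoder activations along the time axis, then
# up-sample before the decoder so output frame alignment is preserved.
# All names, the 4x reduction factor, and the dimensions are assumptions.
import torch
import torch.nn as nn


class TimeReducedEncoder(nn.Module):
    """Strided-conv sub-sampler around a stand-in encoder block."""

    def __init__(self, dim: int = 256, reduction: int = 4):
        super().__init__()
        self.reduction = reduction
        # Strided 1-D convolution reduces the frame rate, so the
        # attention block below runs over a ~reduction-times shorter
        # sequence (the source of the training/inference speed-up).
        self.subsample = nn.Conv1d(dim, dim, kernel_size=reduction,
                                   stride=reduction)
        # Stand-in for the Conformer blocks in the actual model.
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                                batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) -> (batch, time // reduction, dim)
        h = self.subsample(x.transpose(1, 2)).transpose(1, 2)
        h = self.block(h)
        # Up-sample back to the original frame rate so the spectrogram
        # decoder sees one activation per output frame, the corresponding
        # up-sampling the abstract argues is needed for conversion quality.
        return h.repeat_interleave(self.reduction, dim=1)


if __name__ == "__main__":
    enc = TimeReducedEncoder()
    frames = torch.randn(2, 80, 256)  # 2 utterances, 80 frames, 256-dim
    print(enc(frames).shape)          # torch.Size([2, 80, 256])
```

A simple `repeat_interleave` is used here as the up-sampler for brevity; a learned up-sampler (e.g., a transposed convolution) is an equally plausible reading of the decoder-side modification.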

Research Areas