Unsupervised speech representation learning using WaveNet autoencoders

Jan Chorowski
Samy Bengio
Aäron van den Oord
IEEE/ACM Transactions on Audio, Speech, and Language Processing (2019)

Abstract

We consider the task of unsupervised extraction of meaningful latent representations of speech by applying auto-encoding neural networks to speech waveforms. The goal is to learn a representation which is able to capture high-level semantic content from the signal, e.g. phoneme identities, while being invariant to confounding low-level details in the signal such as the underlying pitch contour or background noise. The behavior of auto-encoder models depends on the kind of constraint that is applied to the latent representation. We compare three variants: a simple dimensionality reduction bottleneck, a Gaussian Variational Auto-Encoder (VAE), and a discrete Vector Quantized VAE (VQ-VAE). We analyze the quality of the learned representation in terms of its speaker independence, the ability to predict phonetic content, and the ability to accurately reconstruct individual spectrogram frames. Moreover, for the discrete encodings extracted using the VQ-VAE, we measure the ease of mapping them to phonemes. We introduce a regularization scheme that forces the representations to concentrate on the phonetic content of the utterance and report performance comparable with the top entries in the ZeroSpeech 2017 unsupervised acoustic unit discovery task.
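The three variants differ only in the constraint placed on the encoder output before it is passed to the WaveNet decoder. As a rough illustration of the discrete case, the sketch below shows a minimal VQ-VAE-style quantization bottleneck in PyTorch: encoder outputs are snapped to their nearest codebook entries, with a straight-through gradient estimator and the standard codebook/commitment losses. The codebook size, code dimension, and commitment cost are illustrative placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization bottleneck (VQ-VAE style).

    Hyperparameters below are illustrative, not taken from the paper.
    """
    def __init__(self, num_codes=320, code_dim=64, commitment_cost=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.commitment_cost = commitment_cost

    def forward(self, z_e):
        # z_e: (batch, time, code_dim) continuous encoder outputs
        flat = z_e.reshape(-1, z_e.shape[-1])
        # squared Euclidean distance to every codebook entry
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        indices = d.argmin(dim=1)
        z_q = self.codebook(indices).view_as(z_e)
        # codebook loss pulls codes toward encoder outputs;
        # commitment loss keeps encoder outputs near their chosen codes
        loss = F.mse_loss(z_q, z_e.detach()) \
             + self.commitment_cost * F.mse_loss(z_e, z_q.detach())
        # straight-through estimator: copy gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, indices.view(z_e.shape[:-1])
```

In a setup like the one described in the abstract, the quantized sequence `z_q` would condition an autoregressive WaveNet decoder that reconstructs the waveform, while the discrete `indices` are the acoustic units whose correspondence to phonemes is evaluated.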