Parallel WaveNet: Fast High-Fidelity Speech Synthesis

  • Aäron van den Oord
  • Yazhe Li
  • Igor Babuschkin
  • Karen Simonyan
  • Oriol Vinyals
  • Koray Kavukcuoglu
  • George van den Driessche
  • Edward Lockhart
  • Luis Carlos Cobo Rus
  • Florian Stimberg
  • Norman Casagrande
  • Dominik Grewe
  • Seb Noury
  • Sander Dieleman
  • Erich Elsen
  • Nal Kalchbrenner
  • Heiga Zen
  • Alexander Graves
  • Helen King
  • Thomas Walters
  • Dan Belov
  • Demis Hassabis
Google DeepMind (2017)

Abstract

The recently developed WaveNet architecture [27] is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples more than 20 times faster than real time, and is deployed online by Google Assistant, including serving multiple English and Japanese voices.
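As a rough illustration of the training signal described in the abstract, the sketch below shows distillation by density matching: samples drawn in a single parallel pass from a "student" are scored by a frozen "teacher", and the student is trained to minimise the KL divergence between its own distribution and the teacher's. The models here are toy stand-ins (a one-dimensional affine flow as the student, a fixed Gaussian as the teacher), not the WaveNet or parallel feed-forward networks from the paper, and all names are illustrative.

```python
# Toy sketch of Probability Density Distillation: the student proposes
# samples in parallel, the frozen teacher scores them, and the student
# minimises a Monte-Carlo estimate of KL(student || teacher).
import torch

torch.manual_seed(0)

# Frozen "teacher": a fixed Gaussian standing in for the trained WaveNet.
teacher = torch.distributions.Normal(loc=2.0, scale=0.5)

# "Student": an affine flow x = mu + sigma * z applied to noise z ~ N(0, 1),
# so every output sample is produced in a single parallel pass.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    z = torch.randn(1024)                    # parallel noise draw
    sigma = log_sigma.exp()
    x = mu + sigma * z                       # student samples, in parallel
    # log q(x) for an affine transform of standard normal noise
    student_log_prob = torch.distributions.Normal(mu, sigma).log_prob(x)
    teacher_log_prob = teacher.log_prob(x)   # teacher scores the samples
    # Monte-Carlo estimate of KL(student || teacher)
    #   = E_{x ~ student}[ log q(x) - log p(x) ]
    loss = (student_log_prob - teacher_log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(mu.item(), log_sigma.exp().item())     # approaches (2.0, 0.5)
```

The KL direction follows the abstract's description of the teacher evaluating the student's output: gradients flow back through the sampled waveform into the student only, while the teacher's parameters stay fixed. In this toy run the student's parameters converge to the teacher's.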
