Dan Belov
I am a Distinguished Engineer at DeepMind and Google. From 2005 to 2016 I built large-scale storage, search indexing, and security systems at Google. Since 2016 I have been instrumental in building the engineering organization at DeepMind, including a large Robotics Lab. My teams have delivered infrastructure for scientific breakthroughs and have improved the utilization of all ML training hardware at Google by 15%. I am now focused on building novel systems to solve large-scale scientific challenges.
I am interested in solving the following two problems:
1. Delivering an infinite amount of compute at zero cost
2. Running any program on any system efficiently, without effort
Authored Publications
Parallel WaveNet: Fast High-Fidelity Speech Synthesis
Aäron van den Oord
Yazhe Li
Igor Babuschkin
Karen Simonyan
Koray Kavukcuoglu
George van den Driessche
Luis Carlos Cobo Rus
Florian Stimberg
Norman Casagrande
Dominik Grewe
Seb Noury
Sander Dieleman
Erich Elsen
Nal Kalchbrenner
Alexander Graves
Helen King
Thomas Walters
Demis Hassabis
Google DeepMind (2017)
Abstract
The recently developed WaveNet architecture [27] is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples more than 20 times faster than real time, and is deployed online by Google Assistant, including serving multiple English and Japanese voices.
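The abstract describes Probability Density Distillation only at a high level. Below is a minimal, hedged sketch of the kind of objective it refers to: a parallel "student" network is trained against a frozen autoregressive "teacher" by minimizing a KL divergence estimated from student samples. All function names and the toy Gaussian placeholders are illustrative assumptions, not the paper's actual implementation.

# Sketch of a probability-density-distillation objective (illustrative only).
# student_sample_and_logprob and teacher_logprob are hypothetical stand-ins.
import numpy as np

def student_sample_and_logprob(rng, noise_shape):
    """Hypothetical parallel student: maps noise to audio samples and
    returns the samples plus their log-density under the student."""
    z = rng.standard_normal(noise_shape)
    x = z  # placeholder for a flow transform x = f(z) with tractable log-det
    log_p_student = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    return x, log_p_student

def teacher_logprob(x):
    """Hypothetical frozen teacher (in the paper, a trained WaveNet) scoring
    the student's samples; here a standard Gaussian placeholder."""
    return -0.5 * np.sum(x**2 + np.log(2 * np.pi), axis=-1)

def distillation_loss(rng, noise_shape):
    """Monte Carlo estimate of KL(student || teacher):
    E_{x ~ student}[log p_student(x) - log p_teacher(x)]."""
    x, log_p_s = student_sample_and_logprob(rng, noise_shape)
    log_p_t = teacher_logprob(x)
    return np.mean(log_p_s - log_p_t)

rng = np.random.default_rng(0)
print(distillation_loss(rng, (8, 16000)))  # toy batch: 8 clips of 1 s at 16 kHz

Because the student exposes its own log-density, the loss can be estimated entirely from samples the student generates in parallel, which is what makes the approach suited to massively parallel hardware.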