Oriol Vinyals
Oriol Vinyals is a Principal Scientist at Google DeepMind and a team lead of the Deep Learning group. His work focuses on deep learning and artificial intelligence. Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and is a recipient of the 2016 MIT TR35 innovator award. His research has been featured multiple times in the New York Times, the Financial Times, WIRED, and the BBC, and his articles have been cited over 70,000 times. He served as program chair for the International Conference on Learning Representations (ICLR) in 2017 and 2018, and has been an area chair for many editions of the NeurIPS and ICML conferences. Some of his contributions, such as seq2seq, knowledge distillation, and TensorFlow, are used in Google Translate, text-to-speech, and speech recognition, serving billions of queries every day. He was the lead researcher of the AlphaStar project, which created an agent that defeated a top professional player at StarCraft and achieved Grandmaster level, work featured on the cover of Nature. At DeepMind he continues to work on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, deep learning, and reinforcement learning.
Authored Publications
Emergent abilities of large language models
Barret Zoph
Colin Raffel
Dani Yogatama
Jason Wei
Liam B. Fedus
Maarten Paul Bosma
Percy Liang
Sebastian Borgeaud
Tatsunori B. Hashimoto
Yi Tay
TMLR (2022)
Scaling up language models has been shown to predictably confer a range of benefits, such as improved performance and sample efficiency. This paper discusses an unpredictable phenomenon that we call emergent abilities of large language models. On tasks exhibiting emergent abilities, models perform close to randomly until a sufficiently large scale is reached, so their emergence cannot be predicted by extrapolating a scaling law fit to small-scale models. The emergence of such abilities suggests that additional scaling could further expand the range of tasks that language models can perform. We discuss the implications of these phenomena and suggest directions for future research.
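As a rough illustration of why such abilities defy extrapolation, the sketch below fits a power-law trend to hypothetical small-model accuracies and extrapolates it to a larger scale; all numbers are made up for illustration and are not from the paper.

```python
# Hypothetical sketch: why emergent abilities defeat extrapolation.
# Fit a power-law scaling trend to small-model accuracies, then
# extrapolate to a larger scale; on an "emergent" task the prediction
# fails badly. Illustrative numbers only.
import numpy as np

params = np.array([1e8, 1e9, 1e10])          # small-model sizes
smooth_acc = np.array([0.42, 0.51, 0.60])    # smooth task: steady gains
emergent_acc = np.array([0.25, 0.26, 0.25])  # emergent task: ~random (4-way)

def extrapolate(sizes, accs, target):
    # Fit a log-linear (power-law) trend, predict accuracy at `target`.
    slope, intercept = np.polyfit(np.log10(sizes), accs, deg=1)
    return slope * np.log10(target) + intercept

target = 1e12
print(extrapolate(params, smooth_acc, target))    # ~0.78: plausible
print(extrapolate(params, emergent_acc, target))  # ~0.25: yet the real
# large model might score far higher -- invisible at small scale.
```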
Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs
Aditya Paliwal
Felix Gimeno
Vinod Gopal Nair
Yujia Li
Miles Lubin
International Conference on Learning Representations (ICLR) (2020)
We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.
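To make one of the two objectives concrete, here is a minimal sketch of how peak memory usage can be computed for a given execution schedule of a computation graph; the graph encoding and tensor sizes are hypothetical stand-ins, and the paper's learned optimizer searches over decisions that change this cost rather than merely evaluating it.

```python
# Minimal sketch of one objective from the paper: peak memory of a
# schedule for a computation graph. A tensor is freed after its last use.
def peak_memory(schedule, inputs, sizes):
    """schedule: ops in execution order; inputs: op -> list of input ops;
    sizes: op -> output tensor size."""
    last_use = {}
    for step, op in enumerate(schedule):
        for src in inputs[op]:
            last_use[src] = step
    live, peak = 0, 0
    for step, op in enumerate(schedule):
        live += sizes[op]                      # allocate op's output
        peak = max(peak, live)
        for src in inputs[op]:                 # free inputs now dead
            if last_use[src] == step:
                live -= sizes[src]
    return peak

inputs = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
sizes = {"a": 4, "b": 4, "c": 2, "d": 1}
print(peak_memory(["a", "b", "c", "d"], inputs, sizes))  # 10
```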
Pointer Graph Networks
Matthew C. Overlan
Razvan Pascanu
Charles Blundell
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020)
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by the insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving. In the absence of reliable domain expertise, one might resort to inferring the latent graph structure, which is often difficult due to the vast search space of possible graphs. Here we introduce Pointer Graph Networks (PGNs), which augment sets or graphs with additional inferred edges for improved model expressivity. PGNs allow each node to dynamically point to another node, followed by message passing over these pointers. The sparsity of this adaptable graph structure makes learning tractable while still being sufficiently expressive to simulate complex algorithms. Critically, the pointing mechanism is directly supervised to model long-term sequences of operations on classical data structures, incorporating useful structural inductive biases from theoretical computer science. Qualitatively, we demonstrate that PGNs can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees. PGNs generalise out-of-distribution to 5x larger test inputs on dynamic graph connectivity tasks, outperforming unrestricted GNNs and Deep Sets.
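For reference, the classical data structure that supervises the pointing mechanism in one of the experiments is disjoint-set union; the standard textbook implementation below (not the paper's code) shows the parent-pointer rewrites a PGN is trained to imitate.

```python
# Disjoint-set union (union-find) with path compression. Each element's
# parent pointer is exactly the kind of node-to-node pointer a PGN is
# supervised to model. Standard textbook code.
class DisjointSetUnion:
    def __init__(self, n):
        self.parent = list(range(n))  # every node starts as its own root

    def find(self, x):
        # Follow pointers to the root, compressing the path as we go.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry  # one pointer rewrite merges two sets

dsu = DisjointSetUnion(5)
dsu.union(0, 1); dsu.union(1, 2)
print(dsu.find(0) == dsu.find(2))  # True: dynamic connectivity query
```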
Preventing Posterior Collapse with δ-VAEs
Due to the phenomenon of "posterior collapse," current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires augmenting the objective so it does not only maximize the likelihood of the data. In this paper, we propose an alternative that utilizes the most powerful generative models as decoders, optimizing the variational lower bound while ensuring that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent variable models, our approach resembles the classic representation learning approach of slow feature analysis. We demonstrate the efficacy of our approach at modeling text on LM1B and modeling images: learning representations, improving sample quality, and achieving state-of-the-art log-likelihood on CIFAR-10 and ImageNet 32×32.
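A hedged sketch of the core idea for a single Gaussian latent: choose the posterior family so that the KL to the prior is at least δ by construction, so the rate cannot collapse to zero. The concrete parameterization below is illustrative, not the paper's exact construction.

```python
# Sketch of the delta-VAE idea for one Gaussian latent: pick the
# posterior family so KL(q || p) >= delta by construction, preventing
# posterior collapse (KL -> 0). Illustrative parameterization only.
import math

def kl_gauss(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) )
    return 0.5 * (mu**2 + sigma**2 - 1.0) - math.log(sigma)

delta = 0.1
# Fix sigma away from 1 so that even at mu = 0 the KL is at least delta
# (sigma0 chosen numerically so 0.5*(s^2 - 1) - ln(s) >= delta).
sigma0 = 0.55
assert kl_gauss(0.0, sigma0) >= delta
# The encoder may now move mu freely; the rate floor cannot collapse.
print(kl_gauss(0.0, sigma0), kl_gauss(1.0, sigma0))
```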
Universal Transformers
Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
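A minimal sketch of the UT recurrence with ACT-style per-position halting appears below; attend_and_transition and halt_prob are stand-ins for the learned shared self-attention block and halting unit, so this is a schematic rather than the published model.

```python
# Schematic of the Universal Transformer recurrence with dynamic
# per-position halting. The two helper functions are stand-ins for
# the shared self-attention block and the learned halting unit.
import numpy as np

def attend_and_transition(h):
    # Stand-in for shared self-attention + transition: mix positions
    # (uniform attention) then apply a nonlinearity.
    return np.tanh(h + h.mean(axis=0, keepdims=True))

def halt_prob(h):
    # Stand-in for the sigmoidal halting unit, one scalar per position.
    return 1.0 / (1.0 + np.exp(-h.sum(axis=1)))

def universal_transformer(h, max_steps=8, threshold=0.99):
    n = h.shape[0]
    cum_halt = np.zeros(n)            # accumulated halting probability
    still_running = np.ones(n, bool)
    for _ in range(max_steps):        # the SAME block at every step
        new_h = attend_and_transition(h)
        h = np.where(still_running[:, None], new_h, h)  # halted = frozen
        cum_halt += np.where(still_running, halt_prob(h), 0.0)
        still_running &= cum_halt < threshold
        if not still_running.any():
            break
    return h

print(universal_transformer(np.random.randn(5, 4)).shape)  # (5, 4)
```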
A Study on Overfitting in Deep Reinforcement Learning
Recent years have witnessed significant progress in deep Reinforcement Learning (RL). Empowered by large-scale neural networks, carefully designed architectures, novel training algorithms, and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting could happen "robustly": commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when all of them achieve optimal rewards during training. These observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion on overfitting in RL and a study of the generalization behaviors from the perspective of inductive bias.
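A toy illustration of the failure mode described above: an agent can achieve optimal training reward while generalizing no better than chance. The two-armed task and memorizing agent below are hypothetical and purely illustrative.

```python
# Toy illustration: optimal training reward, chance-level test reward.
# The "agent" memorizes the correct action per training seed in a
# two-armed task whose correct arm is a random function of the seed.
import random

def correct_arm(seed):
    return random.Random(seed).randint(0, 1)

train_seeds = list(range(100))
test_seeds = list(range(100, 200))

memory = {s: correct_arm(s) for s in train_seeds}  # "training": memorize

def act(seed):
    return memory.get(seed, 0)  # unseen seed: fall back to arm 0

train_score = sum(act(s) == correct_arm(s) for s in train_seeds) / 100
test_score = sum(act(s) == correct_arm(s) for s in test_seeds) / 100
print(train_score, test_score)  # 1.0 vs ~0.5: optimal yet overfitted
```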
Temporal Modeling Using Dilated Convolution and Gating for Voice-Activity-Detection
Gabor Simko
Aäron van den Oord
ICASSP 2018
Voice-activity-detection (VAD) is the task of predicting where in the utterance is speech versus background noise. It is an important first step in determining when to open the microphone (i.e., start-of-speech) and close the microphone (i.e., end-of-speech) for streaming speech recognition applications such as Voice Search. Long short-term memory neural networks (LSTMs) have been a popular architecture for sequential modeling of acoustic signals, and have been used successfully for many VAD applications. However, it has been observed that LSTMs suffer from state saturation problems when the utterance is long (i.e., for voice dictation tasks), which requires the LSTM state to be periodically reset. In this paper, we propose an alternative architecture that does not suffer from saturation problems, modeling temporal variations through a stateless dilated convolutional neural network (CNN). The proposed architecture differs from conventional CNNs in three respects: (1) dilated causal convolution, (2) gated activations, and (3) residual connections. Results on a Google Voice Typing task show that the proposed architecture achieves a 14% relative false accept (FA) improvement at a false reject (FR) rate of 1% over state-of-the-art LSTMs for the VAD task. We also include detailed experiments investigating the factors that distinguish the proposed architecture from conventional convolution.
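A rough numpy sketch of the building block described above, combining causal dilated convolution, a gated (tanh × sigmoid) activation, and a residual connection; the single-channel setup and random weights are simplifications for illustration.

```python
# Sketch of the three components named in the abstract: causal dilated
# convolution, gated activation, residual connection. Single channel.
import numpy as np

def causal_dilated_conv(x, w, dilation):
    # x: (time,), w: (kernel,); left-pad so y[t] depends only on x[<= t].
    pad = dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return sum(w[k] * xp[pad - k * dilation : pad - k * dilation + len(x)]
               for k in range(len(w)))

def gated_residual_block(x, w_filter, w_gate, dilation):
    f = np.tanh(causal_dilated_conv(x, w_filter, dilation))              # filter
    g = 1.0 / (1.0 + np.exp(-causal_dilated_conv(x, w_gate, dilation)))  # gate
    return x + f * g  # residual connection; the block itself is stateless

x = np.random.randn(16)
y = gated_residual_block(x, np.random.randn(3), np.random.randn(3), dilation=2)
print(y.shape)  # (16,)
```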
Hierarchical Representations for Efficient Architecture Search
Hanxiao Liu
Karen Simonyan
Chrisantha Fernando
Koray Kavukcuoglu
International Conference on Learning Representations (2018)
We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.
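A bare-bones sketch of the search loop family the paper uses (tournament-selection evolution with mutation only) appears below; the flat genotype and placeholder fitness stand in for the paper's hierarchical motifs and trained validation accuracy.

```python
# Tournament-selection evolution with mutation only. The genotype and
# fitness are stand-ins: real genotypes encode motifs of motifs, and
# fitness is the validation accuracy of the trained architecture.
import random

OPS = ["conv3x3", "conv1x1", "maxpool", "identity"]

def random_genotype():
    return [random.choice(OPS) for _ in range(6)]  # flattened motif

def mutate(genotype):
    child = list(genotype)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def fitness(genotype):
    return genotype.count("conv3x3")  # placeholder for trained accuracy

population = [random_genotype() for _ in range(20)]
for _ in range(200):
    contestants = random.sample(population, 5)  # tournament
    winner = max(contestants, key=fitness)
    population.append(mutate(winner))           # asexual reproduction
print(max(population, key=fitness))
```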
Relational inductive biases, deep learning, and graph networks
Peter Battaglia
Jessica Blake Chandler Hamrick
Victor Bapst
Alvaro Sanchez
Vinicius Zambaldi
Mateusz Malinowski
Andrea Tacchetti
David Raposo
Adam Santoro
Ryan Faulkner
Caglar Gulcehre
Francis Song
Andy Ballard
Justin Gilmer
Ashish Vaswani
Kelsey Allen
Charles Nash
Victoria Jayne Langston
Chris Dyer
Nicolas Heess
Daan Wierstra
Matt Botvinick
Yujia Li
Razvan Pascanu
arXiv (2018)
The purpose of this paper is to explore relational inductive biases in modern AI, especially deep learning, describing a rough taxonomy of existing approaches and introducing a common mathematical framework for expressing and unifying various approaches. The key theme running through this work is structure: how the world is structured, and how the structure of different computational strategies determines their strengths and weaknesses.
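As a concrete rendering of the framework's central object, the sketch below implements a minimal graph network (GN) block (edge update, per-node edge aggregation, node update, global update) with random linear maps standing in for the learned functions.

```python
# Minimal GN block: update edges from their endpoints and the global,
# aggregate incoming edges per node, update nodes, update the global.
# Random linear maps stand in for the learned update functions.
import numpy as np

rng = np.random.default_rng(0)
F = 4                                  # feature size for nodes/edges/global
phi_e = rng.normal(size=(4 * F, F))    # edge-update weights
phi_v = rng.normal(size=(2 * F, F))    # node-update weights
phi_u = rng.normal(size=(3 * F, F))    # global-update weights

def gn_block(V, E, senders, receivers, u):
    # Edge update: each edge sees itself, its sender, receiver, global.
    edge_in = np.concatenate(
        [E, V[senders], V[receivers], np.tile(u, (len(E), 1))], axis=1)
    E = np.tanh(edge_in @ phi_e)
    # Aggregate incoming edges per node (sum), then update nodes.
    agg = np.zeros_like(V)
    np.add.at(agg, receivers, E)
    V = np.tanh(np.concatenate([V, agg], axis=1) @ phi_v)
    # Global update from aggregated edges and nodes.
    u = np.tanh(np.concatenate([E.mean(axis=0), V.mean(axis=0), u]) @ phi_u)
    return V, E, u

V = rng.normal(size=(5, F))            # 5 nodes
E = rng.normal(size=(6, F))            # 6 edges
senders = np.array([0, 1, 2, 3, 4, 0])
receivers = np.array([1, 2, 3, 4, 0, 2])
V, E, u = gn_block(V, E, senders, receivers, rng.normal(size=F))
print(V.shape, E.shape, u.shape)       # (5, 4) (6, 4) (4,)
```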
Parallel WaveNet: Fast High-Fidelity Speech Synthesis
Aäron van den Oord
Yazhe Li
Igor Babuschkin
Karen Simonyan
Koray Kavukcuoglu
George van den Driessche
Luis Carlos Cobo Rus
Florian Stimberg
Norman Casagrande
Dominik Grewe
Seb Noury
Sander Dieleman
Erich Elsen
Nal Kalchbrenner
Alexander Graves
Helen King
Thomas Walters
Demis Hassabis
Google DeepMind (2017)
The recently developed WaveNet architecture [27] is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples more than 20 times faster than real time, and is deployed online by Google Assistant, including serving multiple English and Japanese voices.
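A schematic of the distillation objective with toy Gaussians standing in for the IAF student and WaveNet teacher: samples are drawn from the student and used to estimate KL(student || teacher). Gradients and the optimizer are omitted, and the distributions are made up.

```python
# Probability Density Distillation, schematically: minimize
# KL(student || teacher) = E_{x~student}[log q(x) - log p(x)],
# estimated from student samples. Toy 1-D Gaussians stand in for
# the parallel IAF student and the frozen autoregressive teacher.
import math, random

def log_gauss(x, mu, sigma):
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi)))

teacher = (0.0, 1.0)          # frozen, pretrained "WaveNet"
student = (0.5, 1.2)          # parallel "IAF" being trained

samples = [random.gauss(*student) for _ in range(10000)]
kl_estimate = sum(log_gauss(x, *student) - log_gauss(x, *teacher)
                  for x in samples) / len(samples)
print(kl_estimate)  # shrinks toward 0 as the student matches the teacher
```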