Petar Veličković
Petar Veličković is a Research Scientist at DeepMind. He holds a PhD from the University of Cambridge (obtained under the supervision of Pietro Liò), with prior collaborations at Nokia Bell Labs and Mila. His current research interests broadly involve devising neural network architectures that operate on nontrivially structured data (such as graphs), and their applications in algorithmic reasoning and computational biology. He has published in these areas at machine learning venues (ICLR, and workshops at NeurIPS and ICML) and in biomedical venues and journals (Bioinformatics, PLOS ONE, JCB, PervasiveHealth). In particular, he is the first author of Graph Attention Networks, a popular attention-based layer for graph neural networks, and Deep Graph Infomax, a scalable local/global unsupervised learning pipeline for graphs. His research has been featured in media outlets such as ZDNet. Additionally, he has co-organised workshops on Graph Representation Learning at ICLR 2019 and NeurIPS 2019.
Authored Publications
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message passing steps they perform—and hence a smaller receptive field—there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting and commute times. We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has low computational complexity, while our features are invariant to the permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, thereby allowing us to outperform relevant benchmarks for a wide variety of tasks, often with significantly fewer message passing steps. On one of the largest publicly available graph regression datasets, OGB-LSC-PCQM4Mv1, we obtain the best known single-model validation MAE at the time of writing.
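The affinity measures named in this abstract can be read directly off the pseudoinverse of the graph Laplacian. Below is a minimal dense-matrix sketch in numpy, assuming a small connected undirected graph; the function name and setup are illustrative, not the paper's pipeline, and it covers effective resistance and commute times (hitting times are omitted).

import numpy as np

def affinity_features(adj):
    # adj: (n, n) symmetric 0/1 adjacency matrix of a connected graph.
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                # graph Laplacian L = D - A
    lap_pinv = np.linalg.pinv(lap)          # Moore-Penrose pseudoinverse L+
    d = np.diag(lap_pinv)
    # Effective resistance: R[u, v] = L+[u, u] + L+[v, v] - 2 L+[u, v]
    res = d[:, None] + d[None, :] - 2.0 * lap_pinv
    vol = deg.sum()                         # vol(G) = 2|E| for an undirected graph
    commute = vol * res                     # commute time C[u, v] = vol(G) * R[u, v]
    return res, commute

# 4-cycle: opposite nodes see two parallel 2-edge paths, so R = (2*2)/(2+2) = 1.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
R, C = affinity_features(A)
print(round(R[0, 2], 3), round(C[0, 2], 3))   # 1.0 and 8.0

Both quantities are permutation-invariant functions of the graph, which is the property the abstract highlights; in a message passing network they would typically be attached as edge or node features.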
Neural Execution of Graph Algorithms
Rex Ying
Matilde Padovano
Raia Hadsell
Charles Blundell
International Conference on Learning Representations (2020)
Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best suited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks: showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.
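To make the "imitate individual steps" setup concrete, here is a hedged sketch of the supervision side only (not the paper's models or data generator; the function name is mine): the ground-truth intermediate states of parallel breadth-first search, which a GNN would be trained to reproduce one step at a time.

import numpy as np

def bfs_trajectory(adj, source):
    # Per-step reachability masks: at training time, the network is
    # supervised to map the algorithm's state at step t to its state at t+1.
    n = adj.shape[0]
    state = np.zeros(n)
    state[source] = 1.0
    states = [state.copy()]
    while True:
        # A node becomes reachable once any neighbour is reachable: a discrete,
        # neighbourhood-local decision, hence the fit with max aggregation.
        nxt = np.maximum(state, (adj @ state > 0).astype(float))
        if np.array_equal(nxt, state):
            break
        states.append(nxt.copy())
        state = nxt
    return states

# Path graph 0-1-2-3: the reachability mask expands by one hop per step.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
for t, s in enumerate(bfs_trajectory(A, source=0)):
    print(t, s)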
Pointer Graph Networks
Matthew C. Overlan
Razvan Pascanu
Charles Blundell
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020) (to appear)
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving. In the absence of reliable domain expertise, one might resort to inferring the latent graph structure, which is often difficult due to the vast search space of possible graphs. Here we introduce Pointer Graph Networks (PGNs) which augment sets or graphs with additional inferred edges for improved model expressivity. PGNs allow each node to dynamically point to another node, followed by message passing over these pointers. The sparsity of this adaptable graph structure makes learning tractable while still being sufficiently expressive to simulate complex algorithms. Critically, the pointing mechanism is directly supervised to model long-term sequences of operations on classical data structures, incorporating useful structural inductive biases from theoretical computer science. Qualitatively, we demonstrate that PGNs can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees. PGNs generalise out-of-distribution to 5x larger test inputs on dynamic graph connectivity tasks, outperforming unrestricted GNNs and Deep Sets.
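For context on the classical structures involved, here is the simpler of the two named in the abstract: a plain (non-learned) disjoint set union. Each element keeps a single parent pointer, and the sequence of pointer rewrites during find/union operations is exactly the kind of node-to-node pointing a PGN is supervised to model. A minimal sketch, not the paper's code:

class DisjointSetUnion:
    def __init__(self, n):
        self.parent = list(range(n))    # every element starts as its own root

    def find(self, u):
        # Path halving (a form of path compression): re-point u closer to its root.
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]
            u = self.parent[u]
        return u

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv        # one pointer rewrite merges two sets

dsu = DisjointSetUnion(5)
dsu.union(0, 1)
dsu.union(3, 4)
print(dsu.find(0) == dsu.find(1))   # True: same component
print(dsu.find(0) == dsu.find(3))   # False: still disjoint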
Principal Neighbourhood Aggregation for Graph Nets
Gabriele Corso
Luca Cavalleri
Dominique Beaini
Pietro Liò
Neural Information Processing Systems (2020) (to appear)
Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features - which occur regularly in real-world input domains and within the hidden layers of GNNs - and we demonstrate the requirement for multiple aggregation functions in this setting. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the capacity of different models to capture and exploit the graph structure via a benchmark containing multiple tasks taken from classical graph theory, which demonstrates the capacity of our model.
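A rough numpy sketch of the combination the abstract describes, multiple aggregators crossed with degree-based scalers. This is illustrative only: dense matrices, my own naming, no isolated nodes assumed, and the learned transformations the real layer applies before and after aggregation are omitted.

import numpy as np

def pna_aggregate(h, adj, delta):
    # h: (n, f) node features; adj: (n, n) 0/1 adjacency, no isolated nodes.
    # Returns (n, 12f): {mean, max, min, std} x {identity, amplify, attenuate}.
    n, _ = h.shape
    neigh = [h[adj[i] > 0] for i in range(n)]           # neighbour feature sets
    mean = np.stack([m.mean(axis=0) for m in neigh])
    mx = np.stack([m.max(axis=0) for m in neigh])
    mn = np.stack([m.min(axis=0) for m in neigh])
    std = np.stack([m.std(axis=0) for m in neigh])
    aggs = np.concatenate([mean, mx, mn, std], axis=1)  # (n, 4f)
    log_deg = np.log(adj.sum(axis=1, keepdims=True) + 1.0)
    # Degree-scalers: identity, amplification, attenuation. delta is a
    # normalising constant (in the paper, an average log-degree statistic
    # taken over the training graphs).
    scalers = [np.ones_like(log_deg), log_deg / delta, delta / log_deg]
    return np.concatenate([s * aggs for s in scalers], axis=1)

# Star with centre 0: three nodes, f = 2 features, output is (3, 24).
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
H = np.arange(6, dtype=float).reshape(3, 2)
print(pna_aggregate(H, A, delta=np.log(2.0)).shape)     # (3, 24)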
Deep Graph Infomax
William Fedus
William L. Hamilton
Pietro Liò
Yoshua Bengio
R Devon Hjelm
International Conference on Learning Representations (2019)
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. In contrast to most prior approaches to graph representation learning, DGI does not rely on random walks, and is readily applicable to both transductive and inductive downstream tasks. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.
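As a hedged illustration of the patch/summary objective, here is the noise-contrastive, binary cross-entropy form of such a mutual-information bound in numpy. All names are mine; the encoder is replaced by stand-in representations, and the paper's actual corruption step (e.g. row-shuffling input features before re-encoding) is simplified to random negatives, purely to make the loss shape concrete.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dgi_style_loss(patches, corrupted, W):
    # patches:   (n, f) node ('patch') representations of the real graph
    # corrupted: (n, f) representations derived from a corrupted graph
    # W:         (f, f) bilinear discriminator weights
    summary = sigmoid(patches.mean(axis=0))   # readout: graph-level summary vector
    pos = sigmoid(patches @ W @ summary)      # discriminator scores, real pairs
    neg = sigmoid(corrupted @ W @ summary)    # discriminator scores, corrupted pairs
    eps = 1e-9                                # numerical safety for the logs
    return -(np.log(pos + eps).mean() + np.log(1.0 - neg + eps).mean())

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))                   # stand-in encoder outputs
h_corrupt = rng.normal(size=(5, 8))           # stand-in negatives (see note above)
W = rng.normal(size=(8, 8))
print(dgi_style_loss(h, h_corrupt, W))

Minimising this loss pushes real patch-summary pairs towards high discriminator scores and corrupted pairs towards low ones, which is how the mutual information between local patches and the global summary is maximised in practice.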