- Aviral Kumar
- Jamie Kiros
- Jenny Liu
- Jimmy Ba
- Kevin Swersky
Abstract
We introduce Graph Normalizing Flows (GNFs), a new, reversible graph neural network (GNN) model for prediction and generation. On supervised tasks, GNFs perform similarly to GNNs, but at a significantly reduced memory footprint, allowing them to scale to larger graphs. In the unsupervised case, we combine GNFs with a novel graph auto-encoder to create a generative model of graph structures. Our GNF model is permutation invariant, generating entire graphs with a single feed-forward pass, and achieves results competitive with state-of-the-art autoregressive models, while being better suited to parallel computing architectures.
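The memory savings come from reversibility: each layer's input can be recomputed exactly from its output, so intermediate activations need not be stored for backpropagation. A minimal sketch of this idea is an additive coupling layer (RealNVP-style) over node features, where each half of the features is updated by a message-passing function of the other half. The `gnn_message` function, the weight shapes, and the sharing of one weight matrix across both half-steps are illustrative assumptions here, not the paper's exact architecture.

```python
import numpy as np

def gnn_message(h, adj, W):
    # One round of message passing: aggregate neighbor features
    # via the adjacency matrix, apply a linear map and nonlinearity.
    # (Illustrative stand-in for the paper's message function.)
    return np.tanh(adj @ h @ W)

def coupling_forward(h, adj, W):
    # Split node features in half; update each half from the other
    # (additive coupling), which keeps the transformation invertible.
    h1, h2 = np.split(h, 2, axis=1)
    h2 = h2 + gnn_message(h1, adj, W)
    h1 = h1 + gnn_message(h2, adj, W)
    return np.concatenate([h1, h2], axis=1)

def coupling_inverse(h, adj, W):
    # Exactly invert the forward pass by undoing the updates in
    # reverse order; no stored activations are needed.
    h1, h2 = np.split(h, 2, axis=1)
    h1 = h1 - gnn_message(h2, adj, W)
    h2 = h2 - gnn_message(h1, adj, W)
    return np.concatenate([h1, h2], axis=1)
```

Because message passing is applied to all nodes at once, the forward pass generates or transforms an entire graph's features in one sweep, in contrast to autoregressive models that emit the graph one node or edge at a time.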