Graph Normalizing Flows

Jenny Liu*, Aviral Kumar*, Jimmy Ba, Jamie Kiros, Kevin Swersky

Abstract: We introduce graph normalizing flows (GNFs): a new, reversible graph neural network model for prediction and generation. On supervised tasks, graph normalizing flows perform similarly to message passing neural networks, but at a significantly reduced memory footprint, allowing them to scale to larger graphs. In the unsupervised case, we combine graph normalizing flows with a novel graph auto-encoder to create a generative model of graph structures. Our model is permutation-equivariant, generating entire graphs with a single feed-forward pass, and achieves results competitive with state-of-the-art auto-regressive models, while being better suited to parallel computing architectures.

Overview:

  • Graph normalizing flows (GNFs) consist of message passing (MP) steps inspired by the RealNVP model. For a given graph, each node's feature embedding is split into two halves. Each MP step consists of two intermediate steps: in each, one half of a node's embedding is held constant while the other half is scaled and shifted by functions of the constant half, with the scale and shift functions parameterized by two graph neural networks (a sketch of this coupling step follows the list below).
  • We present supervised GNF (GRevNet) experiments on datasets for semi-supervised document classification, protein-protein interactions, and molecular property prediction, and provide a comparison to state-of-the-art Graph Attention Networks.
  • We present results for graph generation, which we treat as a structured density estimation problem:
    • First, train a graph auto-encoder. Each node in a graph is given a randomly initialized node embedding, and a graph neural network encoder transforms these initial embeddings into a structured set that encodes adjacency information as distances between nodes. The original adjacency can then be recovered with a simple kernel function (see the decoder sketch below).
    • Given this graph auto-encoder, we can generate structured sets as training data for the GNF. The GNF learns to flow these structured sets into a space where the node embeddings look like independent Gaussian vectors. New graphs are then generated by sampling more Gaussian vectors and running the GNF in reverse (see the generation sketch below).
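The following is a minimal NumPy sketch of one coupling half-step, assuming a toy mean-aggregation message passing network for the scale and shift functions; the names (toy_gnn, coupling_half_step) and the specific GNN architecture are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def toy_gnn(h, adj, W_self, W_neigh):
    """One round of message passing: combine each node's features with the
    mean of its neighbours' features, then apply a nonlinearity.
    (Toy stand-in for the GNNs that compute the scale and shift.)"""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    neigh_mean = (adj @ h) / deg
    return np.tanh(h @ W_self + neigh_mean @ W_neigh)

def coupling_half_step(h, adj, params):
    """RealNVP-style affine coupling on node embeddings.

    h:   (num_nodes, d) node embeddings, d even.
    adj: (num_nodes, num_nodes) binary adjacency matrix.
    The first half h1 is held constant; the second half h2 is scaled and
    shifted by functions of h1, each computed with its own GNN."""
    d = h.shape[1] // 2
    h1, h2 = h[:, :d], h[:, d:]
    log_s = toy_gnn(h1, adj, *params["scale"])  # per-node, per-dim log-scale
    t = toy_gnn(h1, adj, *params["shift"])      # per-node, per-dim shift
    h2_new = h2 * np.exp(log_s) + t
    # For a normalizing flow we also need the log-determinant of the Jacobian,
    # which for an affine coupling is just the sum of the log-scales.
    log_det = log_s.sum()
    return np.concatenate([h1, h2_new], axis=1), log_det

def inverse_coupling_half_step(h, adj, params):
    """Exact inverse: recompute log_s and t from the unchanged half h1."""
    d = h.shape[1] // 2
    h1, h2_new = h[:, :d], h[:, d:]
    log_s = toy_gnn(h1, adj, *params["scale"])
    t = toy_gnn(h1, adj, *params["shift"])
    h2 = (h2_new - t) * np.exp(-log_s)
    return np.concatenate([h1, h2], axis=1)
```

A full MP step applies two such half-steps, swapping which half is held constant, so that every dimension of the embedding gets updated while the whole step stays exactly invertible.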
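As a companion sketch for the auto-encoder's decoder, the code below recovers edge probabilities from pairwise distances between node embeddings. The logistic kernel on squared Euclidean distance and the temperature/threshold values are assumptions for illustration; the paper's exact kernel may differ.

```python
import numpy as np

def decode_adjacency(x, temperature=10.0, threshold=0.5):
    """x: (num_nodes, d) structured node embeddings from the graph auto-encoder.
    Nodes that are close in embedding space receive a high edge probability."""
    sq_dist = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    edge_prob = 1.0 / (1.0 + np.exp(temperature * (sq_dist - 1.0)))
    adj = (edge_prob > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return edge_prob, adj
```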
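Finally, a sketch of generation that reuses the hypothetical helpers above (inverse_coupling_half_step, decode_adjacency). Because the adjacency is not known until after decoding, this sketch assumes the flow's message passing runs over a complete graph at generation time; that choice is an assumption of the sketch, not a statement of the paper's exact procedure.

```python
import numpy as np

def generate_graph(num_nodes, dim, flow_params, rng=None):
    rng = rng or np.random.default_rng(0)
    # 1. Sample i.i.d. Gaussian node embeddings in the latent space.
    h = rng.standard_normal((num_nodes, dim))
    # 2. Run the GNF in reverse: undo the coupling half-steps in reverse order.
    #    (In a full model the two halves alternate between half-steps, and the
    #    inverse must swap them accordingly; omitted here for brevity.)
    full_adj = np.ones((num_nodes, num_nodes)) - np.eye(num_nodes)  # assumed complete graph
    for params in reversed(flow_params):
        h = inverse_coupling_half_step(h, full_adj, params)
    # 3. Decode the resulting structured set into an adjacency matrix.
    edge_prob, adj = decode_adjacency(h)
    return adj
```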

Figure: A full message passing step in the GNF model.

Slides:

graph normalizing flows

Contact: For questions or comments, email Jenny Liu at jyliu@cs.toronto.edu.