Google Research

Efficient Computation of Higher-Order Subgraph Attribution via Message Passing

ICML (2022) (to appear)


Explaining graph neural networks (GNNs) has become increasingly important. Higher-order interpretation schemes, such as GNN-LRP (layer-wise relevance propagation for GNNs), have emerged as powerful tools for unraveling how different features interact and thereby contribute to a GNN's prediction. Methods such as GNN-LRP attribute relevance to walks between nodes across layers, and there are exponentially many such walks. In this work, we demonstrate that this exponential complexity can be avoided: we propose novel algorithms, linear in network depth, that efficiently compute GNN-LRP attributions for subgraphs. Our algorithms are derived via message passing techniques that exploit the distributive property, directly computing the quantities needed for higher-order explanations. We further adapt our efficient algorithms to compute a generalization of subgraph attributions that also takes into account neighboring graph features. Experimental results show significant acceleration of the proposed algorithms and demonstrate the usefulness and scalability of our generalized subgraph attributions.
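The core idea, that a sum over exponentially many walks can be computed layer by layer, can be illustrated with a toy sketch. This is not the paper's GNN-LRP algorithm, only an assumed simplification: per-layer matrices `Ws` stand in for per-layer relevance contributions, and by the distributive property the sum over all depth-`L` walks from `s` to `t` of the product of their per-layer entries collapses into `L` matrix-vector products, linear in depth instead of exponential.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, L = 4, 3
# Hypothetical per-layer matrices; toy stand-ins for per-layer relevance terms.
Ws = [rng.random((n, n)) for _ in range(L)]

def walk_sum_bruteforce(Ws, s, t):
    """Sum over all walks s -> ... -> t of the product of per-layer
    entries, by explicit enumeration: O(n^(L-1)) walks."""
    n, L = Ws[0].shape[0], len(Ws)
    total = 0.0
    for mid in product(range(n), repeat=L - 1):
        nodes = (s, *mid, t)
        w = 1.0
        for l in range(L):
            w *= Ws[l][nodes[l], nodes[l + 1]]
        total += w
    return total

def walk_sum_message_passing(Ws, s, t):
    """Same quantity via the distributive property: a message vector is
    propagated through one matrix per layer, i.e. L matvecs total."""
    msg = np.zeros(Ws[0].shape[0])
    msg[s] = 1.0
    for W in Ws:
        msg = msg @ W  # aggregate over the intermediate node of this step
    return msg[t]
```

Both functions return the same value, but the message-passing version scales linearly with depth, which is the kind of collapse the abstract attributes to the distributive property.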
