Scaling Up Influence Functions

AAAI-22 (2022)

Abstract

We address the efficient calculation of influence functions (Koh & Liang 2017) for tracking predictions back to the training data. We propose and analyze a new approach to speeding up the inverse-Hessian calculation based on Arnoldi iteration (Arnoldi 1951). With this improvement, we achieve, to the best of our knowledge, the first successful implementation of influence functions that scales to full-size (language and vision) Transformer models with several hundred million parameters. We evaluate our approach on image classification and sequence-to-sequence tasks with tens of millions to a hundred million training examples. Our implementation will be publicly available at https://github.com/google-research/jax-influence.
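For intuition, the sketch below illustrates the core idea: Arnoldi iteration touches the Hessian only through Hessian-vector products, building a small Krylov subspace from which the dominant part of the spectrum can be estimated, so the full Hessian is never materialized. This is a minimal illustrative sketch in JAX (the language of the released jax-influence code), not the paper's implementation; the names hvp and arnoldi and the toy quadratic loss are hypothetical.

import jax
import jax.numpy as jnp

def hvp(loss_fn, params, v):
    # Hessian-vector product via forward-over-reverse autodiff;
    # the Hessian itself is never formed.
    return jax.jvp(jax.grad(loss_fn), (params,), (v,))[1]

def arnoldi(matvec, v0, n_iters):
    # n_iters steps of Arnoldi iteration: returns an orthonormal
    # Krylov basis Q and a small Hessenberg matrix H satisfying
    # matvec(Q[k]) ~= sum_i H[i, k] * Q[i].
    dim = v0.shape[0]
    Q = jnp.zeros((n_iters + 1, dim)).at[0].set(v0 / jnp.linalg.norm(v0))
    H = jnp.zeros((n_iters + 1, n_iters))
    for k in range(n_iters):
        w = matvec(Q[k])
        for i in range(k + 1):  # modified Gram-Schmidt orthogonalization
            h = jnp.dot(w, Q[i])
            H = H.at[i, k].set(h)
            w = w - h * Q[i]
        norm = jnp.linalg.norm(w)
        H = H.at[k + 1, k].set(norm)
        Q = Q.at[k + 1].set(w / norm)
    return Q, H

# Toy check: quadratic loss over a flat parameter vector, so the
# Hessian is the fixed matrix A and the approximation is easy to verify.
A = jnp.diag(jnp.array([10.0, 5.0, 1.0, 0.1]))
loss = lambda p: 0.5 * p @ A @ p
params = jnp.ones(4)
Q, H = arnoldi(lambda v: hvp(loss, params, v), jnp.ones(4), n_iters=3)
# Eigenvalues of the small matrix H[:3, :3] approximate the dominant
# Hessian eigenvalues (here roughly 10, 5, 1).
print(jnp.linalg.eigvals(H[:3, :3]))

At full scale, the number of Arnoldi iterations is orders of magnitude smaller than the parameter count, and diagonalizing the small matrix H stands in for the intractable eigendecomposition of the full Hessian when applying the (regularized) inverse-Hessian step of influence functions.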
