Google Research

Data Summarization at Scale: A Two-Stage Submodular Approach

Proceedings of the 35th International Conference on Machine Learning, ICML 2018, PMLR

Abstract

The sheer scale of modern datasets has resulted in a dire need for summarization techniques that identify representative elements in a dataset, and then use them to construct a drastically smaller subset which still encodes a similar amount of information. Fortunately, the vast majority of data summarization tasks satisfy an intuitive diminishing returns condition known as submodularity, which allows us to find nearly-optimal solutions in linear time.
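The diminishing returns condition can be made concrete with a toy coverage function, a standard example of a submodular objective (the data and function below are illustrative, not from the paper): adding a new element to a smaller set yields at least as much marginal gain as adding it to a larger superset.

```python
# Minimal sketch of submodularity (diminishing returns) with a toy
# set-cover objective; the subsets below are hypothetical examples.

def coverage(subsets, S):
    """f(S) = number of distinct items covered by the chosen subsets."""
    covered = set()
    for i in S:
        covered |= subsets[i]
    return len(covered)

# Toy ground set: three candidate subsets indexed 0, 1, 2.
subsets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}

S = {0}        # smaller solution
T = {0, 1}     # larger solution, with S a subset of T
e = 2          # candidate element to add

gain_S = coverage(subsets, S | {e}) - coverage(subsets, S)
gain_T = coverage(subsets, T | {e}) - coverage(subsets, T)

# Diminishing returns: e helps the smaller set at least as much.
assert gain_S >= gain_T
```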

We focus on a two-stage submodular framework where the goal is to use some given training functions to reduce the ground set so that optimizing new functions (drawn from the same distribution) over the reduced set provides almost as much value as optimizing them over the entire ground set. In this paper, we develop the first streaming and distributed solutions to this problem. In addition to providing strong theoretical guarantees, we demonstrate both the utility and efficiency of our algorithms on real-world tasks including image summarization and ride-share optimization.
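The "nearly-optimal solutions in linear time" claim rests on the classical greedy heuristic for cardinality-constrained submodular maximization, which repeatedly adds the element with the largest marginal gain and achieves a (1 - 1/e) approximation. The sketch below shows that building block (reusing the toy coverage objective above); it is not the paper's two-stage, streaming, or distributed algorithm.

```python
# Sketch of the classical greedy heuristic for maximizing a monotone
# submodular function under a cardinality constraint. Illustrative only;
# the paper's streaming/distributed two-stage algorithms build on this idea.

def greedy_max(f, ground_set, k):
    """Select k elements, each step taking the largest marginal gain."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy coverage objective over three candidate subsets.
subsets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}
f = lambda S: len(set().union(*(subsets[i] for i in S)))

picked = greedy_max(f, subsets.keys(), 2)  # a 2-element summary
```

Here `greedy_max` selects subset 2 first (gain 3), then subset 0 (gain 2), covering all five items with just two of the three candidates.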
