Scaling Up Collaborative Filtering Data Sets through Randomized Fractal Expansions

  • Francois Belletti
  • Karthik Singaram Lakshmanan
  • Nicolas Mayoraz
  • Walid Krichene
  • Yi-fan Chen
  • John Anderson
  • Taylor Robie
  • Tayo Oguntebi
  • Amit Bleiwess
  • Dan Shirron
arXiv (2019)


Recommender System research suffers from a disconnect between the size of academic data sets and the scale of industrial production systems. In order to bridge that gap, we propose to generate large-scale User/Item interaction data sets by expanding pre-existing public data sets. Our key contribution is a technique that expands User/Item incidence matrices to large numbers of rows (users), columns (items), and non-zero values (interactions). The proposed method adapts Kronecker Graph Theory to preserve key higher-order statistical properties such as the fat-tailed distribution of user engagements, item popularity, and the singular value spectra of user/item interaction matrices. Preserving such properties is key to building large, realistic synthetic data sets, which in turn can be employed reliably to benchmark Recommender Systems and the systems used to train them. We further apply our stochastic expansion algorithm to the binarized MovieLens 20M data set, which comprises 20M interactions between 27K movies and 138K users. The resulting expanded data set has 1.2B ratings, 2.2M users, and 855K items, and its size can be scaled up or down.
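To make the Kronecker-style expansion concrete, the sketch below (a simplified illustration, not the paper's randomized algorithm) expands a toy binarized user/item incidence matrix with a plain Kronecker product, showing how user, item, and interaction counts multiply. The toy matrix values and the use of scipy.sparse here are assumptions for illustration only.

```python
# Minimal sketch of the core idea behind Kronecker-based data set expansion.
# NOTE: the paper adapts Kronecker Graph Theory with randomization ("randomized
# fractal expansion"); this deterministic self-Kronecker product only
# illustrates how row, column, and non-zero counts multiply.
import numpy as np
from scipy.sparse import csr_matrix, kron

# Toy binarized interaction matrix: 3 users x 4 items (1 = observed interaction).
R = csr_matrix(np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
]))

# Self-Kronecker expansion: (3*3) users x (4*4) items;
# the number of non-zero interactions multiplies as well (6 -> 36).
R_expanded = kron(R, R, format="csr")

print(R.shape, R.nnz)                    # (3, 4) 6
print(R_expanded.shape, R_expanded.nnz)  # (9, 16) 36
```

Whereas this deterministic product simply replicates the original structure at a larger scale, the paper's stochastic expansion randomizes the process so that higher-order statistics such as the fat-tailed engagement distributions and singular value spectra of the original matrix are preserved in the expanded data set.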
