Efficient Training of Retrieval Models using Negative Cache
Abstract
Factorized models, such as two tower neural network models, are widely used for
scoring (query, document) pairs in information retrieval tasks. These models are
typically trained by optimizing the model parameters to score relevant “positive”
pairs higher than the irrelevant “negative” ones. While a large set of negatives
typically improves the model performance, limited computation and memory
budgets place constraints on the number of negatives used during training. In this
paper, we develop a novel negative sampling technique for accelerating training
with softmax cross-entropy loss. By using cached (possibly stale) item embeddings,
our technique enables training with a large pool of negatives with reduced memory
and computation. We also develop a streaming variant of our algorithm geared
towards very large datasets. Furthermore, we establish a theoretical basis for our
approach by showing that updating a very small fraction of the cache at each
iteration can still ensure fast convergence. Finally, we experimentally validate our
approach and show that it is efficient and compares favorably with more complex,
state-of-the-art approaches.
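To make the idea concrete, below is a minimal, illustrative sketch of one training step with a cached pool of (possibly stale) negative item embeddings and a softmax cross-entropy loss. The encoder names, cache layout, and the random refresh of a small fraction of entries are assumptions made for illustration; they are not the paper's actual implementation or refresh policy.

```python
# Illustrative sketch only: train against cached (possibly stale) negatives,
# refreshing a small fraction of the cache each step. Names and the refresh
# strategy are hypothetical, not the paper's exact algorithm.
import torch
import torch.nn.functional as F


def train_step(query_encoder, item_encoder, optimizer,
               queries, positive_items, cache, refresh_fraction=0.01):
    """One step of softmax cross-entropy training with a negative cache.

    cache: dict with
      'items'      -- raw item inputs kept around for re-encoding
      'embeddings' -- (possibly stale) item embeddings used as negatives
    """
    q = query_encoder(queries)          # [B, d] query embeddings
    pos = item_encoder(positive_items)  # [B, d] fresh positive item embeddings

    # Score each query against its positive and against all cached negatives.
    pos_scores = (q * pos).sum(dim=1, keepdim=True)   # [B, 1]
    neg_scores = q @ cache['embeddings'].T            # [B, C]

    # Softmax cross-entropy with the positive placed in column 0.
    logits = torch.cat([pos_scores, neg_scores], dim=1)       # [B, 1 + C]
    labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
    loss = F.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Re-encode only a small random fraction of cached items, so most
    # negatives stay stale but per-step memory and compute remain small.
    with torch.no_grad():
        c = cache['embeddings'].shape[0]
        k = max(1, int(refresh_fraction * c))
        idx = torch.randperm(c)[:k]
        cache['embeddings'][idx] = item_encoder(cache['items'][idx])

    return loss.item()
```

In this sketch, gradients flow through the query and positive embeddings while the cached negatives are treated as constants; only a small fraction of the cache is re-encoded per step, which is what keeps the effective negative pool large without paying the full encoding cost at every iteration.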