Large-scale Incremental Processing Using Distributed Transactions and Notifications

Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation, USENIX (2010)

Abstract

Updating an index of the web as documents are crawled requires continuously transforming a large repository of existing documents as new documents arrive. This task is one example of a class of data processing tasks that transform a large repository of data via small, independent mutations. These tasks lie in a gap between the capabilities of existing infrastructure. Databases do not meet the storage or throughput requirements of these tasks: Google's indexing system stores tens of petabytes of data and processes billions of updates per day on thousands of machines. MapReduce and other batch-processing systems cannot process small updates individually, as they rely on creating large batches for efficiency.
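To make the contrast with batch systems concrete, below is a minimal single-process sketch of the kind of transactional read-modify-write that incremental processing relies on: folding one newly crawled page into an index as a small, independent mutation rather than rebuilding in bulk. The Transaction class, its Get/Set/Commit methods, and IndexDocument are hypothetical stand-ins for illustration, not Percolator's actual API.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical in-memory stand-in for a transactional table API.
// A real system would buffer writes and run a distributed commit
// over a store like Bigtable; here Commit() just applies the buffer.
class Transaction {
 public:
  bool Get(const std::string& row, const std::string& col,
           std::string* out) {
    auto it = store_.find(row + "/" + col);
    if (it == store_.end()) return false;
    *out = it->second;
    return true;
  }
  void Set(const std::string& row, const std::string& col,
           const std::string& val) {
    writes_[row + "/" + col] = val;  // buffered until Commit()
  }
  bool Commit() {
    for (const auto& kv : writes_) store_[kv.first] = kv.second;
    return true;
  }

 private:
  static std::unordered_map<std::string, std::string> store_;
  std::unordered_map<std::string, std::string> writes_;
};
std::unordered_map<std::string, std::string> Transaction::store_;

// Index one newly crawled document: store its contents and, if no
// other page with identical contents has been seen, record this URL
// as the canonical copy. The whole update is one small transaction.
bool IndexDocument(const std::string& url, const std::string& contents) {
  Transaction t;
  t.Set(url, "contents", contents);
  std::string hash = std::to_string(std::hash<std::string>{}(contents));
  std::string canonical;
  if (!t.Get(hash, "canonical-url", &canonical)) {
    t.Set(hash, "canonical-url", url);  // first copy seen wins
  }
  return t.Commit();
}
```

Each crawled page triggers one such transaction touching a handful of rows; a batch system would instead have to rescan the entire repository to achieve the same effect.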

We have built Percolator, a system for incrementally processing updates to a large data set, and deployed it to create the Google web search index. By replacing a batch-based indexing system with an indexing system based on incremental processing using Percolator, we process the same number of documents per day, while reducing the average age of documents in Google search results by 50%.