
Goetz Graefe
Authored Publications
Progressive Partitioning for Parallelized Query Execution in Google’s Napa
Indrajit Roy
Brad Adelberg
Shilpa Kolhar
Junichi Tatemura
Kevin Lai
Yanlai Huang
Divyakant Agrawal
Jim Chen
Yupu Zhang
49th International Conference on Very Large Data Bases, VLDB (2023), pp. 3475-3487
Napa powers Google's critical data warehouse needs. It uses a Log-Structured Merge (LSM) tree for real-time data ingestion and achieves sub-second query latency for billions of queries per day. Napa handles a wide variety of query workloads, from full-table scans to range scans and multi-key lookups. Our design challenge is to handle this diverse query workload running concurrently. In particular, a large percentage of our query volume consists of external reporting queries characterized by multi-key lookups with strict sub-second latency targets.
Query parallelization, achieved by partitioning the input data and processing the partitions in parallel (i.e., the SIMD model of computation), is an important technique for meeting these low latency targets. Traditionally, the effectiveness of parallelizing a query depends heavily on how well the query aligns with the data partitioning established at write time. Unfortunately, such a write-time partitioning scheme cannot satisfy parallelization requirements that vary widely from one query to the next.
The key to Napa's success is its ability to adapt its query parallelization on a per-query basis. This paper describes an index-based approach to data partitioning for queries with sub-second latency requirements. Napa's approach is progressive in that it provides good partitioning within whatever time is budgeted for partitioning. Since end-to-end query time includes the time spent partitioning, there is a tradeoff between the time spent on partitioning and the evenness of the resulting partitions. Our approach balances these opposing considerations to provide sub-second querying for billions of queries each day. We use production data to establish the effectiveness of Napa's approach on workloads ranging from easy-to-handle to the most pathological.
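The abstract's core idea, budgeted progressive refinement of partitions, can be illustrated with a small sketch. The code below is a hypothetical toy, not Napa's implementation: a flat sorted key array stands in for Napa's LSM indexes, and the loop keeps splitting the largest partition until either enough partitions exist for the available workers or the partitioning time budget expires.

```python
import time

def progressive_partition(sorted_keys, num_workers, budget_secs):
    """Toy sketch of budgeted, index-based partitioning (illustrative only).

    Partitions are half-open ranges of positions in a sorted key index.
    We repeatedly split the largest partition, so that stopping at any
    point (when the time budget expires) still yields usable, reasonably
    even partitions -- the "progressive" property described above.
    """
    deadline = time.monotonic() + budget_secs
    partitions = [(0, len(sorted_keys))]       # start with one partition
    while len(partitions) < num_workers and time.monotonic() < deadline:
        partitions.sort(key=lambda p: p[1] - p[0])
        start, end = partitions.pop()          # largest partition so far
        if end - start < 2:                    # cannot split further
            partitions.append((start, end))
            break
        mid = (start + end) // 2               # split at the median key
        partitions += [(start, mid), (mid, end)]
    # Return key ranges that workers can scan in parallel.
    return [(sorted_keys[s], sorted_keys[e - 1]) for s, e in partitions]

# Example: 4 workers and a generous budget yield 4 roughly even key ranges.
print(progressive_partition(list(range(100)), 4, budget_secs=0.01))
```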
Napa: Powering Scalable Data Warehousing with Robust Query Performance at Google
Thanh Do
Indrajit Roy
Haoyu Gao
Tao Lin
Mayank Singh Shishodia
Jianyi Liang
Sujata Sunil Kosalge
Tianhang Sun
Jay Patel
Ming Dai
Junichi Tatemura
Raman Grover
Kevin Lai
Min Chen
Xi Mao
Jeff Naughton
Bo Huang
Yao Liu
Prasanna Venkatasubramanian
Prashant Mishra
Yanlai Huang
Ramkumar Vadali
Maya Meng
Divyakant Agrawal
Kefei Zhang
Jim Chen
Justin Tang
Haoyan Geng
Li Liu
Vijayshankar Raman
Sagar Trehan
Sourashis Roy
Zeleng Zhuang
Joey Zhang
Adam Li
Yupu Zhang
Hakan Hacıgümüş
Proceedings of the VLDB Endowment (PVLDB), 14 (12) (2021), pp. 2986-2998
There are numerous Google services that continuously generate vast amounts of log data, which are used to provide valuable insights to internal and external business users. We need to store and serve these planet-scale data sets under extremely demanding requirements: scalability, sub-second query response times, availability even in the case of entire data center failures, strong consistency guarantees, and ingestion of a massive stream of updates from applications used around the globe. We have developed and deployed in production an analytical data management system, called Napa, to meet these requirements. Napa is the backend for multiple internal and external clients at Google, so there is a strong expectation of variance-free, robust query performance. At its core, Napa's principal technology for robust query performance is the aggressive use of materialized views, which are maintained consistently as new data is ingested across multiple data centers. Our clients also demand the flexibility to adjust their query performance, data freshness, and costs to suit their unique needs. Robust query processing and flexible configuration of client databases are the hallmarks of the Napa design. Most related work in this area enjoys full flexibility to design the whole system without supporting a diverse set of preexisting use cases, whereas Napa must deal with the hard constraints of applications that differ on which characteristics of the system are most important to optimize. Those constraints led us to make particular design decisions and to devise new techniques to meet the challenges. In this paper, we share our experiences in designing, implementing, deploying, and running Napa in production with some of Google's most demanding applications.
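One way to picture how consistently maintained views yield "variance-free" performance is the queryable timestamp described in the published paper: queries only see data that view maintenance has fully caught up with, which is also the knob that trades freshness against cost. The toy model below is only a loose sketch of that idea; all names are invented and none of this is Napa's actual API.

```python
class ToyNapaTable:
    """Toy model of decoupling ingestion from querying (illustrative only).

    Ingestion appends deltas without blocking. View maintenance advances a
    queryable timestamp (QT); queries read only data at or below QT, so
    they never pay for unmerged, unindexed deltas. A client that wants
    fresher answers funds more view-maintenance work to keep QT close to
    ingestion time; a cost-sensitive client can let QT lag.
    """

    def __init__(self):
        self.deltas = []           # (timestamp, rows), in ingestion order
        self.queryable_ts = 0      # QT: high-water mark of view maintenance

    def ingest(self, ts, rows):
        self.deltas.append((ts, rows))     # never blocks on views

    def maintain_views(self, up_to_ts):
        # Stand-in for the expensive, multi-datacenter-consistent work of
        # applying new deltas to all materialized views.
        self.queryable_ts = max(self.queryable_ts, up_to_ts)

    def query(self, predicate):
        # Robust performance: reads touch only fully view-maintained data,
        # fresh only up to QT.
        return [row for ts, rows in self.deltas if ts <= self.queryable_ts
                for row in rows if predicate(row)]

t = ToyNapaTable()
t.ingest(1, [{"user": "a"}])
t.ingest(2, [{"user": "b"}])
t.maintain_views(up_to_ts=1)
print(t.query(lambda r: True))   # only the ts<=1 delta is visible yet
```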
F1 Lightning: HTAP as a Service
Kelvin Lau
Jiacheng Yang
Zhan Yuan
Jeff Naughton
Ziyang Chen
Jeremy David Wood
Yuan Gao
Junxiong Zhou
Qiang Zeng
Xi Zhao
Jun Xu
Jun Ma
Ian James Rae
Proceedings of the VLDB Endowment (PVLDB) (2020), pp. ??-??
The ongoing and increasing interest in HTAP (Hybrid Transactional and Analytical Processing) systems reflects an intense desire from data owners to run transactional and analytical workloads simultaneously over the same data set. Much of the reported work on HTAP has arisen in the context of “green field” systems, answering the question “if we could design a system for HTAP from scratch, what would it look like?” While there is great merit in such an approach, and a lot of valuable technology has been developed with it, we found ourselves facing a different challenge: a great deal of transactional data already exists in several transactional systems and is heavily queried by an existing federated engine that does not “own” those systems, while both new and legacy applications demand transparent, fast queries and transactions from this combination. This paper reports on our design of and experiences with F1 Lightning, a system we built and deployed to meet this challenge. We describe our design decisions, some details of our implementation, and our experience with the system in production for some of Google's most demanding applications.
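The shape of the solution, analytical reads served from read-optimized replicas maintained off the transactional systems' change logs, behind an engine that routes queries transparently, can be sketched roughly as below. This is a hypothetical illustration with invented names, not F1 Lightning's actual design.

```python
class LogFollowingReplica:
    """Read-optimized replica maintained by tailing a change log
    (illustrative only)."""

    def __init__(self):
        self.rows = {}
        self.safe_ts = 0             # replica is consistent up to here

    def apply_change(self, ts, key, value):
        self.rows[key] = value
        self.safe_ts = ts

class HybridRouter:
    """Routes transactional point reads to the OLTP store and analytical
    scans to the replica, so applications see one transparent interface."""

    def __init__(self, oltp_store, replica):
        self.oltp, self.replica = oltp_store, replica

    def read(self, key):
        return self.oltp[key]        # fresh, transactional path

    def scan(self, predicate):
        # Analytical path: a consistent snapshot that may lag the OLTP
        # store; staleness is bounded by the replica's safe timestamp.
        return [v for v in self.replica.rows.values() if predicate(v)]

oltp = {"order:1": {"amount": 10}}
replica = LogFollowingReplica()
replica.apply_change(ts=5, key="order:1", value={"amount": 10})
router = HybridRouter(oltp, replica)
print(router.read("order:1"), router.scan(lambda v: v["amount"] > 5))
```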
Business intelligence and web log analysis workloads often use queries with top-k clauses to produce the most relevant results. Values of k range from small to rather large, and sometimes the requested output exceeds the capacity of the available main memory. When the requested output fits in the available memory, existing top-k algorithms are efficient, as they can eliminate almost all but the top k results before sorting them. When the requested output exceeds the main memory capacity, existing algorithms externally sort the entire input, which can be very expensive. Furthermore, the drastic difference in execution cost when the memory capacity is exceeded results in an unpleasant user experience. Every day, tens of thousands of production top-k queries executed on F1 Query resort to an external sort of the input.
To address these challenges, we introduce a new top-k algorithm that is able to eliminate parts of the input before sorting them or writing them to secondary storage, regardless of whether the requested output fits in the available memory. To achieve this, our algorithm creates a concise model of the input at execution time using histograms. The proposed algorithm is implemented as part of F1 Query and is used in production, where it significantly accelerates top-k queries with outputs larger than the available memory. We evaluate our algorithm against existing top-k algorithms and show that it reduces I/O traffic and can be up to 11× faster.
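The pruning idea can be made concrete: once a running histogram proves that at least k rows are at or above some boundary value, every later row below that boundary can be discarded before it is sorted or spilled. The toy below (over plain values, with histogram boundaries supplied by the caller rather than sampled) illustrates the technique; it is a sketch, not F1 Query's production code.

```python
import bisect
import random

def topk_with_histogram(values, k, boundaries):
    """Top-k largest values, pruning the input with a histogram (toy sketch).

    `boundaries` is an ascending list of candidate cutoffs (in practice
    derived from a sample of the input). counts[i] tracks how many kept
    values are >= boundaries[i]; once counts[i] reaches k, any later value
    below boundaries[i] provably cannot be in the top k and is dropped
    instead of being sorted or spilled.
    """
    counts = [0] * len(boundaries)
    cutoff_idx = -1                 # highest boundary proven by k values
    kept = []
    for v in values:
        if cutoff_idx >= 0 and v < boundaries[cutoff_idx]:
            continue                # pruned before any sort or spill
        kept.append(v)
        hi = bisect.bisect_right(boundaries, v)   # v >= boundaries[:hi]
        for i in range(cutoff_idx + 1, hi):
            counts[i] += 1
            if counts[i] >= k:
                cutoff_idx = i      # raise the cutoff
    return sorted(kept, reverse=True)[:k]

# Example: top 3 of 1..1000 in random order; once three values >= 750 have
# been seen, everything below 750 is pruned without being sorted.
vals = list(range(1, 1001))
random.shuffle(vals)
print(topk_with_histogram(vals, 3, boundaries=[250, 500, 750]))  # [1000, 999, 998]
```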
F1 Query: Declarative Querying at Scale
Bart Samwel
Ahmed Aly
Thanh Do
Somayeh Sardashti
Jiexing Li
Jiacheng Yang
Chanjun Yang
Jason Govig
Andrew Harn
Zhan Yuan
Daniel Tenedorio
Colin Zheng
Allen Yan
Orri Erling
Yang Xia
Qiang Zeng
Divy Agrawal
Jun Xu
Mohan Yang
Andrey Gubichev
Felix Weigel
Yiqun Wei
Ben Handy
Anurag Biyani
Ian Rae
Amr El-Helw
Shivakumar Venkataraman
David G Wilhite
PVLDB (2018), pp. 1835-1848
F1 Query is a stand-alone, federated query processing platform that executes SQL queries against data stored in different file-based formats as well as different storage systems (e.g., BigTable, Spanner, Google Spreadsheets, etc.). F1 Query eliminates the need to maintain the traditional distinction between different types of data processing workloads by simultaneously supporting: (i) OLTP-style point queries that affect only a few records; (ii) low-latency OLAP querying of large amounts of data; and (iii) large ETL pipelines transforming data from multiple data sources into formats more suitable for analysis and reporting. F1 Query has also significantly reduced the need for developing hard-coded data processing pipelines by enabling declarative queries integrated with custom business logic. F1 Query satisfies key requirements that are highly desirable within Google: (i) it provides a unified view over data that is fragmented and distributed over multiple data sources; (ii) it leverages datacenter resources for performant query processing with high throughput and low latency; (iii) it provides high scalability for large data sizes by increasing computational parallelism; and (iv) it is extensible and uses innovative approaches to integrate complex business logic in declarative query processing. This paper presents the end-to-end design of F1 Query. Evolved out of F1, the distributed database that Google uses to manage its advertising data, F1 Query has been in production for multiple years at Google and serves the querying needs of a large number of users and systems.
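The paper's unification of point queries, OLAP, and ETL behind one declarative, extensible interface can be caricatured in a few lines: one engine, many storage "connectors", and pluggable business logic. Everything below is an invented toy to show the shape of federation, not F1 Query's API.

```python
import csv
import io

class MemSource:
    """Stands in for a storage backend (e.g. a Bigtable- or Spanner-like
    system); each source only needs to know how to produce rows."""
    def __init__(self, rows):
        self._rows = rows
    def scan(self):
        yield from self._rows

class CsvSource:
    """Stands in for a file-based format."""
    def __init__(self, text):
        self._text = text
    def scan(self):
        yield from csv.DictReader(io.StringIO(self._text))

class FederatedEngine:
    """One query interface over many sources, with user-defined functions
    (custom business logic) plugged into declarative processing."""
    def __init__(self):
        self.sources, self.udfs = {}, {}
    def register_source(self, name, source):
        self.sources[name] = source
    def register_udf(self, name, fn):
        self.udfs[name] = fn
    def select(self, source, where=None, project=None):
        for row in self.sources[source].scan():
            if where is None or where(row, self.udfs):
                yield project(row, self.udfs) if project else row

# Usage: the same query shape works over both backends.
engine = FederatedEngine()
engine.register_source("clicks", MemSource([{"cc": "US", "n": "3"}]))
engine.register_source("ads", CsvSource("cc,n\nDE,5\nUS,2\n"))
engine.register_udf("is_us", lambda row: row["cc"] == "US")
for row in engine.select("ads", where=lambda r, udfs: udfs["is_us"](r)):
    print(row)   # {'cc': 'US', 'n': '2'}
```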