Hongrae Lee
Authored Publications
LaMDA: Language Models for Dialog Applications
Aaron Daniel Cohen
Alena Butryna
Alicia Jin
Apoorv Kulshreshtha
Ben Zevenbergen
Chung-ching Chang
Cosmo Du
Daniel De Freitas Adiwardana
Dehao Chen
Dmitry (Dima) Lepikhin
Erin Hoffman-John
Igor Krivokon
James Qin
Jamie Hall
Joe Fenton
Johnny Soraker
Kathy Meier-Hellstern
Maarten Paul Bosma
Marc Joseph Pickett
Marcelo Amorim Menegali
Marian Croak
Maxim Krikun
Noam Shazeer
Rachel Bernstein
Ravi Rajakumar
Ray Kurzweil
Romal Thoppilan
Steven Zheng
Taylor Bos
Toju Duke
Tulsee Doshi
Vincent Y. Zhao
Will Rusch
Yuanzhong Xu
arXiv (2022)
Abstract
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows less improvement on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model's responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze their helpfulness and role consistency.
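The safety-filtering step described above can be pictured with a short sketch. Everything here is a hypothetical stand-in (the generator, the classifier callables, and the threshold are assumptions, not LaMDA's actual interfaces): sample several candidate responses, keep only those that clear a safety threshold, then rank the survivors by quality.

```python
# Hypothetical sketch of candidate-response filtering with a safety classifier.
# `generate_candidates`, `safety_score`, and `quality_score` stand in for a
# dialog model and fine-tuned classifiers; none of this is LaMDA's real API.
from typing import Callable, List

def pick_safe_response(
    context: str,
    generate_candidates: Callable[[str, int], List[str]],
    safety_score: Callable[[str, str], float],
    quality_score: Callable[[str, str], float],
    num_candidates: int = 16,
    safety_threshold: float = 0.8,
) -> str:
    """Sample candidates, drop unsafe ones, return the highest-quality survivor."""
    candidates = generate_candidates(context, num_candidates)
    safe = [c for c in candidates if safety_score(context, c) >= safety_threshold]
    if not safe:
        # Fall back to a canned response if every candidate was filtered out.
        return "I'm not sure how to respond to that."
    return max(safe, key=lambda c: quality_score(context, c))
```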
Abstract
Descriptive titles provide crucial context for interpreting tables that are extracted from web pages and are a key component of search features such as tabular featured snippets from Google and Bing. Prior approaches have attempted to produce titles by selecting existing text snippets associated with the table. These approaches, however, are limited by their dependence on suitable titles existing a priori. In our user study, we observe that the relevant information for the title tends to be scattered across the page, and often (more than 80% of the time) does not appear verbatim anywhere in the page. We propose instead the application of a sequence-to-sequence neural network model as a more generalizable approach for generating high-quality table titles. This is accomplished by extracting many text snippets that potentially contain information relevant to the table, encoding them into an input sequence, and using both copy and generation mechanisms in the decoder to balance relevance and readability of the generated title. We validate this approach with human evaluation on sample web tables and report that while sequence models with only a copy mechanism or only a generation mechanism are easily outperformed by simple selection-based baselines, the model with both capabilities performs the best, approaching the quality of crowdsourced titles while training on fewer than ten thousand examples. To the best of our knowledge, the proposed technique is the first to consider text-generation methods for table titles, and establishes a new state of the art.
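The copy-plus-generation decoding mentioned above can be summarized, under the standard pointer-generator formulation rather than this paper's exact architecture, as mixing a vocabulary distribution with an attention-weighted copy distribution over source tokens. The names and shapes below are illustrative assumptions.

```python
# Minimal sketch (not the paper's model) of one decode step that mixes a
# generation distribution over the vocabulary with a copy distribution over
# tokens from the encoded input snippets.
import numpy as np

def mixed_output_distribution(
    p_vocab: np.ndarray,        # (vocab_size,) softmax over the generation vocabulary
    attention: np.ndarray,      # (src_len,) attention weights over source tokens
    src_token_ids: np.ndarray,  # (src_len,) vocabulary ids of the source tokens
    p_gen: float,               # scalar gate in [0, 1]: generate vs. copy
) -> np.ndarray:
    """Return P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on copies of w."""
    p_final = p_gen * p_vocab
    # Scatter-add the copy probabilities onto the corresponding vocabulary ids.
    np.add.at(p_final, src_token_ids, (1.0 - p_gen) * attention)
    return p_final
```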
Ten Years of Web Tables
Michael J. Cafarella
Alon Halevy
Cong Yu
Daisy Zhe Wang
Eugene Wu
PVLDB (2018)
Abstract
In 2008, we wrote about WebTables, an effort to exploit the large and diverse set of structured databases casually published online in the form of HTML tables. The past decade has seen a flurry of research and commercial activity around the WebTables project itself, as well as the broad topic of informal online structured data. As exciting as the past decade has been, we think the next ten years hold even more promise. In this paper, we will review the WebTables project and try to place it in the broader context of the decade of work that followed. We will also propose an agenda for the next ten exciting years of work, a project that can draw upon many unexpected corners of the data management community.
Using SSDs to scale up Google Fusion Tables, a Database-in-the-Cloud
Yingyi Bu
Changkyu Kim
32nd IEEE International Conference on Data Engineering, ICDE 2016, Helsinki, Finland, May 16-20, 2016, IEEE Computer Society, pp. 1263-1274
Abstract
Flash memory solid state drives (SSDs) have increasingly been advocated and adopted as a means of speeding up and scaling up data-driven applications. SSDs are becoming more widely available as an option in the cloud. However, when an application considers SSDs in the cloud, the best option is not always obvious, given the number of choices for placing SSDs across the layers of the cloud. Although there have been many studies on SSDs, they often concern a specific setting, and how different SSD options in the cloud compare with each other is less well understood. In this paper, we describe how Google Fusion Tables (GFT) used SSDs and what optimizations were implemented to scale up its in-memory processing, clearly showing opportunities and limitations of SSDs in the cloud with quantitative analyses. We first discuss various SSD placement strategies and compare them with low-level measurements, and propose SSD-placement guidelines for a variety of cloud data services. We then present internals of our column engine and optimizations to better use the performance characteristics of SSDs. We empirically demonstrate that the optimizations enable us to scale our application to much larger datasets while retaining the low-latency and simple query processing architecture.
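Comparing SSD placements ultimately rests on low-level latency measurements. The sketch below is a generic microbenchmark, not the paper's methodology; the paths, block size, and read count are arbitrary assumptions, and the OS page cache (or the absence of O_DIRECT) can skew the numbers.

```python
# Hypothetical microbenchmark: time random block reads from a file so the same
# probe can be pointed at devices mounted in different layers of the cloud.
import os
import random
import time

def random_read_latency_us(path: str, block_size: int = 4096, reads: int = 1000) -> float:
    """Average latency (microseconds) of `reads` random block-aligned reads."""
    size = os.path.getsize(path)
    num_blocks = max(1, size // block_size)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(reads):
            offset = random.randrange(num_blocks) * block_size
            os.pread(fd, block_size, offset)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / reads * 1e6

# Example usage (paths are placeholders):
# print(random_read_latency_us("/mnt/local-ssd/data.bin"))
# print(random_read_latency_us("/mnt/network-ssd/data.bin"))
```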
Applying WebTables in Practice
Sreeram Balakrishnan
Alon Halevy
Boulos Harb
Warren Shen
Kenneth Wilder
Fei Wu
Cong Yu
Conference on Innovative Data Systems Research (2015)
Recent Progress Towards an Ecosystem of Structured Data on the Web
Nitin Gupta
Alon Y. Halevy
Boulos Harb
Fei Wu
Cong Yu
ICDE (2013), pp. 5-8
Efficient spatial sampling of large geographical tables
Anish Das Sarma
Hector Gonzalez
Alon Halevy
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, ACM, New York, NY, USA, pp. 193-204
Abstract
Large-scale map visualization systems play an increasingly important role in presenting geographic datasets to end users. Since these datasets can be extremely large, a map rendering system often needs to select a small fraction of the data to visualize them in a limited space. This paper addresses the fundamental challenge of thinning: determining appropriate samples of data to be shown on specific geographical regions and zoom levels. Other than the sheer scale of the data, the thinning problem is challenging for a number of other reasons: (1) data can consist of complex geographical shapes, (2) rendering of data needs to satisfy certain constraints, such as data being preserved across zoom levels and adjacent regions, and (3) after satisfying the constraints, an optimal solution needs to be chosen based on objectives such as maximality, fairness, and importance of data. This paper formally defines and presents a complete solution to the thinning problem. First, we express the problem as an integer programming formulation that efficiently solves thinning for desired objectives. Second, we present more efficient solutions for maximality, based on DFS traversal of a spatial tree. Third, we consider the common special case of point datasets, and present an even more efficient randomized algorithm. Finally, we have implemented all techniques from this paper in Google Maps visualizations of Fusion Tables, and we describe a set of experiments that demonstrate the tradeoffs among the algorithms.
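For point datasets, one simple way to honor the constraint that data be preserved across zoom levels is to assign each point a stable pseudo-random rank and show it whenever the rank clears a zoom-dependent threshold, so anything visible at zoom z stays visible at every deeper zoom. This is a generic hash-thresholding sketch, not necessarily the paper's randomized algorithm; the thresholds and per-zoom budget rule are illustrative assumptions.

```python
# Generic sketch of zoom-consistent point thinning via stable hash ranks.
import hashlib

def stable_rank(point_id: str) -> float:
    """Deterministic pseudo-random rank in [0, 1) derived from the point id."""
    digest = hashlib.sha256(point_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def visible_at_zoom(point_id: str, zoom: int, base_fraction: float = 0.001) -> bool:
    """Show roughly base_fraction * 4**zoom of the points at a given zoom level.

    Because the rank is fixed per point and the threshold only grows with zoom,
    any point shown at zoom z remains shown at all zoom levels greater than z.
    """
    threshold = min(1.0, base_fraction * (4 ** zoom))
    return stable_rank(point_id) < threshold
```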
CloudRAMSort: fast and efficient large-scale distributed RAM sort on shared-nothing cluster
Changkyu Kim
Jongsoo Park
Nadathur Satish
Pradeep Dubey
Jatin Chhugani
Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, ACM, New York, NY, USA, pp. 841-850