
More Google Contributions to the Broader Scientific Community

January 26, 2011

Posted by Corinna Cortes and Alfred Spector, Google Research

Googlers actively engage with the scientific community by publishing technical papers, contributing open-source packages, working on standards, introducing new APIs and tools, giving talks and presentations, participating in ongoing technical debates, and much more. Our publications offer technical and algorithmic advances, demonstrate things we learn as we develop novel products and services, and shed light on some of the technical challenges we face at Google.

In an effort to highlight some of our work, we periodically select a number of publications to be featured on this site. We first posted a set of papers on this blog in mid-2010 and subsequently discussed them in more detail in follow-up blog posts. This post highlights a few noteworthy papers authored or co-authored by Googlers in the latter half of 2010. In the coming weeks we will offer a more in-depth look at these publications, but here are some summaries:

Algorithms and Electronic Commerce

Robust Mechanisms for Risk-Averse Sellers
ACM Conference on Electronic Commerce (EC)
Mukund Sundararajan and Qiqi Yan, Stanford University

In his seminal Nobel Prize-winning work, Roger Myerson identified the revenue-maximizing auction for a risk-neutral auctioneer. In contrast, this work identifies good mechanisms for risk-averse auctioneers: mechanisms that trade a little revenue for better certainty, in the best possible way. We expect this work to help guide reserve-price selection in auctions where auctioneers/sellers want better control over their revenue.
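To make the revenue-versus-certainty trade-off concrete, here is a small simulation (illustrative only, not the paper's mechanisms): a second-price auction with bidders whose values are uniform on [0, 1]. Raising the reserve price raises expected revenue, the effect Myerson identified, but it also raises revenue variance, which is exactly what a risk-averse seller cares about.

```python
import random
import statistics

def second_price_revenue(bids, reserve):
    """Revenue of a second-price auction with a reserve price."""
    qualified = sorted((b for b in bids if b >= reserve), reverse=True)
    if not qualified:
        return 0.0                     # no sale
    if len(qualified) == 1:
        return reserve                 # lone qualifier pays the reserve
    return max(qualified[1], reserve)  # winner pays max(2nd bid, reserve)

def simulate(reserve, n_bidders=2, rounds=20000, seed=0):
    rng = random.Random(seed)
    revs = [second_price_revenue([rng.random() for _ in range(n_bidders)], reserve)
            for _ in range(rounds)]
    return statistics.mean(revs), statistics.stdev(revs)

# With uniform [0,1] values and two bidders, Myerson's revenue-optimal
# reserve is 0.5 -- but it also makes revenue riskier (more no-sale
# rounds) than a lower reserve does.
for r in (0.0, 0.3, 0.5):
    mean, std = simulate(r)
    print(f"reserve={r:.1f}  mean revenue={mean:.3f}  std={std:.3f}")
```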

Monitoring Algorithms for Negative Feedback Systems
World Wide Web Conference (WWW)
Mark Sandler and S. Muthukrishnan

In negative feedback systems, users report abusive content at a site to its owner for consideration or removal, but the users might not be honest. For the site owners, this represents a trade-off between vetting such user reports by humans vs. accepting them without vetting. This paper presents a mathematical framework for design and analysis of such systems and presents algorithms with provably good trade-offs against malicious users.
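As a toy illustration of the vetting trade-off (not the paper's algorithms, which come with provable guarantees against malicious users), consider a monitor that audits every report from a reporter until their audited accuracy is established, and then only spot-checks a small random fraction:

```python
import random

class ReportMonitor:
    """Toy monitor: audit a reporter's abuse flags until their audited
    accuracy is established, then spot-check a small random fraction.
    (Illustrative only; the paper's algorithms are more sophisticated.)"""

    def __init__(self, trust_after=5, spot_check_rate=0.1, seed=0):
        self.rng = random.Random(seed)
        self.stats = {}  # reporter -> (audited, correct)
        self.trust_after = trust_after
        self.spot_check_rate = spot_check_rate

    def should_audit(self, reporter):
        audited, correct = self.stats.get(reporter, (0, 0))
        if audited < self.trust_after or correct < audited:
            return True                                  # not yet trusted
        return self.rng.random() < self.spot_check_rate  # random spot check

    def record_audit(self, reporter, was_correct):
        audited, correct = self.stats.get(reporter, (0, 0))
        self.stats[reporter] = (audited + 1, correct + (1 if was_correct else 0))

monitor = ReportMonitor(seed=1)
for _ in range(5):
    if monitor.should_audit("reporter-a"):
        monitor.record_audit("reporter-a", was_correct=True)
print(monitor.should_audit("reporter-a"))  # now usually auto-accepted
```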

Allison Druin, University of Maryland, Elizabeth Foss, University of Maryland, Hilary Hutchinson, Evan Golub, University of Maryland, and Leshell Hatley, University of Maryland

In this paper, we describe seven search roles children display as information seekers using Internet keyword interfaces, based on a home study of 83 children ages 7, 9, and 11.

Machine Learning

Large Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings
European Conference on Machine Learning (ECML) Best Paper
Jason Weston, Samy Bengio, and Nicolas Usunier, Université Paris 6 - LIP6

In this paper, we introduce a generic framework for finding a joint representation of images and their labels, which can then be used for various tasks, including image ranking and image annotation. We also propose an efficient training algorithm that scales to tens of millions of images and hundreds of thousands of labels, while focusing training on making good predictions at the top of the ranked list. The models are fast at prediction time and have low memory usage, making it possible to house such systems on a laptop or mobile device.
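The core idea, scoring images and labels by an inner product in a shared low-dimensional space, can be sketched in a few lines. This toy uses random, untrained parameters and tiny dimensions; the paper learns the maps with a ranking loss and uses far larger sizes.

```python
import random

random.seed(0)
d_embed, d_image, n_labels = 8, 16, 5   # toy sizes only

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

V = rand_matrix(d_embed, d_image)   # image-side linear map into the joint space
W = rand_matrix(n_labels, d_embed)  # one embedding vector per label

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def annotate(image_features, k=3):
    """Rank all labels for an image by similarity in the joint space.
    (At training time, V and W are learned with a ranking loss that
    emphasizes precision at the top of this ranked list.)"""
    z = [dot(row, image_features) for row in V]   # embed the image
    scores = [dot(w, z) for w in W]               # one score per label
    return sorted(range(n_labels), key=lambda i: -scores[i])[:k]

x = [random.gauss(0, 1) for _ in range(d_image)]
print(annotate(x))
```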

Overlapping Experiment Infrastructure: More, Better, Faster Experimentation
Knowledge Discovery and Data Mining (KDD)
Diane Tang, Ashish Agarwal, Deirdre O'Brien, and Mike Meyer

Google's data-driven culture requires running a large number of live traffic experiments. This paper describes Google's overlapping experiment infrastructure, in which a single event (e.g., a web search) can be assigned to multiple simultaneous large experiments. The infrastructure and supporting tools provide a framework for running experiments from design through decision making and launch, and can be generalized to many other web applications.
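A minimal sketch of the layered-assignment idea, with hypothetical layer and experiment names: hashing each event independently per layer lets one request participate in one experiment from every layer at once, instead of being consumed by a single experiment.

```python
import hashlib

def bucket(cookie, layer, n_buckets=1000):
    """Deterministically assign a request to a bucket within a layer.
    Independent hashes per layer mean one event can belong to one
    experiment in *each* layer simultaneously."""
    digest = hashlib.md5(f"{layer}:{cookie}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# Hypothetical layer configs: each experiment owns a bucket range in its layer.
LAYERS = {
    "ui":      [("blue_links", range(0, 100)), ("big_font", range(100, 200))],
    "ranking": [("new_scorer", range(0, 500))],
}

def experiments_for(cookie):
    assigned = []
    for layer, experiments in LAYERS.items():
        b = bucket(cookie, layer)
        for name, buckets in experiments:
            if b in buckets:
                assigned.append(name)
    return assigned

print(experiments_for("user-cookie-42"))
```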

North American Chapter of the Association for Computational Linguistics (NAACL)
Slav Petrov

It is well known that the Expectation Maximization algorithm can converge to widely varying local maxima. This paper shows that this can be advantageous when learning latent variable grammars for syntactic parsing. By combining multiple state-of-the-art individual grammars into an unweighted product model, parsing accuracy can be improved from 90.2% to 91.8% for English, and from 80.3% to 84.5% for German.
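The combination step itself is simple: an unweighted product model scores each candidate by the product of the individual grammars' probabilities (equivalently, the sum of their log-probabilities). A toy example with two hand-made "grammars" over three candidate parses, not the paper's actual grammars:

```python
import math

def product_model_score(parse, grammars):
    """Unweighted product of experts: sum of log-probabilities.
    The best candidate under the product can differ from any single
    grammar's favorite."""
    return sum(math.log(g[parse]) for g in grammars)

# Toy: two "grammars" as distributions over three candidate parses A, B, C.
g1 = {"A": 0.5, "B": 0.4, "C": 0.1}
g2 = {"A": 0.1, "B": 0.6, "C": 0.3}

best = max(g1, key=lambda p: product_model_score(p, [g1, g2]))
print(best)  # "B" wins the product even though g1 alone prefers "A"
```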

Software Engineering
International Symposium on Code Generation and Optimization (CGO)
Jason Mars, University of Virginia, Neil Vachharajani, Robert Hundt, Mary Lou Soffa, University of Virginia

This paper takes a big step toward addressing an important and pressing problem in computer science today: contention between applications sharing a datacenter server. It presents a lightweight runtime solution that improves the utilization of datacenter servers by as much as 58%. The work also received the CGO 2010 Best Presentation Award.

Speech

Say What: Why Users Choose to Speak Their Web Queries
Maryam Kamvar and Doug Beeferman

Say what? Have you been speaking your search queries into your mobile device rather than typing them? Spoken search is available on Android, iPhone, and BlackBerry devices, and we see an increasing number of searches coming in by voice on these phones. In this paper we investigate, at an aggregate level, which factors are most predictive of spoken queries. Understanding the contexts in which speech-driven search is used (or, conversely, not used) can help improve recognition engines and spoken interface design. So, save keystrokes and say your query!

Query Language Modeling for Voice Search
IEEE Workshop on Spoken Language Technology
Ciprian Chelba, Johan Schalkwyk, Thorsten Brants, Vida Ha, Boulos Harb, Will Neveitt, Carolina Parada*, Johns Hopkins University, and Peng Xu

The paper describes language modeling for google.com query data and its application to speech recognition in Google Voice Search.
Our empirical findings include:
  • a 10% relative reduction in word error rate (WER) from large-scale modeling;
  • a lesser-known yet potentially quite detrimental interaction between Kneser-Ney smoothing and entropy pruning (approximately a 10% relative increase in WER);
  • evidence that hints at non-stationarity of the query stream; and
  • surprisingly strong dependence across three English locales: the USA, Britain, and Australia.
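To show what query language modeling involves at toy scale, here is a bigram model with absolute discounting and unigram backoff, a simpler cousin of the Kneser-Ney smoothing studied in the paper. The query "log" and discount value are made up for illustration.

```python
from collections import Counter

# Toy query log; a real voice-search LM is trained on vastly more data.
queries = ["pizza near me", "weather today", "pizza delivery", "weather near me"]
tokenized = [q.split() for q in queries]

unigram = Counter(w for q in tokenized for w in q)
bigram = Counter(b for q in tokenized for b in zip(q, q[1:]))
total = sum(unigram.values())

def p_bigram(w2, w1, discount=0.5):
    """Absolute-discounted bigram probability with unigram backoff:
    subtract a fixed discount from each seen bigram count and give the
    freed mass to the unigram distribution."""
    c1 = unigram[w1]
    if c1 == 0:
        return unigram[w2] / total            # back off entirely
    c12 = bigram[(w1, w2)]
    n_types = len([b for b in bigram if b[0] == w1])  # distinct continuations
    backoff_mass = discount * n_types / c1
    return max(c12 - discount, 0) / c1 + backoff_mass * unigram[w2] / total

print(p_bigram("me", "near"))
```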

Structured Data
Very Large Data Bases (VLDB)
Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, and Theo Vassilakis, Google Inc.

Dremel is a scalable, interactive ad hoc query system. By combining multi-level execution trees with a columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system is widely used at Google and serves as the foundational technology behind BigQuery, a product launched in limited preview mode.
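The benefit of a columnar layout is easy to see in miniature: an aggregation query reads only the columns it touches, never whole records. A toy sketch of that idea alone (Dremel itself adds multi-level serving trees, nested records, and massive parallelism):

```python
# Column-oriented toy table: each field is stored contiguously, so an
# aggregation touches only the columns it needs.
table = {
    "country":    ["US", "DE", "US", "FR", "US"],
    "latency_ms": [120, 80, 95, 110, 70],
    "url":        ["a", "b", "c", "d", "e"],  # never read by the query below
}

def avg_latency(table, country):
    """SELECT AVG(latency_ms) WHERE country = ? -- reads two columns only."""
    matched = [ms for c, ms in zip(table["country"], table["latency_ms"])
               if c == country]
    return sum(matched) / len(matched)

print(avg_latency(table, "US"))  # (120 + 95 + 70) / 3 = 95.0
```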

Systems and Infrastructure
USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Daniel Peng and Frank Dabek

In the past, Google accumulated a whole day’s worth of changes to the web and ran a series of enormous MapReduces to apply this batch of changes to our index of the web. This system led to a delay of several days between crawling a document and presenting it to users in search results. To meet our goal of reducing the indexing delay to minutes, we needed to update the index as each individual document was crawled, rather than in daily batches. No existing infrastructure supported this kind of incremental transformation at web scale, so we built Percolator: a framework for transforming a large repository using small ACID transactions.
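A toy sketch of the observer idea, where a write triggers incremental re-computation instead of a batch rebuild. Real Percolator provides cross-row ACID transactions on top of Bigtable; the lock and in-memory store here are only stand-ins.

```python
import threading

class ToyPercolator:
    """Minimal observer sketch: writing to a watched column triggers a
    registered callback, so changes propagate incrementally instead of
    waiting for a daily batch job. (Illustrative stand-in only.)"""

    def __init__(self):
        self.store = {}
        self.lock = threading.Lock()   # toy stand-in for real transactions
        self.observers = {}            # column -> callback

    def observe(self, column, callback):
        self.observers[column] = callback

    def write(self, row, column, value):
        with self.lock:                # "transaction"
            self.store[(row, column)] = value
        if column in self.observers:   # fire observer outside the lock
            self.observers[column](self, row, value)

index = {}  # word -> set of URLs

def index_document(db, url, contents):
    # Observer: runs whenever a new document body is written.
    for word in set(contents.split()):
        index.setdefault(word, set()).add(url)
    db.write(url, "indexed", True)

db = ToyPercolator()
db.observe("contents", index_document)
db.write("http://example.com", "contents", "hello world")
print(index["hello"])
```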

Availability in Globally Distributed Storage Systems
USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Daniel Ford, Francois Labelle, Florentina Popovici, Murray Stokely, Van-Anh Truong*, Columbia University, Luiz Barroso, Carrie Grimes, and Sean Quinlan

In our paper, we characterize the availability of cloud storage systems, based on extensive monitoring of Google's main storage infrastructure, and the sources of failure which affect availability. We also present statistical models for reasoning about the impact of design choices such as data placement, recovery speed, and replication strategies, including replication across multiple data centers.
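As a simple worked example of the kind of reasoning such models support, here is the naive independence calculation for replica availability. One of the paper's key findings is that failures are often correlated (e.g., whole-rack or whole-datacenter events), which is precisely why cross-datacenter replication matters and why independence models like this one overestimate availability.

```python
def availability(node_avail, replicas, sites=1, site_avail=1.0):
    """Probability that at least one replica is reachable, assuming
    independent node failures within a site and independent sites.
    (A naive model -- correlated failures make real availability lower.)"""
    per_site = 1 - (1 - node_avail) ** replicas   # >=1 replica up in a site
    return 1 - (1 - site_avail * per_site) ** sites

# Three replicas in one datacenter vs. three replicas in each of two.
print(availability(0.99, replicas=3, site_avail=0.999))
print(availability(0.99, replicas=3, sites=2, site_avail=0.999))
```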

IEEE International Conference on Data Mining (ICDM)
Sergey Ioffe

With huge amounts of very high-dimensional data such as images and videos, we frequently need to "sketch" the data -- that is, represent it in a much more compact form while still being able to accurately determine how different any two items are. In this paper, we describe a sketching method for L1, one of the most common distance measures. It works by first hashing the data with a new algorithm and then compressing each hash to a small number of bits, where the compression is learned from data. The method is fast, estimates distances accurately, and reduces storage requirements by a factor of 100.
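The paper's hash-then-compress method is its own contribution, but the flavor of L1 sketching can be illustrated with a classic alternative technique: projections onto Cauchy (1-stable) random vectors, where the median absolute difference between two sketches estimates the L1 distance between the original vectors.

```python
import math
import random
import statistics

def cauchy_sketch(vec, k=1000, seed=0):
    """Project vec onto k shared Cauchy (1-stable) random directions.
    A fixed seed means every vector is projected onto the *same*
    directions, so any two sketches are comparable."""
    rng = random.Random(seed)
    out = []
    for _ in range(k):
        # One Cauchy sample per coordinate via the inverse CDF (tan).
        out.append(sum(math.tan(math.pi * (rng.random() - 0.5)) * v
                       for v in vec))
    return out

def estimate_l1(sketch_x, sketch_y):
    """For 1-stable projections, median|s_x - s_y| estimates ||x - y||_1."""
    return statistics.median(abs(a - b) for a, b in zip(sketch_x, sketch_y))

x = [1.0, 0.0, 3.0, 2.0]
y = [0.0, 1.0, 1.0, 2.0]   # true L1 distance: 1 + 1 + 2 + 0 = 4
print(estimate_l1(cauchy_sketch(x), cauchy_sketch(y)))
```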

*) work carried out while at Google