# Gagan Aggarwal

## Authored Publications


Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions

Divyarthi Mohan

Alex Psomas

Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track

Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown. The seller is now faced with a multi-parameter mechanism design problem. Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical. In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a "Myersonian" approach and study allocation rules that are monotone both in the bid and in the set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity doesn't hold for standard algorithms, e.g. the incremental bang-per-buck order, that give good approximations to "knapsack-like" problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than 2 (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a 3-approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called *Generalized Second Price (GSP)* payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under standard no-overbidding assumptions, we prove bounds on the pure and Bayes-Nash PoA. Finally, we experimentally test our algorithms on real-world data.
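The GSP payment described in the abstract charges the minimum threshold bid below which the allocation changes. Because the allocation rule is monotone in the bid, that threshold can be found by binary search. A minimal sketch, where the `allocation_rule` interface and the `highest_bid_wins` demo rule are illustrative assumptions, not the paper's algorithm:

```python
def gsp_threshold_price(allocation_rule, bids, bidder, eps=1e-2):
    """Binary-search the minimum bid at which `bidder` keeps its current
    allocation; valid because the rule is monotone in the bid."""
    current = allocation_rule(bids)[bidder]
    lo, hi = 0.0, bids[bidder]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if allocation_rule({**bids, bidder: mid})[bidder] == current:
            hi = mid  # still gets the same allocation: threshold is at or below mid
        else:
            lo = mid  # allocation changed: threshold is above mid
    return hi

# Illustrative monotone rule: the single highest bid wins the one slot.
def highest_bid_wins(bids):
    winner = max(bids, key=lambda b: bids[b])
    return {b: ("slot" if b == winner else None) for b in bids}

price = gsp_threshold_price(highest_bid_wins, {"a": 5.0, "b": 3.0}, "a")
# price is approximately 3.0: the lowest bid at which "a" still wins
```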

Biobjective Online Bipartite Matching

Yang Cai

George Pierrakos

Workshop on Internet and Network Economics, Springer (2014), pp. 218-231

Motivated by Online Ad allocation when there are multiple conflicting objectives, we introduce and study the problem of Biobjective Online Bipartite Matching, a strict generalization of the standard setting of Karp, Vazirani and Vazirani, where we are allowed to have edges of two colors, and the goal is to find a matching that is both large and balanced at the same time. We study both deterministic and randomized algorithms for this problem; after showing that the single color upper bounds of 1/2 and 1 − 1/e carry over to our biobjective setting as well, we show that a very natural, albeit hard to analyze, deterministic algorithm achieves a competitive ratio of 0.343. We next show how a natural randomized algorithm matches this ratio through a simpler analysis, and how a clever, and perhaps not immediately obvious, generalization of Ranking can beat the 1/2 bound and get a competitive ratio of 0.573, coming close to matching the upper bound of 0.63.
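For context, the Ranking algorithm of Karp, Vazirani and Vazirani, which the generalization above builds on, can be sketched as follows (function names and input format are illustrative):

```python
import random

def ranking(offline, arrivals, seed=None):
    """KVV Ranking: fix one random permutation (ranking) of the offline
    vertices up front, then match each arriving online vertex to its
    highest-ranked unmatched neighbor.  Achieves the optimal 1 - 1/e
    competitive ratio in expectation."""
    rng = random.Random(seed)
    order = list(offline)
    rng.shuffle(order)
    rank = {u: i for i, u in enumerate(order)}
    match = {}  # offline vertex -> online vertex
    for v, nbrs in arrivals:  # arrivals: list of (online vertex, neighbors)
        free = [u for u in nbrs if u not in match]
        if free:
            match[min(free, key=rank.get)] = v  # smallest index = highest rank
    return match

m = ranking(["a", "b"], [("v1", ["a", "b"]), ("v2", ["a", "b"])], seed=1)
# on this complete 2x2 graph, every permutation yields a perfect matching
```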

Online Selection of Diverse Results

Debmalya Panigrahi

Atish Das Sarma

Proceedings of the 5th ACM International Conference on Web Search and Data Mining (2012), pp. 263-272

The phenomenal growth in the volume of easily accessible information via various web-based services has made it essential for service providers to provide users with personalized representative summaries of such information. Further, online commercial services including social networking and micro-blogging websites, e-commerce portals, leisure and entertainment websites, etc. recommend interesting content to users that is simultaneously diverse on many different axes such as topic, geographic specificity, etc.
The key algorithmic question in all these applications is the generation of a succinct, representative, and relevant summary from a large stream of data coming from a variety of sources. In this paper, we formally model this optimization problem, identify its key structural characteristics, and use these observations to design an extremely scalable and efficient algorithm. We analyze the algorithm using theoretical techniques to show that it always produces a nearly optimal solution. In addition, we perform large-scale experiments on both real-world and synthetically generated datasets, which confirm that our algorithm performs even better than its analytical guarantees in practice, and also outperforms other candidate algorithms for the problem by a wide margin.

Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations

Chinmay Karande

Proceedings of ACM-SIAM Symposium on Discrete Algorithms (2011)

We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph. The vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem, for which Karp, Vazirani and Vazirani gave an optimal (1 − 1/e)-competitive algorithm in their seminal work [KVV90]. Our main result is an optimal (1 − 1/e)-competitive randomized algorithm for general vertex weights. We use random perturbations of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [KVV90] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [MSVV05] in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of [MSVV05, BJN07], which apply when the bids are much smaller than the budgets.
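The multiplicative perturbation idea from the abstract can be sketched as follows. The scaling function psi(x) = 1 − e^(x − 1) is one appropriate choice used in this line of work; the function names and input format here are illustrative, not the paper's exact presentation:

```python
import math
import random

def perturbed_greedy(weights, arrivals, seed=None):
    """Vertex-weighted online matching sketch: each offline vertex u draws
    x_u uniformly from [0, 1] once, its weight is scaled by the multiplicative
    factor psi(x_u) = 1 - exp(x_u - 1), and each arriving online vertex is
    matched to the free neighbor with the largest perturbed weight.
    With equal weights this degenerates to Ranking."""
    rng = random.Random(seed)
    factor = {u: 1 - math.exp(rng.random() - 1) for u in weights}
    match = {}  # offline vertex -> online vertex
    for v, nbrs in arrivals:  # arrivals: list of (online vertex, neighbors)
        free = [u for u in nbrs if u not in match]
        if free:
            match[max(free, key=lambda u: weights[u] * factor[u])] = v
    return match

m = perturbed_greedy({"a": 2.0, "b": 1.0},
                     [("v1", ["a", "b"]), ("v2", ["a", "b"])], seed=7)
# complete 2x2 graph: both offline vertices end up matched
```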

Achieving anonymity via clustering

Tomás Feder

Krishnaram Kenthapadi

Samir Khuller

Rina Panigrahy

Dilys Thomas

ACM Transactions on Algorithms, vol. 6 (2010), 49:1-49:19

Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach to de-identifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code. Sweeney proposed the k-anonymity model for privacy, where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general than k-anonymity since we have a much larger choice for cluster centers. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an epsilon fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well.
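The publication step the abstract describes, releasing only cluster centers while enforcing a minimum cluster size, can be sketched as below. This illustrates the output format and the size constraint only, not the paper's constant-factor approximation algorithm; the function name and record layout are assumptions:

```python
def publish_centers(records, assignment, r):
    """Group records (tuples of numeric quasi-identifiers) by cluster id,
    reject any clustering in which some cluster has fewer than r records,
    and release only the coordinate-wise mean of each cluster."""
    clusters = {}
    for rec, cid in zip(records, assignment):
        clusters.setdefault(cid, []).append(rec)
    if any(len(members) < r for members in clusters.values()):
        raise ValueError(f"every cluster needs at least r={r} records")
    return {cid: tuple(sum(col) / len(members) for col in zip(*members))
            for cid, members in clusters.items()}

centers = publish_centers([(1, 2), (3, 4), (10, 10), (12, 14)],
                          [0, 0, 1, 1], r=2)
# centers: {0: (2.0, 3.0), 1: (11.0, 12.0)}
```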

Efficiency of (Revenue-)Optimal Mechanisms

Proceedings of the 10th ACM Conference on Electronic Commerce (2009)

We compare the expected efficiency of revenue maximizing (or optimal) mechanisms with that of efficiency maximizing ones. We show that the efficiency of the revenue maximizing mechanism for selling a single item with (k + log_{e/(e−1)} k + 1) bidders is at least as much as the efficiency of the efficiency-maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Θ(log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency maximizing mechanism with k + 1 bidders is no less than the revenue of the revenue maximizing mechanism with k bidders.
We extend our result to the case of selling t identical items and show that 2.2 log k + tΘ(log log k) extra bidders suffice for the revenue maximizing mechanism to match the efficiency of the efficiency maximizing mechanism.
In order to prove our results, we classify Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions such that, for each class in our classification, there is a member of this family that is point-wise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
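For reference, the Monotone Hazard Rate condition used throughout this abstract says that a distribution F with density f satisfies

```latex
h(x) \;=\; \frac{f(x)}{1 - F(x)} \qquad \text{is non-decreasing in } x.
```

For example, the exponential distribution F(x) = 1 − e^{−λx} has constant hazard rate h(x) = λ, and the uniform distribution on [0, 1] has h(x) = 1/(1 − x), which is increasing; both are therefore MHR.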

Sponsored Search Auctions for Markovian Users

S. Muthukrishnan

Fourth Workshop on Ad Auctions; Workshop on Internet and Network Economics (WINE) (2008)

Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. The most popular auction for sponsored search is the “Generalized Second Price” (GSP) auction where advertisers are assigned to slots in the decreasing order of their score, which is defined as the product of their bid and click-through rate. One of the main advantages of this simple ranking is that bidding strategy is intuitive: to move up to a more prominent slot on the results page, bid more. This makes it simple for advertisers to strategize. However this ranking only maximizes efficiency under the assumption that the probability of a user clicking on an ad is independent of the other ads shown on the page. We study a Markovian user model that does not make this assumption. Under this model, the most efficient assignment is no longer a simple ranking function as in GSP. We show that the optimal assignment can be found efficiently (even in near-linear time). As a result of the more sophisticated structure of the optimal assignment, bidding dynamics become more complex: indeed it is no longer clear that bidding more moves one higher on the page. Our main technical result is that despite the added complexity of the bidding dynamics, the optimal assignment has the property that ad position is still monotone in bid. Thus even in this richer user model, our mechanism retains the core bidding dynamics of the GSP auction that make it useful for advertisers.
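The classic GSP ranking and pricing the abstract starts from, ordering ads by bid times click-through rate and charging the minimum bid that keeps the slot, can be sketched as follows (function name and data layout are illustrative):

```python
def gsp(bids, ctrs, num_slots):
    """Rank advertisers by score = bid * click-through rate; the winner of
    each slot pays (per click) the next advertiser's score divided by the
    winner's own click-through rate, i.e. the smallest bid keeping its slot."""
    ranked = sorted(bids, key=lambda a: bids[a] * ctrs[a], reverse=True)
    outcome = {}
    for slot, a in enumerate(ranked[:num_slots]):
        if slot + 1 < len(ranked):
            nxt = ranked[slot + 1]
            price = bids[nxt] * ctrs[nxt] / ctrs[a]
        else:
            price = 0.0  # no advertiser below; a reserve price would apply here
        outcome[a] = (slot, price)  # (slot index, price per click)
    return outcome

res = gsp({"a": 4.0, "b": 3.0, "c": 1.0},
          {"a": 0.5, "b": 1.0, "c": 1.0}, num_slots=2)
# scores are b: 3.0, a: 2.0, c: 1.0, so "b" takes the top slot
```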

Theory research at Google

Nir Ailon

Florin Constantin

Eyal Even-Dar

Gereon Frahling

Monika R. Henzinger

S. Muthukrishnan

Noam Nisan

Anastasios Sidiropoulos

SIGACT News, vol. 39 (2008), pp. 10-28

Achieving Anonymity via Clustering

Tomás Feder

Krishnaram Kenthapadi

Samir Khuller

Rina Panigrahy

Dilys Thomas

Proceedings of the 25th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS) (2006), pp. 153-162

Bidding to the Top: VCG and Equilibria of Position-Based Auctions

S. Muthukrishnan

Proceedings of the Fourth Workshop on Approximation and Online Algorithms (WAOA) (2006)

Many popular search engines run an auction to determine the placement of advertisements next to search results. Current auctions at Google and Yahoo! let advertisers specify a single amount as their bid in the auction. This bid is interpreted as the maximum amount the advertiser is willing to pay per click on its ad. When search queries arrive, the bids are used to rank the ads linearly on the search result page. Advertisers seek to be high on the list, as this attracts more attention and more clicks. The advertisers pay for each user who clicks on their ad, and the amount charged depends on the bids of all the advertisers participating in the auction.
We study the problem of ranking ads and associated pricing mechanisms when the advertisers not only specify a bid, but additionally express their preference for positions in the list of ads. In particular, we study prefix position auctions where advertiser i can specify that she is interested only in the top κ_i positions.
We present a simple allocation and pricing mechanism that generalizes the desirable properties of current auctions that do not have position constraints. In addition, we show that our auction has an envy-free or symmetric Nash equilibrium with the same outcome in allocation and pricing as the well-known truthful Vickrey-Clarke-Groves (VCG) auction. Furthermore, we show that this equilibrium is the best such equilibrium for the advertisers in terms of the profit made by each advertiser. We also discuss other position-based auctions.
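The VCG outcome that the equilibrium above is compared against can be computed directly for a position auction without position constraints. A sketch, assuming decreasing per-slot click rates (the function name and input layout are illustrative):

```python
def vcg_position_payments(bids, ctrs):
    """VCG for a simple position auction: bidders sorted by bid take slots
    with decreasing click rates `ctrs`; the occupant of slot i pays its
    externality, i.e. the total click-rate gain the lower bidders would
    enjoy by each moving up one slot if bidder i were removed."""
    order = sorted(bids, key=lambda a: bids[a], reverse=True)
    k = min(len(order), len(ctrs))  # number of filled slots
    alpha = list(ctrs) + [0.0]      # the rate "below" the last slot is 0
    payments = {}
    for i in range(k):
        payments[order[i]] = sum(
            bids[order[j]] * (alpha[j - 1] - alpha[j])
            for j in range(i + 1, min(len(order), k + 1))
        )
    return payments

pay = vcg_position_payments({"a": 10.0, "b": 4.0, "c": 2.0}, [1.0, 0.5])
# "a" pays 4*(1.0-0.5) + 2*(0.5-0) = 3.0; "b" pays 2*(0.5-0) = 1.0
```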

The load rebalancing problem

Anonymizing Tables

Tomás Feder

Krishnaram Kenthapadi

Rajeev Motwani

Rina Panigrahy

Dilys Thomas

ICDT (2005), pp. 246-258

Two Can Keep A Secret: A Distributed Architecture for Secure Database Services

Mayank Bawa

Prasanna Ganesan

Hector Garcia-Molina

Krishnaram Kenthapadi

Rajeev Motwani

Utkarsh Srivastava

Dilys Thomas

Ying Xu

CIDR (2005)

Algorithms for the Database Layout Problem

Derandomization of auctions

Amos Fiat

Andrew V. Goldberg

Jason D. Hartline

Nicole Immorlica

Madhu Sudan

STOC (2005)

Vision Paper: Enabling Privacy for the Paranoids

Mayank Bawa

Prasanna Ganesan

Hector Garcia-Molina

Krishnaram Kenthapadi

Nina Mishra

Rajeev Motwani

Utkarsh Srivastava

Dilys Thomas

Jennifer Widom

Ying Xu

VLDB (2004)

Complexities for generalized models of self-assembly

Algorithms for Multi-product Pricing

Secure Computation of the kth-Ranked Element

On Identifying Stable Ways to Configure Systems

Switch Scheduling via Randomized Edge Coloring
