Gagan Aggarwal
Authored Publications
Auto-bidding and Auctions in Online Advertising: A Survey
Ashwinkumar Badanidiyuru Varadaraja
Christopher Liaw
Haihao (Sean) Lu
Andres Perlroth
Georgios Piliouras
Ariel Schvartzman
Kelly Spendlove
Hanrui Zhang
Mingfei Zhao
ACM SIGecom Exchanges, 22 (2024)
Abstract
In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen as markets embrace autobidding, and cover a range of topics in this area, including bidding algorithms, equilibrium analysis and efficiency of common auction formats, and optimal auction design.
Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions
Divyarthi Mohan
Alex Psomas
Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track
Abstract
Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown. The seller is now faced with a multi-parameter mechanism design problem. Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical. In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a "Myersonian" approach and study allocation rules that are monotone both in the bid and in the set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity does not hold for standard algorithms, e.g. the incremental bang-per-buck order, that give good approximations to "knapsack-like" problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than 2 (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a 3-approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called Generalized Second Price (GSP) payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under standard no-overbidding assumptions, we prove bounds on both the pure and Bayes-Nash PoA. Finally, we experimentally test our algorithms on real-world data.
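The abstract does not reproduce the paper's 3-approximate rule, but the shape of a bid-monotone greedy allocation is easy to illustrate. Below is a minimal, hypothetical Python sketch, not the paper's algorithm and with no approximation guarantee claimed: advertisers are considered in decreasing bid order and each receives its smallest format that still fits, which keeps the rule monotone both in the bid and in the offered format set.

```python
from dataclasses import dataclass

@dataclass
class Advertiser:
    name: str
    bid: float                 # one bid per advertiser
    format_sizes: list[int]    # sizes of the rich-ad formats it can show

def greedy_allocate(advertisers: list[Advertiser], capacity: int) -> dict[str, int]:
    """Consider advertisers in decreasing bid order; give each its smallest
    format that still fits.  Raising a bid only moves an advertiser earlier
    in the order, so a winner keeps winning (bid-monotone), and offering
    more formats can only help (monotone in the set of rich ads)."""
    allocation: dict[str, int] = {}
    remaining = capacity
    for adv in sorted(advertisers, key=lambda a: a.bid, reverse=True):
        fitting = [s for s in adv.format_sizes if s <= remaining]
        if fitting:
            chosen = min(fitting)          # smallest feasible format
            allocation[adv.name] = chosen
            remaining -= chosen
    return allocation

ads = [Advertiser("a", 5.0, [2, 4]),
       Advertiser("b", 3.0, [3]),
       Advertiser("c", 4.0, [1])]
print(greedy_allocate(ads, capacity=5))    # -> {'a': 2, 'c': 1}
```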
Abstract
Autobidding is becoming increasingly important in the domain of online advertising and has become a critical tool used by many advertisers for optimizing their ad campaigns. We formulate fundamental questions around the problem of bidding for performance under very general affine cost constraints. We design optimal single-agent bidding strategies for the general bidding problem in multi-slot truthful auctions. We show that there is an intimate connection between bidding and auction design, in that the bidding formula is optimal if and only if the underlying auction is truthful. We show how an MWU (multiplicative weights update) algorithm can be used to learn this optimal bidding formula.
Next, we move from the single-agent view to taking a full-system view: What happens when all advertisers adopt optimal autobidding? We prove that in general settings, there exists an equilibrium between the bidding agents for all the advertisers. Further, we prove a Price of Anarchy result: For the general affine constraints, the total value (conversions) obtained by the advertisers in the bidding agent equilibrium is no less than 1/2 of what we could generate via a centralized ad allocation scheme, one which does not consider any auction incentives or provide any per-advertiser guarantee.
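As a hedged illustration of the single-agent side, the sketch below runs a budget-constrained bidder (a special case of the affine constraints) in a truthful second-price auction: it bids value/(1 + mu) and updates the multiplier mu multiplicatively from observed spend against a per-round budget pace. The bid formula, initialization, step size, and update rule are all assumptions for illustration, not the paper's formula.

```python
import math, random

def run_bidder(values, competing_bids, budget, eta=0.05):
    """Each round, bid value/(1 + mu) in a second-price auction; update the
    dual-like multiplier mu multiplicatively from the budget violation."""
    mu = 0.1                                  # assumed initial multiplier
    pace = budget / len(values)               # per-round spend target
    spend = value_won = 0.0
    for v, b in zip(values, competing_bids):
        bid = v / (1.0 + mu)
        cost = b if bid > b else 0.0          # truthful second-price payment
        spend += cost
        value_won += v if bid > b else 0.0
        mu *= math.exp(eta * (cost - pace))   # overspending raises mu, shading bids
        mu = min(max(mu, 1e-4), 1e4)
    return spend, value_won

random.seed(0)
vals = [random.random() for _ in range(2000)]
comp = [random.random() for _ in range(2000)]
spend, won = run_bidder(vals, comp, budget=50.0)
print(f"spend = {spend:.1f}, value won = {won:.1f}")   # spend stays near budget
```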
Biobjective Online Bipartite Matching
Yang Cai
George Pierrakos
Workshop on Internet and Network Economics, Springer (2014), pp. 218-231
Abstract
Motivated by Online Ad allocation when there are multiple conflicting objectives, we introduce and study the problem of Biobjective Online Bipartite Matching, a strict generalization of the standard setting of Karp, Vazirani and Vazirani, in which we are allowed to have edges of two colors and the goal is to find a matching that is simultaneously large and balanced. We study both deterministic and randomized algorithms for this problem; after showing that the single-color upper bounds of 1/2 and 1 − 1/e carry over to our biobjective setting as well, we show that a very natural, albeit hard to analyze, deterministic algorithm achieves a competitive ratio of 0.343. We next show how a natural randomized algorithm matches this ratio through a simpler analysis, and how a clever, and perhaps not immediately obvious, generalization of Ranking can beat the 1/2 bound and achieve a competitive ratio of 0.573, coming close to matching the upper bound of 0.63.
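For reference, here is a compact sketch of the classic Ranking algorithm of Karp, Vazirani and Vazirani, which the paper's randomized algorithm generalizes (the biobjective variant itself is not reproduced here). Each offline vertex draws a random priority once, and each arriving vertex is matched to its highest-priority free neighbor; this is (1 − 1/e)-competitive for unweighted online bipartite matching.

```python
import random

def ranking(offline_vertices, arrivals):
    """KVV Ranking: fix one random permutation of the offline side; match
    each arriving vertex to its highest-ranked free neighbor.
    `arrivals` yields (online_vertex, offline_neighbors) pairs."""
    rank = {u: random.random() for u in offline_vertices}   # random priorities
    matched = {}                                 # offline vertex -> online vertex
    for v, neighbors in arrivals:
        free = [u for u in neighbors if u not in matched]
        if free:
            best = min(free, key=rank.__getitem__)  # highest rank = smallest draw
            matched[best] = v
    return matched

random.seed(1)
arrivals = [("v1", ["u1", "u2"]), ("v2", ["u1"]), ("v3", ["u2", "u3"])]
print(ranking(["u1", "u2", "u3"], arrivals))
```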
Online Selection of Diverse Results
Debmalya Panigrahi
Atish Das Sarma
Proceedings of the 5th ACM International Conference on Web Search and Data Mining (2012), pp. 263-272
Abstract
The phenomenal growth in the volume of easily accessible information via various web-based services has made it essential for service providers to provide users with personalized, representative summaries of such information. Further, online commercial services, including social networking and micro-blogging websites, e-commerce portals, and leisure and entertainment websites, recommend interesting content to users that is simultaneously diverse along many different axes, such as topic and geographic specificity.
The key algorithmic question in all these applications is the generation of a succinct, representative, and relevant summary from a large stream of data coming from a variety of sources. In this paper, we formally model this optimization problem, identify its key structural characteristics, and use these observations to design an extremely scalable and efficient algorithm. We analyze the algorithm using theoretical techniques to show that it always produces a nearly optimal solution. In addition, we perform large-scale experiments on both real-world and synthetically generated datasets, which confirm that our algorithm performs even better than its analytical guarantees in practice, and also outperforms other candidate algorithms for the problem by a wide margin.
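The abstract does not spell out the algorithm, so the following is only a toy illustration of the problem shape, with scoring and eviction choices that are assumptions rather than the paper's method: a bounded summary is maintained over the stream, and a new item can enter only by evicting a less relevant item of its own topic or of an over-represented topic.

```python
from collections import Counter

def stream_select(stream, k):
    """Toy streaming summary: keep at most k items.  Once full, a new item
    may evict a less relevant item of its own topic, or (if its topic is new)
    the least relevant item of an over-represented topic, nudging the summary
    toward topical diversity.  `stream` yields (item, topic, relevance)."""
    summary = []                                 # (item, topic, relevance)
    for item, topic, rel in stream:
        if len(summary) < k:
            summary.append((item, topic, rel))
            continue
        same = [s for s in summary if s[1] == topic]
        if same:
            worst = min(same, key=lambda s: s[2])
            if rel > worst[2]:
                summary[summary.index(worst)] = (item, topic, rel)
        else:
            counts = Counter(s[1] for s in summary)
            dups = [s for s in summary if counts[s[1]] > 1]
            if dups:                             # make room for the new topic
                worst = min(dups, key=lambda s: s[2])
                summary[summary.index(worst)] = (item, topic, rel)
    return summary

data = [("a", "sports", 0.9), ("b", "sports", 0.5), ("c", "news", 0.7),
        ("d", "travel", 0.6), ("e", "sports", 0.8)]
print(stream_select(iter(data), k=3))
# -> [('a', 'sports', 0.9), ('d', 'travel', 0.6), ('c', 'news', 0.7)]
```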
Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations
Chinmay Karande
Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (2011)
Abstract
We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph. The vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem, for which Karp, Vazirani and Vazirani gave an optimal (1 − 1/e)-competitive algorithm in their seminal work [KVV90]. Our main result is an optimal (1 − 1/e)-competitive randomized algorithm for general vertex weights. We use random perturbations of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [KVV90] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [MSVV05] in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of [MSVV05, BJN07], which apply when the bids are much smaller than the budgets.
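A minimal sketch of this perturbation idea follows; the multiplicative factor 1 − e^(x−1) is the one associated with this line of work, but treat the snippet as an illustration under that assumption rather than a faithful reproduction of the paper.

```python
import math, random

def perturbed_greedy(weights, arrivals):
    """Vertex-weighted online matching via random multiplicative
    perturbations: each offline vertex u draws x_u ~ U[0,1] once and its
    weight is scaled by (1 - e^(x_u - 1)); each arriving vertex is matched
    to its free neighbor with the largest perturbed weight.  With equal
    weights this recovers the Ranking algorithm of [KVV90]."""
    x = {u: random.random() for u in weights}
    perturbed = {u: w * (1 - math.exp(x[u] - 1)) for u, w in weights.items()}
    matched = {}                          # offline vertex -> online vertex
    for v, neighbors in arrivals:
        free = [u for u in neighbors if u not in matched]
        if free:
            matched[max(free, key=perturbed.__getitem__)] = v
    return matched

random.seed(2)
w = {"u1": 3.0, "u2": 1.0, "u3": 2.0}
arrivals = [("v1", ["u1", "u2"]), ("v2", ["u1", "u3"]), ("v3", ["u2"])]
print(perturbed_greedy(w, arrivals))
```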
Achieving anonymity via clustering
Tomás Feder
Krishnaram Kenthapadi
Samir Khuller
Rina Panigrahy
Dilys Thomas
ACM Transactions on Algorithms, 6 (2010), 49:1-49:19
Abstract
Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach to de-identifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code. Sweeney proposed the k-anonymity model for privacy, where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for the quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general than k-anonymity, since we have a much larger choice of cluster centers; in many cases, it lets us release considerably more information without compromising privacy. We also provide constant-factor approximation algorithms for computing such a clustering. This is the first set of algorithms for the anonymization problem whose performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an epsilon fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well.
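The paper's contribution is constant-factor approximation algorithms, which are not reproduced here; the toy sketch below only illustrates the publish-cluster-centers idea, using a naive nearest-neighbor grouping that enforces a minimum cluster size r (no approximation guarantee is claimed for it).

```python
def min_size_clusters(points, r):
    """Toy anonymization-by-clustering: greedily group each remaining point
    with its r-1 nearest remaining neighbors and publish only the cluster
    centroids.  Every cluster has at least r points, so each published
    center stands in for at least r individuals."""
    assert len(points) >= r
    unassigned = list(points)
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        unassigned.sort(key=lambda p: (p[0] - seed[0]) ** 2 + (p[1] - seed[1]) ** 2)
        if len(unassigned) - (r - 1) < r:      # too few left for another cluster
            group, unassigned = [seed] + unassigned, []
        else:
            group, unassigned = [seed] + unassigned[:r - 1], unassigned[r - 1:]
        clusters.append(group)
    centers = [tuple(sum(c) / len(g) for c in zip(*g)) for g in clusters]
    return centers, [len(g) for g in clusters]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 5)]
print(min_size_clusters(pts, r=3))   # two centers, cluster sizes [3, 4]
```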
Abstract
In sponsored search, a number of advertising slots are available on a search results page and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) auction being widely used in practice.

There is a rich body of work on bipartite matching markets that builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik. This line of research offers deep insights into the structure of stable outcomes in such markets and into their incentive properties.

In this paper, we model advertising auctions as an assignment model with linear utilities, extended with bidder- and item-specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) auction can be interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences, but our model accommodates a much richer set of bidders and preferences. We prove that in our model the existence of a stable matching is guaranteed and that, under a non-degeneracy assumption, a bidder-optimal stable matching exists as well. We give an algorithm to find such a matching in polynomial time, and use it to design a truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, correctly implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences. Our main technical contributions are the existence of bidder-optimal matchings and the strategyproofness of the resulting mechanism, both proved by induction on the progress of the matching algorithm.
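As a hedged illustration of the underlying assignment-model machinery, and not the paper's mechanism (which additionally handles bidder- and item-specific maximum and minimum prices), here is a toy ascending-price auction in the style of Demange, Gale and Sotomayor: the price of any over-demanded item rises by a small increment until all demands are distinct, approximating the bidder-optimal stable outcome up to the discretization.

```python
def ascending_auction(values, eps=0.01):
    """Toy ascending auction for the unit-demand assignment model: start all
    item prices at 0; each bidder demands a utility-maximizing item; while
    some item is demanded twice, raise its price by eps.  Up to the eps
    discretization, the result approximates the bidder-optimal stable
    (VCG-like) outcome."""
    n, m = len(values), len(values[0])
    prices = [0.0] * m
    while True:
        demanded, over = {}, None           # item -> bidder; over-demanded item
        for i in range(n):
            best = max(range(m), key=lambda j: values[i][j] - prices[j])
            if values[i][best] - prices[best] < 0:
                continue                    # staying out beats every item
            if best in demanded:
                over = best
            else:
                demanded[best] = i
        if over is None:
            return demanded, prices
        prices[over] += eps

vals = [[4.0, 1.0],   # bidder 0's values for items 0 and 1
        [3.0, 2.0]]   # bidder 1's values
assignment, prices = ascending_auction(vals)
print(assignment, [round(p, 2) for p in prices])  # {0: 0, 1: 1}, ~[1.0, 0.0]
```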
Efficiency of (Revenue-)Optimal Mechanisms
Proceedings of the 10th ACM Conference on Electronic Commerce (2009)
Abstract
We compare the expected efficiency of revenue-maximizing (or optimal) mechanisms with that of efficiency-maximizing ones. We show that the efficiency of the revenue-maximizing mechanism for selling a single item with k + log_{e/(e−1)} k + 1 bidders is at least as much as the efficiency of the efficiency-maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a Monotone Hazard Rate distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Θ(log k) extra bidders suffice for the revenue-maximizing mechanism to match the efficiency of the efficiency-maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices. More precisely, they show that the revenue of the efficiency-maximizing mechanism with k + 1 bidders is no less than the revenue of the revenue-maximizing mechanism with k bidders.

We extend our result to the case of selling t identical items and show that 2.2 log k + t·Θ(log log k) extra bidders suffice for the revenue-maximizing mechanism to match the efficiency of the efficiency-maximizing mechanism.

In order to prove our results, we give a classification of Monotone Hazard Rate (MHR) distributions and identify a family of MHR distributions such that, for each class in our classification, there is a member of this family that is point-wise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with Monotone Hazard Rate.
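As a quick numeric illustration of the flavor of this comparison (an illustration only, not the paper's construction): exponential values are MHR, the virtual value for Exp(1) is v − 1, so the Myerson reserve is 1, and one can simulate how many extra bidders the revenue-optimal auction (second price with reserve) needs before its expected welfare matches that of the reserve-free second-price auction.

```python
import random

def sp_welfare(n, reserve, trials=100_000, seed=0):
    """Monte Carlo expected welfare (winner's value, 0 if unsold) of a
    second-price auction with a reserve, for n i.i.d. Exp(1) bidders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        best = max(rng.expovariate(1.0) for _ in range(n))
        if best >= reserve:
            total += best
    return total / trials

k = 5
print(f"efficient (no reserve) auction, {k} bidders: {sp_welfare(k, 0.0):.3f}")
for extra in range(4):
    # Exp(1) is MHR with virtual value v - 1, hence Myerson reserve 1.
    w = sp_welfare(k + extra, reserve=1.0)
    print(f"revenue-optimal auction, {k}+{extra} bidders: {w:.3f}")
```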
Sponsored Search Auctions for Markovian Users
S. Muthukrishnan
Fourth Workshop on Ad Auctions; Workshop on Internet and Network Economics (WINE) (2008)
Abstract
Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. The most popular auction for sponsored search is the "Generalized Second Price" (GSP) auction, where advertisers are assigned to slots in decreasing order of their score, defined as the product of their bid and click-through rate. One of the main advantages of this simple ranking is that the bidding strategy is intuitive: to move up to a more prominent slot on the results page, bid more. This makes it simple for advertisers to strategize. However, this ranking only maximizes efficiency under the assumption that the probability of a user clicking on an ad is independent of the other ads shown on the page. We study a Markovian user model that does not make this assumption. Under this model, the most efficient assignment is no longer a simple ranking function as in GSP. We show that the optimal assignment can be found efficiently (even in near-linear time). As a result of the more sophisticated structure of the optimal assignment, bidding dynamics become more complex: indeed, it is no longer clear that bidding more moves one higher on the page. Our main technical result is that, despite the added complexity of the bidding dynamics, the optimal assignment has the property that ad position is still monotone in bid. Thus, even in this richer user model, our mechanism retains the core bidding dynamics of the GSP auction that make it useful for advertisers.
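To make the ordering claim concrete, the sketch below evaluates expected welfare in a simple rendering of the Markovian model, under the assumption (made here for illustration) that the user scans ads top-down, clicks ad i with probability c_i, and continues past it with probability q_i independently of clicking. In this rendering, a standard adjacent-exchange argument shows that once the set of shown ads is fixed, ordering by the adjusted score c_i·v_i/(1 − q_i) is optimal, while ranking by c_i·v_i alone need not be; the toy numbers exhibit the gap.

```python
from itertools import permutations

def expected_welfare(order, c, q, v):
    """User scans down: clicks ad i w.p. c[i], continues past it w.p. q[i]."""
    reach, total = 1.0, 0.0
    for i in order:
        total += reach * c[i] * v[i]   # value collected if the user clicks here
        reach *= q[i]                  # probability of reaching the next slot
    return total

c = [0.30, 0.10, 0.20]   # click probabilities
q = [0.40, 0.90, 0.70]   # continuation probabilities
v = [1.00, 2.00, 1.50]   # advertiser values per click

by_ecpm = sorted(range(3), key=lambda i: c[i] * v[i], reverse=True)
by_adjusted = sorted(range(3), key=lambda i: c[i] * v[i] / (1 - q[i]), reverse=True)
best = max(permutations(range(3)), key=lambda o: expected_welfare(o, c, q, v))

print("simple c*v ranking :", by_ecpm, round(expected_welfare(by_ecpm, c, q, v), 3))
print("adjusted ranking   :", by_adjusted, round(expected_welfare(by_adjusted, c, q, v), 3))
print("brute-force optimum:", list(best), round(expected_welfare(best, c, q, v), 3))
```

Here the simple ranking yields expected welfare 0.476, while the adjusted ranking attains the brute-force optimum of 0.659.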