Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

Showing 1–15 of 420 publications
    Effective model calibration is a critical and indispensable component in developing Media Mix Models (MMMs). One advantage of Bayesian MMMs lies in their capacity to accommodate information from experiment results and the modeler's domain knowledge about ad effectiveness by setting priors for the model parameters. However, it remains unclear which Bayesian priors should be tuned for calibration purposes, and how. In this paper, we propose a new calibration method through model reparameterization. The reparameterized model includes Return on Ad Spend (ROAS) as a model parameter, enabling straightforward adjustment of its prior distribution to align with either experiment results or the modeler's prior knowledge. The proposed method also helps address several key challenges in combining MMMs with incrementality experiments. We use simulations to demonstrate that our approach can significantly reduce the bias and uncertainty in the resulting posterior ROAS estimates.
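    To make the reparameterization idea concrete, here is a minimal sketch, not the paper's implementation: under an assumed saturating response curve, a prior placed directly on ROAS implies a prior on the usual media coefficient. The lognormal prior, the Hill-type curve, and all names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's model): express the media
# coefficient in terms of ROAS so that a prior from an incrementality
# experiment can be placed on ROAS directly.
import numpy as np

rng = np.random.default_rng(0)

spend = rng.uniform(50, 150, size=104)          # weekly media spend
total_spend = spend.sum()

# Experiment-informed prior on ROAS, e.g. a lift test estimating ROAS ~= 2.0.
roas_draws = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=10_000)

def saturation(x, half_sat=100.0):
    """Simple Hill-type diminishing-returns transform (assumed form)."""
    return x / (x + half_sat)

# With incremental revenue = beta * sum(s(spend)) and ROAS defined as
# incremental revenue / total spend, beta = ROAS * total_spend / sum(s(spend)).
implied_beta = roas_draws * total_spend / saturation(spend).sum()
print("implied prior on beta: mean=%.2f sd=%.2f"
      % (implied_beta.mean(), implied_beta.std()))
```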
    Modeling Recommender Ecosystems: Research Challenges at the Intersection of Mechanism Design, Reinforcement Learning and Generative Models
    Martin Mladenov
    Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24), Vancouver (2024), to appear
    Modern recommender systems lie at the heart of complex ecosystems that couple the behavior of users, content providers, advertisers, and other actors. Despite this, the majority of recommender research, and most practical recommenders of any import, focuses on the local, myopic optimization of the recommendations made to individual users. This comes at a significant cost to the long-term utility that recommenders could generate for their users. We argue that explicitly modeling the incentives and behaviors of all actors in the system, and the interactions among them induced by the recommender's policy, is strictly necessary if one is to maximize the value the system brings to these actors and improve overall ecosystem "health." Doing so requires: optimization over long horizons using techniques such as reinforcement learning; making inevitable tradeoffs among the utility that can be generated for different actors using the methods of social choice; reducing information asymmetry, while accounting for incentives and strategic behavior, using the tools of mechanism design; better modeling of both user and item-provider behaviors by incorporating notions from behavioral economics and psychology; and exploiting recent advances in generative and foundation models to make these mechanisms interpretable and actionable. We propose a conceptual framework that encompasses these elements, and articulate a number of research challenges that emerge at the intersection of these different disciplines.
    Data Exchange Markets via Utility Balancing
    Aditya Bhaskara
    Sreenivas Gollapudi
    Sungjin Im
    Kamesh Munagala
    Govind S. Sankar
    WebConf (2024)
    This paper explores the design of a balanced data-sharing marketplace for entities with heterogeneous datasets and machine learning models that they seek to refine using data from other agents. The goal of the marketplace is to encourage participation in data sharing in the presence of such heterogeneity. Our market design approach for data sharing focuses on interim utility balance, where participants contribute and receive equitable utility from the refinement of their models. We present such a market model, for which we study computational complexity, solution existence, and approximation algorithms for welfare maximization and core stability. Finally, we support our theoretical insights with simulations on a mean estimation task inspired by road traffic delay estimation.
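    As a toy illustration of interim utility balance on a mean estimation task: if each agent's utility from an exchange is the variance reduction of its sample-mean estimate, one can solve for how much data each agent must receive so that all gains are equal. The variance accounting and all parameters below are assumptions for illustration, not the paper's market mechanism.

```python
# Toy "interim utility balance" sketch under assumed Gaussian-style variance
# accounting; not the paper's mechanism.
import numpy as np

sigma2 = 1.0
own = np.array([10, 40, 160])            # samples each agent already has

def utility(n_own, n_received):
    """Variance reduction of the sample-mean estimate from extra data."""
    return sigma2 / n_own - sigma2 / (n_own + n_received)

# Equalize every agent's utility gain at u by solving
# sigma2/n_own - sigma2/(n_own + n_recv) = u for n_recv.
u = 0.002
recv = sigma2 / (sigma2 / own - u) - own
for n, r in zip(own, recv):
    print("agent with %3d samples receives %6.1f to gain utility %.3f"
          % (n, r, utility(n, r)))
```

    Note the balance is in utility, not in raw data volume: the data-rich agent must receive far more samples to realize the same gain.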
    Learning Thresholds with Latent Value and Censored Feedback
    Jiahao Zhang
    Tao Lin
    Weiqiang Zheng
    Zhe Feng
    Yifeng Teng
    Xiaotie Deng
    ICLR (2024)
    In this paper, we investigate the problem of actively learning a threshold in latent space, where the unknown reward $g(\gamma, v)$ depends on the proposed threshold $\gamma$ and latent value $v$, and the reward can be achieved only if the threshold is lower than or equal to the unknown latent value. This problem has broad applications in practical scenarios, e.g., reserve price optimization in online auctions, online task assignment in crowdsourcing, and setting recruiting bars in hiring. We first characterize the query complexity of learning a threshold with expected reward at most $\epsilon$ smaller than the optimum, and prove that the number of queries needed can be infinitely large even when $g(\gamma, v)$ is monotone with respect to both $\gamma$ and $v$. On the positive side, we provide a tight query complexity $\tilde{\Theta}(1/\epsilon^3)$ when $g$ is monotone and the CDF of the value distribution is Lipschitz. Moreover, we show that a tight $\tilde{\Theta}(1/\epsilon^3)$ query complexity can be achieved as long as $g$ satisfies one-sided Lipschitzness, which provides a complete characterization for this problem. Finally, we extend this model to an online learning setting and demonstrate a tight $\Theta(T^{2/3})$ regret bound using continuous-arm bandit techniques and the aforementioned query complexity results.
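    The interaction protocol is easy to state in code. Below is a toy reserve-price version with a naive discretized grid search standing in for the paper's algorithm; the Beta value distribution, the grid, and the per-arm budget are arbitrary assumptions.

```python
# Toy censored-feedback threshold learning (reserve-price flavor): a proposed
# threshold gamma yields reward only when gamma <= the latent value v.
# Naive grid search, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)

def pull(gamma):
    """One round: reward is gamma if the latent value clears the threshold."""
    v = rng.beta(2, 2)                   # latent value, unknown to the learner
    return gamma if gamma <= v else 0.0

grid = np.linspace(0.05, 0.95, 19)       # discretized candidate thresholds
rounds_per_arm = 2_000
estimates = [np.mean([pull(g) for _ in range(rounds_per_arm)]) for g in grid]
best = grid[int(np.argmax(estimates))]
print("estimated best threshold: %.2f" % best)
```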
    User Response in Ad Auctions: An MDP Formulation of Long-term Revenue Optimization
    Yang Cai
    Zhe Feng
    Christopher Liaw
    Grigoris Velegkas
    The Web Conference (2024)
    We propose a new Markov Decision Process (MDP) model for ad auctions to capture the user response to the quality of ads, with the objective of maximizing long-term discounted revenue. By incorporating user response, our model takes into consideration all three parties involved in the auction (advertiser, auctioneer, and user). The state of the user is modeled as a user-specific click-through rate (CTR), with the CTR changing in the next round according to the set of ads shown to the user in the current round. We characterize the optimal mechanism for this MDP as a Myerson auction with a notion of modified virtual value, which relies on the value distribution of the advertiser, the current user state, and the future impact of showing the ad to the user. Leveraging this characterization, we design a sample-efficient and computationally efficient algorithm that outputs an approximately optimal policy and requires only sample access to the true MDP and the value distributions of the bidders. Finally, we propose a simple mechanism built upon second price auctions with personalized reserve prices and show that it achieves a constant-factor approximation to the optimal long-term discounted revenue.
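    A minimal two-state instance of this kind of MDP, with assumed dynamics and payoffs, shows why the myopically best ad can be suboptimal: the higher-paying ad fatigues the user and depresses future CTR, and value iteration picks the quality ad instead.

```python
# Toy MDP in the spirit of the model above; states, dynamics, and payoffs are
# illustrative assumptions, not the paper's construction.
import numpy as np

ctr = np.array([0.10, 0.01])             # user states: engaged, fatigued
per_click = np.array([1.0, 3.0])         # actions: quality ad, high-paying ad
# next_state[a][s]: the quality ad keeps an engaged user engaged; the
# high-paying ad fatigues the user, and fatigue is persistent in this toy.
next_state = {0: np.array([0, 1]), 1: np.array([1, 1])}
gamma = 0.95                             # discount factor

V = np.zeros(2)
for _ in range(500):                     # value iteration
    Q = np.array([[ctr[s] * per_click[a] + gamma * V[next_state[a][s]]
                   for a in (0, 1)] for s in (0, 1)])
    V = Q.max(axis=1)

print("V =", V.round(2))                 # approx [2.0, 0.6]
print("policy =", Q.argmax(axis=1))      # engaged -> quality ad, even though
                                         # its immediate revenue is lower
```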
    Individual Welfare Guarantees in the Autobidding World with Machine-learned Advice
    Negin Golrezaei
    Patrick Jaillet
    Jason Cheuk Nam Liang
    Proceedings of the ACM on Web Conference 2024, 267–275
    Online advertising channels commonly focus on maximizing total advertiser welfare to enhance channel health, and previous literature has studied augmenting ad auctions with machine learning predictions on advertiser values (also known as machine-learned advice) to improve total welfare. Yet such improvements could come at the cost of individual bidders' welfare, and the literature does not shed light on how particular advertiser bidding strategies impact welfare. Motivated by this, we present an analysis of an individual bidder's welfare loss in the autobidding world for auctions with and without machine-learned advice, and also uncover how advertiser strategies relate to such losses. In particular, we demonstrate how ad platforms can utilize ML advice to improve welfare guarantees at both the aggregate and the individual bidder level by setting ML advice as personalized reserve prices when the platform consists of autobidders who maximize value while respecting a return on ad spend (ROAS) constraint. Under parallel VCG auctions with such ML advice-based reserves, we present a worst-case welfare lower-bound guarantee for an individual autobidder, and show that the lower-bound guarantee is positively correlated with ML advice quality as well as the scale of bids induced by the autobidder's bidding strategies. Further, we show that no truthful, possibly randomized, mechanism with anonymous allocations can achieve universally better individual welfare guarantees than VCG in the presence of personalized reserves based on ML advice of equal quality. Moreover, we extend our individual welfare guarantee results to generalized first price (GFP) and generalized second price (GSP) auctions. Finally, we present numerical studies using semi-synthetic data derived from ad auction logs of a search ad platform to showcase improvements in individual welfare when setting personalized reserve prices with ML advice.
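    A single-slot sketch of the reserve idea discussed above, under assumed distributions: ML advice is modeled as a noisy value prediction, each autobidder faces a personalized reserve derived from its own advice, and the winner pays the maximum of the runner-up bid and its own reserve. This illustrates the mechanism shape only; the paper's parallel VCG setting and guarantees are not reproduced here.

```python
# Single-slot second-price auction with ML-advice-based personalized reserves.
# Distributions, the 0.5 reserve factor, and the bid model are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 5
values = rng.uniform(1.0, 10.0, size=n)
advice = values * rng.lognormal(0.0, 0.1, size=n)  # noisy ML value prediction
reserves = 0.5 * advice                            # personalized reserves
bids = values                                      # autobidders with multiplier 1

eligible = np.flatnonzero(bids >= reserves)
if eligible.size:
    winner = eligible[np.argmax(bids[eligible])]
    rivals = bids[eligible[eligible != winner]]
    price = max(rivals.max() if rivals.size else 0.0, reserves[winner])
    print("winner=%d  price=%.2f  welfare=%.2f" % (winner, price, values[winner]))
else:
    print("no bid clears its reserve; slot unsold")
```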
    Efficiency of the Generalized Second-Price Auction for Value Maximizers
    Jieming Mao
    Hanrui Zhang
    Song Zuo
    Proceedings of the ACM on Web Conference 2024, 46–56
    We study the price of anarchy of the generalized second-price auction where bidders are value maximizers (i.e., autobidders). We show that in general the price of anarchy can be as bad as 0. For comparison, the price of anarchy of running VCG is 1/2 in the autobidding world. We further show a fine-grained price of anarchy with respect to the discount factors (i.e., the ratios of click probabilities between lower slots and the highest slot in each auction) in the generalized second-price auction, which highlights the qualitative relation between the smoothness of the discount factors and the efficiency of the generalized second-price auction.
    Non-uniform Bid-scaling and Equilibria for Different Auctions: An Empirical Study
    Jieming Mao
    Yifeng Teng
    Song Zuo
    Proceedings of the ACM on Web Conference 2024, 256–266
    In recent years, the growing adoption of autobidding has motivated the study of auction design with value-maximizing auto-bidders. It is known that under mild assumptions, uniform bid-scaling is an optimal bidding strategy in truthful auctions, e.g., the Vickrey-Clarke-Groves auction (VCG), and the price of anarchy for VCG is 2. However, for other auction formats like the First-Price Auction (FPA) and the Generalized Second-Price auction (GSP), uniform bid-scaling may not be an optimal bidding strategy, and bidders have incentives to deviate to strategies with non-uniform bid-scaling. Moreover, FPA can achieve optimal welfare if restricted to uniform bid-scaling, while its price of anarchy becomes 2 when non-uniform bid-scaling strategies are allowed. All of these price of anarchy results have focused on welfare approximation in worst-case scenarios. To complement the theoretical understanding, we empirically study how different auction formats (FPA, GSP, VCG) with different levels of non-uniform bid-scaling perform in an autobidding world on a synthetic auction dataset. Our empirical findings include:
    • For both uniform and non-uniform bid-scaling, FPA outperforms GSP, and GSP outperforms VCG, in terms of both welfare and profit.
    • A higher level of non-uniform bid-scaling leads to lower welfare in both FPA and GSP, while different levels of non-uniform bid-scaling have no effect in VCG.
    Our methodology of synthetic data generation may be of independent interest.
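    In the same spirit, a bare-bones synthetic experiment can be set up in a few lines: single-slot queries, two autobidders with fixed uniform bid multipliers, and welfare plus revenue tallied under first-price and second-price rules. The value distribution and multipliers are arbitrary assumptions; the paper's simulations (multi-slot GSP, constraint-satisfying bidders, non-uniform scaling) are considerably richer.

```python
# Bare-bones synthetic single-slot auction experiment; all parameters are
# illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(3)
queries = 100_000
values = rng.uniform(0.0, 1.0, size=(queries, 2))  # per-query values
mult = np.array([1.5, 1.1])                        # uniform bid-scaling factors
bids = values * mult

rows = np.arange(queries)
winner = bids.argmax(axis=1)
welfare = values[rows, winner].sum()
opt_welfare = values.max(axis=1).sum()
fpa_revenue = bids[rows, winner].sum()             # winner pays own bid
spa_revenue = bids.min(axis=1).sum()               # winner pays the other bid

# Unequal multipliers misallocate some queries, so the ratio is below 1;
# equal multipliers would make the allocation efficient.
print("welfare / optimal welfare: %.3f" % (welfare / opt_welfare))
print("FPA revenue %.0f vs SPA revenue %.0f" % (fpa_revenue, spa_revenue))
```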
    Background: Physical activity levels worldwide have declined over recent decades, with the average number of daily steps decreasing steadily since 1995. Given that physical inactivity is a major modifiable risk factor for chronic disease and mortality, increasing the level of physical activity is a clear opportunity to improve population health on a broad scale. The current study aims to assess the cost-effectiveness and budget impact of a Fitbit-based intervention among healthy but insufficiently active adults to quantify the potential clinical and economic value for a commercially insured population in the U.S.
    Methods: An economic model was developed to compare physical activity, health outcomes, costs, and quality-adjusted life-years (QALYs) associated with usual care and a Fitbit-based intervention that consists of a consumer wearable device alongside goal setting and feedback features provided in a companion software application. Improvement in physical activity was measured in terms of mean daily step count. The effects of increased daily step count were characterized as reduced short-term healthcare costs and decreased incidence of chronic diseases with corresponding improvement in health utility and reduced disease costs. Published literature, standardized costing resources, and data from a National Institutes of Health-funded research program were utilized. Cost-effectiveness and budget impact analyses were performed for a hypothetical cohort of middle-aged adults.
    Results: The base case cost-effectiveness results found the Fitbit intervention to be dominant (less costly and more effective) compared to usual care. Discounted 15-year incremental costs and QALYs were -$1,257 and 0.011, respectively. In probabilistic analyses, the Fitbit intervention was dominant in 93% of simulations and either dominant or cost-effective (defined as less than $150,000 per QALY gained) in 99.4% of simulations. For budget impact analyses conducted from the perspective of a U.S. commercial payer, the Fitbit intervention was estimated to save approximately $6.5 million over 2 years and $8.5 million over 5 years for a cohort of 8,000 participants. Although the economic analysis results were very robust, the short-term healthcare cost savings were the most uncertain in this population and warrant further research.
    Conclusions: There is abundant evidence documenting the benefits of wearable activity trackers when used to increase physical activity as measured by daily step counts. Our research provides additional health economic evidence supporting implementation of wearable-based interventions to improve population health and offers compelling support for payers to consider including wearable-based physical activity interventions as part of a comprehensive portfolio of preventive health offerings for their insured populations.
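    For readers outside health economics, the base-case numbers above can be read through the standard decision rule: negative incremental cost with positive incremental QALYs is "dominant", so no willingness-to-pay comparison is needed. The snippet below just encodes that rule with the figures quoted in the abstract.

```python
# Standard cost-effectiveness decision rule applied to the abstract's
# base-case figures.
incremental_cost = -1257.0    # USD, 15-year discounted (from the abstract)
incremental_qalys = 0.011     # from the abstract
wtp_threshold = 150_000.0     # USD per QALY, threshold used in the abstract

if incremental_cost <= 0 and incremental_qalys > 0:
    print("dominant: less costly and more effective")
else:
    icer = incremental_cost / incremental_qalys
    verdict = "cost-effective" if icer < wtp_threshold else "not cost-effective"
    print("%s (ICER = %.0f $/QALY)" % (verdict, icer))
```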
    The Colonel Blotto Problem, proposed by Borel in 1921, has served as a widely applicable model of budget-constrained simultaneous winner-take-all competitions in the social sciences. Applications include elections, advertising, R&D, and more. However, the classic Blotto problem and its variants limit the study to competitions over a finite set of discrete battlefields. In this paper, we extend the classical theory to study multiplayer Blotto games over arbitrary measurable battlegrounds, provide an algorithm to efficiently sample equilibria of symmetric "equipartionable" Generalized Blotto games, and characterize the symmetric fair equilibria of the Blotto game over the unit interval.
    Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare Maximization in Ad Auctions
    Boxiang Lyu
    Zhe Feng
    Zachary Robertson
    Sanmi Koyejo
    ICML-23 (2023)
    We study the design of loss functions for click-through rate (CTR) prediction to optimize (social) welfare in ad auctions. Existing works either focus only on CTR predictions without consideration of business objectives (e.g., welfare) in auctions, or assume that the distribution over the participants' expected cost-per-impression (eCPM) is known a priori and use various additional assumptions on the parametric form of the distribution to derive loss functions for predicting CTRs. In this work, we bring the welfare objectives of ad auctions back into CTR prediction and propose a novel weighted rank loss to train the CTR model. Compared with the previous literature, our approach provides a provable guarantee on welfare without any assumptions on the eCPMs' distribution, while also avoiding the intractability of naively applying existing learning-to-rank methods. We further propose a theoretically justifiable technique for calibrating the losses using labels generated from a teacher network, only assuming that the teacher network has bounded $\ell_2$ generalization error. Finally, we demonstrate the practical advantages of the proposed loss on both synthetic and real-world data.
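    A minimal sketch of a weighted pairwise rank loss of the kind described, using an assumed logistic pairwise surrogate; the welfare-derived weighting the paper constructs is not reproduced here, so the uniform weights below are a placeholder.

```python
# Weighted pairwise rank loss sketch (assumed logistic surrogate and
# placeholder weights; not the paper's exact loss).
import numpy as np

def weighted_pairwise_rank_loss(scores, labels, weights):
    """scores: predicted CTR logits; labels: click indicators;
    weights[i, j]: per-pair importance, e.g. bid-derived (assumption here)."""
    loss, count = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if labels[i] > labels[j]:          # i should rank above j
                margin = scores[i] - scores[j]
                loss += weights[i, j] * np.log1p(np.exp(-margin))
                count += 1
    return loss / max(count, 1)

scores = np.array([0.2, 1.5, -0.3])
labels = np.array([1, 0, 0])
weights = np.ones((3, 3))
print("loss: %.4f" % weighted_pairwise_rank_loss(scores, labels, weights))
```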
    Reach and frequency (R&F) is a core lever in the execution of ad campaigns, but it is not widely captured in the marketing mix models (MMMs) being fitted today due to the unavailability of accurate R&F metrics for some traditional media channels. Current practice usually uses impressions aggregated at the regional level as inputs for MMMs, which does not take into account the fact that individuals can be exposed to an advertisement multiple times, and that the impact of an advertisement on an individual can change based on the number of times they are exposed. To address this limitation, we propose an R&F MMM, an extension of Geo-level Bayesian Hierarchical Media Mix Modeling (GBHMMM) that is applicable when R&F data is available for at least one media channel. By incorporating R&F into MMMs, the new methodology is shown to produce more accurate estimates of the impact of marketing on business outcomes, and helps users optimize their campaign execution based on optimal frequency recommendations.
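    The core point, that impression totals lose information once per-person response is nonlinear in frequency, can be seen with a toy frequency-response curve. The Hill-type form and its parameters are illustrative assumptions, not the GBHMMM extension itself.

```python
# Same impression total, different reach/frequency splits, different effect,
# under an assumed concave per-person frequency response.
import numpy as np

def response_per_person(freq, half_sat=2.0, shape=1.5):
    """Diminishing-returns effect of seeing the ad `freq` times."""
    return freq**shape / (freq**shape + half_sat**shape)

impressions = 1_000_000
for reach, freq in [(1_000_000, 1), (500_000, 2), (100_000, 10)]:
    assert reach * freq == impressions
    effect = reach * response_per_person(freq)
    print("reach=%8d freq=%2d -> effect %.0f" % (reach, freq, effect))
```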
    Auctions without commitment in the autobidding world
    Andres Perlroth
    Proceedings of the ACM Web Conference, WWW 2023
    Advertisers in online ad auctions are increasingly using auto-bidding mechanisms to bid into auctions instead of directly bidding their value manually. One of the prominent auto-bidding formats is target cost-per-acquisition (tCPA), which maximizes the volume of conversions subject to a return-on-investment constraint. From an auction theoretic perspective, however, this trend seems to go against foundational results that postulate that for profit-maximizing (aka quasi-linear) bidders, it is optimal to use a classic bidding system like marginal CPA (mCPA) bidding rather than strategies like tCPA. In this paper we rationalize the adoption of such seemingly sub-optimal bidding within the canonical quasi-linear framework. The crux of the argument lies in the notion of commitment. We consider a multi-stage game where first the auctioneer declares the auction rules; then bidders select either the tCPA or mCPA bidding format (and submit bids accordingly); and then, if the auctioneer lacks commitment, it can revisit the rules of the auction (e.g., it may readjust reserve prices depending on the bids submitted by the bidders). Our main result is that so long as a bidder believes that the auctioneer lacks commitment to follow the rules of the declared auction, the bidder will make a higher profit by choosing the tCPA format over the classic mCPA format. We then explore the commitment consequences for the auctioneer. In a simplified version of the model where there is only one bidder, we show that the tCPA subgame admits a credible equilibrium while the mCPA format does not. That is, when the bidder chooses the tCPA format, the auctioneer can credibly implement the auction rules announced at the beginning of the game. We also show that, under some mild conditions, the auctioneer's revenue is larger when the bidder uses the tCPA format rather than mCPA. We further quantify the value for the auctioneer of being able to commit to the declared auction rules.
    Efficiency of Non-Truthful Auctions in Auto-bidding: The Power of Randomization
    Christopher Liaw
    Andres Perlroth
    Proceedings of the ACM Web Conference, WWW 2023
    Auto-bidding is now widely adopted as an interface between advertisers and internet advertising, as it allows advertisers to specify high-level goals, such as maximizing value subject to a value-per-spend constraint. Prior research has mainly focused on auctions that are truthful (such as a second-price auction) because these auctions admit simple (uniform) bidding strategies and are thus simpler to analyze. The main contribution of this paper is to characterize the efficiency across the spectrum of all auctions, including non-truthful auctions for which optimal bidding may be complex. For deterministic auctions, we show a dominance result: any uniform-bidding equilibrium of a second-price auction (SPA) can be mapped to an equilibrium of any other auction, for example a first-price auction (FPA), with identical outcomes. In this sense, SPA with uniform bidding is an instance-wise optimal deterministic auction. Consequently, the price of anarchy (PoA) of any deterministic auction is at least the PoA of SPA with uniform bidding, which is known to be 2. We complement this by showing that the PoA of FPA without uniform bidding is 2. Next, we show, surprisingly, that truthful pricing is not dominant in the randomized setting. There is a randomized version of FPA that achieves a strictly smaller price of anarchy than its truthful counterpart when there are two bidders per query. Furthermore, this randomized FPA achieves the best-known PoA for two bidders, thus showing the power of non-truthfulness when combined with randomization. Finally, we show that no prior-free auction (even randomized and non-truthful) can improve on a PoA bound of 2 when there are a large number of advertisers per auction. These results should be interpreted qualitatively as follows. When the auction pressure is low, randomization and non-truthfulness are beneficial. On the other hand, if the auction pressure is intense, the benefits diminish and it is optimal to implement a second-price auction.
    Learning to Bid in Contextual First Price Auctions
    Ashwinkumar Badanidiyuru Varadaraja
    Guru Prashanth Guruganesh
    Zhe Feng
    Proceedings of the ACM Web Conference, WWW 2023
    In this paper, we investigate the problem of how to bid in repeated contextual first price auctions. We consider a single bidder (learner) who repeatedly bids in first price auctions: at each time $t$, the learner observes a context $x_t \in \mathbb{R}^d$ and decides the bid based on historical information and $x_t$. We assume a structured linear model of the maximum bid of all others, $m_t = \alpha_0 \cdot x_t + z_t$, where $\alpha_0 \in \mathbb{R}^d$ is unknown to the learner and $z_t$ is randomly sampled from a noise distribution $F$ with log-concave density function $f$. We consider both binary feedback (the learner can only observe whether she wins or not) and full information feedback (the learner can observe $m_t$) at the end of each time $t$. For binary feedback, when the noise distribution $F$ is known, we propose a bidding algorithm that uses the maximum likelihood estimation (MLE) method to achieve regret at most $\tilde{O}(\sqrt{\log(d) T})$. Moreover, we generalize this algorithm to the setting with binary feedback where the noise distribution is unknown but belongs to a parametrized family of distributions. For full information feedback with unknown noise distribution, we provide an algorithm that achieves regret at most $\tilde{O}(\sqrt{d T})$. Our approach combines an estimator for log-concave density functions with the MLE method to learn the noise distribution $F$ and the linear weights $\alpha_0$ simultaneously. We also provide a lower bound result showing that any bidding policy in a broad class must achieve regret at least $\Omega(\sqrt{T})$, even when the learner receives full information feedback and $F$ is known.
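    The binary-feedback estimation step has a compact form when $F$ is assumed logistic and known: winning at bid $b_t$ means $b_t > \alpha_0 \cdot x_t + z_t$, so the win probability is $F(b_t - \alpha_0 \cdot x_t)$ and $\alpha_0$ can be recovered by maximum likelihood. The sketch below uses plain gradient ascent with exploratory uniform bids; the paper's algorithm, regret analysis, and general log-concave $F$ are beyond this toy.

```python
# MLE for the linear weights under binary (win/lose) feedback, assuming a
# known logistic noise CDF F. All dimensions and step sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
d, T = 3, 20_000
alpha_true = np.array([0.5, -0.2, 0.8])
X = rng.normal(size=(T, d))                    # contexts x_t
z = rng.logistic(size=T)                       # noise with logistic CDF F
m = X @ alpha_true + z                         # highest competing bid m_t
b = rng.uniform(-2.0, 2.0, size=T)             # exploratory bids
win = (b > m).astype(float)                    # binary feedback only

def F(u):                                      # logistic CDF
    return 1.0 / (1.0 + np.exp(-u))

alpha = np.zeros(d)
for _ in range(300):                           # gradient ascent on the
    p = F(b - X @ alpha)                       # mean log-likelihood
    grad = X.T @ (p - win) / T                 # gradient w.r.t. alpha
    alpha += 1.0 * grad

print("estimate:", alpha.round(2), "true:", alpha_true)
```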