Mohammad Mahdian

Authored Publications
    Preview abstract We study the price of anarchy of the generalized second-price auction where bidders are value maximizers (i.e., autobidders). We show that in general the price of anarchy can be as bad as 0. For comparison, the price of anarchy of running VCG is 1/2 in the autobidding world. We further show a fine-grained price-of-anarchy bound with respect to the discount factors (i.e., the ratios of click probabilities between lower slots and the highest slot in each auction) in the generalized second-price auction, which highlights the qualitative relation between the smoothness of the discount factors and the efficiency of the generalized second-price auction. View details
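    As a quick, hedged illustration of the mechanism being analyzed (not code from the paper), the Python sketch below runs a generalized second-price auction with position discount factors; the bids, discounts, and function name are made up, and the sketch covers only the auction mechanics, not the autobidders' value-maximizing behavior or the price-of-anarchy analysis.

        # Minimal GSP sketch: rank bidders by bid; the i-th highest bidder wins
        # slot i and pays the (i+1)-th highest bid per click. "discounts" are the
        # ratios of each slot's click probability to the top slot's.
        def gsp_auction(bids, discounts):
            order = sorted(range(len(bids)), key=lambda i: -bids[i])
            outcome = []
            for slot, idx in enumerate(order[:len(discounts)]):
                next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
                # (bidder, slot, expected clicks relative to top slot, price per click)
                outcome.append((idx, slot, discounts[slot], next_bid))
            return outcome

        # Two slots whose click probabilities are 1.0 and 0.4 of the top slot.
        print(gsp_auction(bids=[5.0, 3.0, 1.0], discounts=[1.0, 0.4]))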
    Preview abstract In this survey, we summarize recent developments in research fueled by the growing adoption of automated bidding strategies in online advertising. We explore the challenges and opportunities that have arisen as markets embrace autobidding, and cover a range of topics in this area, including bidding algorithms, equilibrium analysis and efficiency of common auction formats, and optimal auction design. View details
    Preview abstract Hierarchical clustering is an unsupervised data analysis method which has been widely used for decades. Despite its popularity, its analytical foundations remained underdeveloped; to address this, Dasgupta recently introduced an optimization viewpoint of hierarchical clustering with pairwise similarity information that spurred a line of work shedding light on old algorithms (e.g., Average-Linkage), but also designing new algorithms. Here, for the maximization dual of Dasgupta’s objective (introduced by Moseley-Wang), we present polynomial-time 42.46% approximation algorithms that use Max-Uncut Bisection as a subroutine. The previous best worst-case approximation factor in polynomial time was 33.6%, improving only slightly over Average-Linkage, which achieves 33.3%. Finally, we complement our positive results by providing APX-hardness (even for 0-1 similarities) under the Small Set Expansion hypothesis. View details
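    For readers unfamiliar with the objective, here is a small, hedged Python sketch (not from the paper) that evaluates the Moseley-Wang objective, i.e., the maximization dual of Dasgupta's cost, on a given binary hierarchy: each pair of leaves contributes its similarity times the number of points outside the subtree rooted at the pair's least common ancestor. The tree encoding and example similarities are made up.

        from itertools import product

        def moseley_wang(tree, sim, n):
            """tree: nested pair-tuples of leaf ids, e.g. ((0, 1), (2, 3)) (binary).
            sim: dict mapping frozenset({i, j}) -> similarity. n: number of leaves."""
            def leaves(t):
                return [t] if not isinstance(t, tuple) else [x for c in t for x in leaves(c)]
            def rec(t):
                if not isinstance(t, tuple):
                    return 0.0
                left, right = t
                ll, rl = leaves(left), leaves(right)
                # Pairs split at this node have their LCA here, with subtree size |ll| + |rl|.
                cross = sum(sim.get(frozenset((i, j)), 0.0) for i, j in product(ll, rl))
                return cross * (n - len(ll) - len(rl)) + rec(left) + rec(right)
            return rec(tree)

        sim = {frozenset((0, 1)): 1.0, frozenset((2, 3)): 1.0, frozenset((1, 2)): 0.2}
        print(moseley_wang(((0, 1), (2, 3)), sim, n=4))  # 1.0*2 + 1.0*2 + 0.2*0 = 4.0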
    Preview abstract As machine learning has become more and more integrated into our businesses and lifestyles, researchers have begun to recognize the necessity of ensuring machine learning systems are fair. Recently, there has been an interest in defining a notion of fairness that mitigates over-representation in traditional clustering. In this paper we extend this notion to hierarchical clustering, where the goal is to recursively partition the data to optimize a certain objective (Dasgupta, 2016). For various natural objectives, we obtain simple, efficient algorithms to find a provably good fair hierarchical clustering. Empirically, we show that our algorithms can find a fair hierarchical clustering with, surprisingly, only a negligible loss in the objective. View details
    Fair Correlation Clustering
    Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020)
    Preview abstract In this paper, we study correlation clustering under fairness constraints. Fair variants of k-median and k-center clustering have been studied recently, and approximation algorithms using a notion called fairlet decomposition have been proposed. We obtain approximation algorithms for fair correlation clustering under several important types of fairness constraints. Our results hinge on obtaining a fairlet decomposition for correlation clustering by introducing a novel combinatorial optimization problem. We define a fairlet decomposition with cost similar to the k-median cost, which allows us to obtain approximation algorithms for a wide range of fairness constraints. We complement our theoretical results with an in-depth analysis of our algorithms on real graphs, where we show that fair solutions to correlation clustering can be obtained with limited increase in cost compared to the state-of-the-art (unfair) algorithms. View details
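    As a hedged aside (not the paper's algorithm), the following Python snippet simply evaluates the standard correlation clustering objective: the number of "+" pairs split across clusters plus the number of "-" pairs kept together. A fairlet decomposition would additionally pre-group the points so that any clustering assembled from whole fairlets satisfies the fairness constraint; the labels and clustering below are made up.

        def correlation_cost(labels, clustering):
            """labels: dict frozenset({u, v}) -> '+' or '-'.
            clustering: dict mapping each point to a cluster id."""
            cost = 0
            for pair, sign in labels.items():
                u, v = tuple(pair)
                same = clustering[u] == clustering[v]
                cost += (sign == '+' and not same) or (sign == '-' and same)
            return cost

        labels = {frozenset((0, 1)): '+', frozenset((1, 2)): '-', frozenset((0, 2)): '+'}
        print(correlation_cost(labels, {0: 'a', 1: 'a', 2: 'b'}))  # 1: the '+' pair (0, 2) is cut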
    Preview abstract Clustering is a fundamental problem in unsupervised machine learning. In many applications, clustering needs to be performed in the presence of additional constraints, such as fairness or diversity constraints. In this paper, we formulate the problem of k-center clustering without over-representation, propose approximation algorithms for it, and establish hardness results. We empirically evaluate our clusterings on real-world datasets and show that fairness can be obtained with limited effect on clustering quality. View details
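    For context only, here is a hedged sketch of the classic greedy 2-approximation for unconstrained k-center (Gonzalez's algorithm), which the constrained problem builds on; it is not the paper's algorithm. An over-representation constraint would additionally cap, for each center, the fraction of assigned points coming from any single group. The points and metric below are made up.

        def greedy_k_center(points, k, dist):
            """Repeatedly add the point farthest from its nearest chosen center."""
            centers = [points[0]]
            while len(centers) < k:
                farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
                centers.append(farthest)
            return centers

        points = [(0, 0), (1, 0), (10, 0), (10, 1), (5, 5)]
        euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        print(greedy_k_center(points, k=2, dist=euclid))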
    Response Prediction for Low-Regret Agents
    Ashwinkumar Badanidiyuru Varadaraja
    Sadra Yazdanbod
    Web and Internet Economics 2019
    Preview abstract Companies like Google and Microsoft run billions of auctions every day to sell advertising opportunities. Any change to the rules of these auctions can have a tremendous effect on the revenue of the company and the welfare of the advertisers and the users. Therefore, any change requires careful evaluation of its potential impacts. Currently, such impacts are often evaluated by running simulations or small controlled experiments. This, however, misses the important factor that the advertisers respond to changes. Our goal is to build a theoretical framework for predicting the actions of an agent (the advertiser) who is optimizing her actions in an uncertain environment. We model this problem using a variant of the multi-armed bandit setting where playing an arm is costly. The cost of each arm changes over time and is publicly observable. The value of playing an arm is drawn stochastically from a static distribution and is observed by the agent but not by us. We, however, observe the actions of the agent. Our main result is that, assuming the agent is playing a strategy with a regret of at most f(T) within the first T rounds, we can learn to play the multi-armed bandit game (without observing the rewards) in such a way that the regret of our selected actions is at most O(k^4 (f(T) + 1) log(T)). View details
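    To make the observation model concrete, here is a hedged toy simulation (not the paper's algorithm): the agent sees public, time-varying costs and private stochastic values, plays some low-regret strategy (epsilon-greedy is used purely as a stand-in), and all we get to observe is the sequence of chosen arms alongside the public costs. All parameter names and distributions are made up.

        import random

        def simulate(T=1000, k=3, eps=0.1, seed=0):
            rng = random.Random(seed)
            mean_values = [rng.uniform(0, 1) for _ in range(k)]   # private to the agent
            value_estimates, counts, observed = [0.0] * k, [0] * k, []
            for t in range(T):
                costs = [0.5 * abs(rng.gauss(0, 0.2)) for _ in range(k)]  # publicly observable
                if rng.random() < eps:                             # explore
                    arm = rng.randrange(k)
                else:                                              # exploit: estimated value minus cost
                    arm = max(range(k), key=lambda a: value_estimates[a] - costs[a])
                value = mean_values[arm] + rng.gauss(0, 0.05)      # seen by the agent, not by us
                counts[arm] += 1
                value_estimates[arm] += (value - value_estimates[arm]) / counts[arm]
                observed.append((arm, costs))                      # all that we observe
            return observed

        actions = simulate()
        print(actions[:3])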
    Incentive-Aware Learning for Large Markets
    Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pp. 1369-1378
    Preview abstract In a typical learning problem, one key step is to use training data to pick one model from a collection of models that optimizes an objective function. In many multi-agent settings, the training data is generated through the actions of the agents, and the model is used to make a decision (e.g., how to sell an item) that affects the agents. An illustrative example of this is the problem of learning the reserve price in an auction. In such cases, the agents have an incentive to influence the training data (e.g., by manipulating their bids in the case of an auction) to game the system and achieve a more favorable outcome. In this paper, we study this incentive-aware learning problem in a general setting and show that it is possible to approximately optimize the objective function under two assumptions: (i) each individual agent is a “small” part of the market; and (ii) there is a cost associated with manipulation. For our illustrative application, this nicely translates to a mechanism for setting approximately optimal reserve prices in auctions where no individual agent has significant market share. For this application, we also show that the second assumption (that manipulations are costly) is not necessary, since we can “perturb” any auction to make it costly for the agents to manipulate. View details
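    As a hedged illustration of the running example (and only that), the sketch below picks a reserve price for a single-item second-price auction by brute force over historical bids; the paper's point is that when no bidder contributes a large share of this data and manipulation is costly (or made costly by perturbing the auction), optimizing directly on reported bids is approximately incentive-aware. The data and helper names are made up.

        def revenue(bids, reserve):
            """Second-price revenue with a reserve: zero if the top bid misses the
            reserve, otherwise max(second-highest bid, reserve)."""
            ranked = sorted(bids, reverse=True)
            if not ranked or ranked[0] < reserve:
                return 0.0
            second = ranked[1] if len(ranked) > 1 else 0.0
            return max(second, reserve)

        def best_reserve(auctions):
            """Try every observed bid as a candidate reserve; keep the best."""
            candidates = sorted({b for bids in auctions for b in bids})
            return max(candidates, key=lambda r: sum(revenue(bids, r) for bids in auctions))

        history = [[1.0, 0.4], [0.9, 0.8], [0.5, 0.2]]
        print(best_reserve(history))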
    Preview abstract In online advertising, advertisers purchase ad placements by participating in a long sequence of repeated auctions. One of the most important features advertising platforms often provide, and advertisers often use, is budget management, which allows advertisers to control their cumulative expenditures. Advertisers typically declare the maximum daily amount they are willing to pay, and the platform adjusts allocations and payments to guarantee that cumulative expenditures do not exceed budgets. There are multiple ways to achieve this goal, and each one, when applied to all budget-constrained advertisers simultaneously, steers the system toward a different equilibrium. While previous research focused on online stochastic optimization techniques or game-theoretic equilibria of such settings, our goal in this paper is to compare the “system equilibria” of a range of budget management strategies in terms of the seller's profit and buyers' utility. In particular, we consider six different budget management strategies including probabilistic throttling, thresholding, bid shading, reserve pricing, and multiplicative boosting. We show these methods admit a system equilibrium in a rather general setting, and prove dominance relations between them in a simplified setting. Our study sheds light on the impact of budget management strategies on the tradeoff between the seller's profit and buyers' utility. Finally, we also empirically compare the system equilibria of these strategies using real ad auction data in sponsored search and randomly generated bids. The empirical study confirms our theoretical findings about the relative performances of budget management strategies. View details
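    To fix intuition for a few of the strategies named above, here is a hedged sketch (parameter names are mine, not the paper's) of how probabilistic throttling, thresholding, and bid shading each transform a budget-constrained bidder's participation or bid in a single auction, with a per-bidder knob the platform would tune so cumulative spend stays within budget.

        import random

        def probabilistic_throttling(bid, theta, rng):
            """Enter the auction only with probability theta; otherwise sit out."""
            return bid if rng.random() < theta else None

        def thresholding(bid, tau):
            """Participate only when the bid is at least the threshold tau."""
            return bid if bid >= tau else None

        def bid_shading(bid, alpha):
            """Scale the bid down by a multiplier alpha in [0, 1]."""
            return alpha * bid

        rng = random.Random(0)
        print(probabilistic_throttling(2.0, theta=0.6, rng=rng),
              thresholding(2.0, tau=1.5),
              bid_shading(2.0, alpha=0.7))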
    Pricing a low-regret seller
    Hoda Heidari
    Sadra Yazdanbod
    Proceedings of the Thirty-Third International Conference on Machine Learning (ICML 2016)
    Preview abstract As the number of ad exchanges has grown, publishers have turned to low-regret learning algorithms to decide which exchange offers the best price for their inventory. This in turn opens the following question for the exchange: how to set prices to attract as many sellers as possible and maximize revenue. In this work we formulate this precisely as a learning problem, and present algorithms showing that simply knowing that the counterparty is using a low-regret algorithm is enough for the exchange to run its own low-regret learning algorithm and find the optimal price. View details
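    As a hedged toy model of the interaction (not the paper's algorithm), the sketch below has the seller choose an exchange each round via multiplicative weights (Hedge), a standard low-regret rule applied to the prices the exchanges pay; an exchange that consistently pays a bit more ends up attracting most of the inventory, which is the leverage the learning exchange exploits when searching for its optimal price. The prices and learning rate are made up.

        import math, random

        def hedge_choice(weights, rng):
            """Sample an index with probability proportional to its weight."""
            total = sum(weights)
            r, acc = rng.random() * total, 0.0
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    return i
            return len(weights) - 1

        def simulate(T=2000, eta=0.1, seed=1):
            rng = random.Random(seed)
            our_price, rival_price = 0.7, 0.6          # per-round payments to the seller
            weights, our_wins = [1.0, 1.0], 0
            for t in range(T):
                choice = hedge_choice(weights, rng)
                our_wins += (choice == 0)
                payoffs = [our_price, rival_price]
                weights = [w * math.exp(eta * p) for w, p in zip(weights, payoffs)]
            return our_wins / T

        print(simulate())  # fraction of rounds the higher-paying exchange wins the inventory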