# Vahab S. Mirrokni

Vahab Mirrokni is a Google Fellow and VP at Google Research, where he leads the algorithms and optimization research groups, including the market algorithms, large-scale graph mining, and large-scale optimization teams. Previously he was a Distinguished Scientist and Senior Research Director at Google. He received his PhD from MIT in 2005 and his B.Sc. from Sharif University of Technology in 2001. He joined Google Research in 2008, after research positions at Microsoft Research, MIT, and Amazon.com. He is a co-winner of best paper awards at KDD, ACM EC, and SODA. His research areas include algorithms, distributed and stochastic optimization, and computational economics. Recently he has been working on various algorithmic problems in machine learning, online optimization and mechanism design, and large-scale graph-based learning. His full publication list, sorted by year, can be found here. He also has an outdated personal homepage.

Authored Publications

Google Publications



Optimal Fully Dynamic k-Center Clustering for Adaptive and Oblivious Adversaries

Monika Henzinger

Andreas Wiese

Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)

Autobidding Auctions in the Presence of User Costs

Hanrui Zhang

Proceedings of the ACM Web Conference 2023, pp. 3428-3435

Although costs are prevalent in ad auctions, few works in auction theory study auction design in the presence of costs in the classic settings. One reason is that most auction designs for the setting without costs generalize directly to the setting with costs when bidders maximize quasi-linear utility.
However, as auto-bidding becomes a major choice for advertisers in online advertising, the difference in the underlying behavior model often leads to different solutions for many well-studied auctions. In the context of ad auctions with costs, VCG achieves optimal welfare with quasi-linear utility-maximizing bidders, but has a welfare approximation guarantee of 0 with value-maximizing bidders, who follow the optimization behind common auto-bidding algorithms.
In this paper, we prove that the welfare approximation guarantee of the VCG auction can be significantly improved by a minimal change: introducing cost multipliers. We note that one can use either one multiplier per auction or one multiplier per bidder, but one global multiplier across all auctions and bidders does not work. Finally, to support our theoretical results, we conduct empirical evaluations using semi-synthetic data derived from real auction data of a major advertising platform.

Approximately Optimal Core Shapes for Tensor Decompositions

Mehrdad Ghadiri

Proceedings of the 40th International Conference on Machine Learning (2023), pp. 11237-11254

This work studies the combinatorial optimization problem of finding an optimal core tensor shape, also called multilinear rank, for a size-constrained Tucker decomposition. We give an algorithm with provable approximation guarantees for its reconstruction error via connections to higher-order singular values. Specifically, we introduce a novel Tucker packing problem, which we prove is NP-hard, and give a polynomial-time approximation scheme based on a reduction to the 2-dimensional knapsack problem with a matroid constraint. We also generalize our techniques to tree tensor network decompositions. We implement our algorithm using an integer programming solver, and show that its solution quality is competitive with (and sometimes better than) the greedy algorithm that uses the true Tucker decomposition loss at each step, while also running up to 1000x faster.
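The core-shape search can be pictured with a simplified greedy heuristic over higher-order singular values (an illustrative sketch under assumed semantics, not the paper's polynomial-time approximation scheme): grow the multilinear rank one mode at a time, adding whichever mode's next singular value captures the most mass while the core stays within the size budget.

```python
import numpy as np

def mode_unfold(t, mode):
    # Matricize the tensor along `mode`.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def greedy_core_shape(t, budget):
    """Greedily grow a Tucker core shape: at each step, increment the mode
    whose next higher-order singular value adds the most captured mass,
    subject to the core having at most `budget` entries."""
    sv = [np.linalg.svd(mode_unfold(t, m), compute_uv=False) for m in range(t.ndim)]
    shape = [1] * t.ndim
    while True:
        best_gain, best_m = 0.0, -1
        for m in range(t.ndim):
            if shape[m] < len(sv[m]):
                new_size = int(np.prod(shape)) // shape[m] * (shape[m] + 1)
                gain = sv[m][shape[m]] ** 2
                if new_size <= budget and gain > best_gain:
                    best_gain, best_m = gain, m
        if best_m < 0:
            return tuple(shape)
        shape[best_m] += 1

rng = np.random.default_rng(0)
t = rng.standard_normal((4, 5, 6))
core = greedy_core_shape(t, budget=24)
print(core)
```

The paper's scheme instead solves a knapsack-style relaxation with provable guarantees; this greedy loop only conveys the trade-off between modes under a shared core-size budget.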

Learning Rate Schedules in the Presence of Distribution Shift

Adel Javanmard

Pratik Worah

Proceedings of the 40th International Conference on Machine Learning (2023), pp. 9523-9546

We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution. We fully characterize the optimal learning rate schedule for online linear regression via a novel analysis with stochastic differential equations. For general convex loss functions, we propose new learning rate schedules that are robust to distribution shift, and we give upper and lower bounds for the regret that only differ by constants. For non-convex loss functions, we define a notion of regret based on the gradient norm of the estimated models and propose a learning schedule that minimizes an upper bound on the total expected regret. Intuitively, one expects changing loss landscapes to require more exploration, and we confirm that optimal learning rate schedules typically increase in the presence of distribution shift. Finally, we provide experiments for high-dimensional regression models and neural networks to illustrate these learning rate schedules and their cumulative regret.
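A toy simulation conveys the intuition (the schedule forms, drift model, and constants here are assumptions for illustration, not the schedules derived in the paper): under distribution shift, a learning rate that decays to zero stops tracking the moving optimum, while a rate bounded away from zero retains some exploration.

```python
import numpy as np

def run_sgd(schedule, steps=2000, drift=0.01, seed=0):
    """Online SGD for 1-D linear regression y = w*x + noise, where the true
    w drifts each step; returns cumulative squared parameter error
    (a crude regret proxy)."""
    rng = np.random.default_rng(seed)
    w_true, w = 1.0, 0.0
    regret = 0.0
    for t in range(1, steps + 1):
        w_true += drift * rng.standard_normal()        # distribution shift
        x = rng.standard_normal()
        y = w_true * x + 0.1 * rng.standard_normal()
        grad = 2 * (w * x - y) * x
        w -= schedule(t) * grad
        regret += (w - w_true) ** 2
    return regret

decaying = lambda t: 1.0 / t             # classic schedule for a fixed distribution
floored = lambda t: max(1.0 / t, 0.05)   # keeps exploring under shift
print(run_sgd(decaying), run_sgd(floored))
```

Comparing the two printed totals for different drift levels illustrates why optimal schedules typically increase in the presence of shift.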

Sequential Attention for Feature Selection

Taisuke Yasuda

Proceedings of the 11th International Conference on Learning Representations (2023)

Feature selection is the problem of selecting a subset of features for a machine learning model that maximizes model quality subject to a budget constraint. For neural networks, prior methods, including those based on L1 regularization, attention, and other techniques, typically select the entire feature subset in one evaluation round, ignoring the residual value of features during selection, i.e., the marginal contribution of a feature given that other features have already been selected. We propose a feature selection algorithm called Sequential Attention that achieves state-of-the-art empirical results for neural networks. This algorithm is based on an efficient one-pass implementation of greedy forward selection and uses attention weights at each step as a proxy for feature importance. We give theoretical insights into our algorithm for linear regression by showing that an adaptation to this setting is equivalent to the classical Orthogonal Matching Pursuit (OMP) algorithm, and thus inherits all of its provable guarantees. Our theoretical and empirical analyses offer new explanations towards the effectiveness of attention and its connections to overparameterization, which may be of independent interest.
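The greedy forward selection that the paper accelerates can be sketched directly (a naive baseline for intuition, not the attention-based one-pass algorithm itself): repeatedly add the feature whose inclusion most reduces the least-squares residual.

```python
import numpy as np

def greedy_forward_selection(X, y, k):
    """Select k features by greedy forward selection for least squares:
    at each step, add the feature that most reduces the residual error.
    This captures the 'residual value' of a feature given those already chosen."""
    n, d = X.shape
    selected = []
    for _ in range(k):
        best_err, best_j = np.inf, -1
        for j in range(d):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.linalg.norm(cols @ w - y) ** 2
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 2] + 0.5 * X[:, 7] + 0.01 * rng.standard_normal(100)
print(greedy_forward_selection(X, y, 2))  # recovers features 2 and 7
```

Sequential Attention replaces the expensive inner refits with attention weights as a proxy for each feature's marginal importance; in the linear-regression setting this greedy scheme is where the OMP connection enters.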

Design and analysis of bipartite experiments under a linear exposure-response model

Christopher Harshaw

Fredrik Savje

David Eisenstat

Proceedings of the 23rd ACM Conference on Economics and Computation (2022), pp. 606

A bipartite experiment consists of one set of units being assigned treatments and another set of units for which we measure outcomes. The two sets of units are connected by a bipartite graph, governing how the treated units can affect the outcome units. In this paper, we consider estimation of the average total treatment effect in the bipartite experimental framework under a linear exposure-response model. We introduce the Exposure Reweighted Linear (ERL) estimator, and show that the estimator is unbiased, consistent and asymptotically normal, provided that the bipartite graph is sufficiently sparse. To facilitate inference, we introduce an unbiased and consistent estimator of the variance of the ERL point estimator. In addition, we introduce a cluster-based design, Exposure-Design, that uses heuristics to increase the precision of the ERL estimator by realizing a desirable exposure distribution. Finally, we demonstrate the application of the described methodology to marketplace experiments using a publicly available Amazon user-item review dataset.

Stars: Tera-Scale Graph Building for Clustering and Learning

Warren Schudy

NeurIPS 2022 (2022)

A fundamental procedure in the analysis of massive datasets is the construction of similarity graphs. Such graphs play a key role in many downstream tasks, including clustering, classification, graph learning, and nearest neighbor search. For these tasks, it is critical to build graphs which are sparse yet still representative of the underlying data. The benefits of sparsity are twofold: firstly, constructing dense graphs is infeasible in practice for large datasets, and secondly, the runtime of downstream tasks is directly controlled by the sparsity of the similarity graph. In this work, we present Stars: a highly scalable method for building extremely sparse graphs via two-hop spanners, which are graphs where similar points are connected by a path of length at most two. Stars can construct two-hop spanners with significantly fewer similarity comparisons, which are a major bottleneck for learning-based models where comparisons are expensive to evaluate. Theoretically, we demonstrate that Stars builds a graph in nearly-linear time, where approximate nearest neighbors are contained within two-hop neighborhoods. To complement our results, we have deployed Stars for multiple datasets, allowing for graph building at the tera-scale, i.e., for graphs with hundreds of billions of nodes and tens of trillions of edges. We evaluate the performance of Stars for clustering and graph learning, and demonstrate 10- to 1000-fold improvements in pairwise similarity comparisons and significant improvements in runtime with negligible quality loss.
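The two-hop idea can be illustrated with a minimal star construction (sampled centers and a brute-force nearest-center scan are assumptions for illustration; the deployed system is far more sophisticated): each point attaches to its nearest sampled center, so any two points sharing a center are joined by a path of length two.

```python
import numpy as np

def star_two_hop_graph(points, num_centers, seed=0):
    """Connect each point to its nearest sampled center, forming stars.
    Points attached to the same center are linked by a two-hop path, using
    only n * num_centers similarity comparisons instead of n^2."""
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = rng.choice(n, size=num_centers, replace=False)
    edges = []
    for i in range(n):
        if i in centers:
            continue  # centers act as hubs; no self-edges
        dists = np.linalg.norm(points[i] - points[centers], axis=1)
        edges.append((i, int(centers[np.argmin(dists)])))
    return set(centers.tolist()), edges

rng = np.random.default_rng(1)
pts = rng.standard_normal((50, 3))
centers, edges = star_two_hop_graph(pts, num_centers=5)
print(len(edges))  # one edge per non-center point
```

The resulting graph has linearly many edges, which is the sparsity property the abstract emphasizes; the paper's contribution is choosing stars so that two-hop neighborhoods also contain approximate nearest neighbors.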

Massively Parallel k-Means Clustering for Perturbation Resilient Instances

Peilin Zhong

International Conference on Machine Learning, ICML 2022 (2022)

We consider $k$-means clustering of $n$ data points in Euclidean space in the Massively Parallel Computation (MPC) model, a computational model which is an abstraction of modern massively parallel computing systems such as MapReduce.
There was a conditional lower bound showing that getting $O(1)$-approximate $k$-means solution for general input points using $o(\log n)$ rounds in the MPC model is impossible.
However, the real-world data points usually have better structures.
One instance of interest is the set of data points which is perturbation resilient [Bilu \& Linial'2010].
In particular, a point set is $\alpha$-perturbation resilient for $k$-means if perturbing pairwise distances by multiplicative factors in the range $[1,\alpha]$ does not change the optimum $k$-means clusters.
We bypass the worst case lower bound by considering the perturbation resilient input points and showing $o(\log n)$ rounds $k$-means clustering algorithms for these instances in the MPC model.
Specifically, we show a fully scalable $(1+\varepsilon)$-approximate $k$-means clustering algorithm for $O(\alpha)$-perturbation resilient instances in the MPC model using $O(\log\log n)$ rounds and $\widetilde{O}_{\varepsilon,d}(n^{1+1/\alpha^2})$ total space.
If the space per machine is sufficiently larger than $k$, i.e., at least $k\cdot n^{\Omega(1)}$, we also develop an optimal $k$-means clustering algorithm for $O(\alpha)$-perturbation resilient instances in the MPC model using $O(\log\log n)$ rounds and $\widetilde{O}_d(n\cdot(n^{1/\alpha^2}+k))$ total space.
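The resilience definition can be checked by brute force on a toy instance (using a discrete-center variant of the $k$-means cost for illustration, since the perturbations act on pairwise distances; this is an assumption for the sketch, not the paper's algorithm): perturb all pairwise distances by factors in $[1,\alpha]$ and confirm the optimal clustering never changes.

```python
import itertools
import numpy as np

def kmeans_cost(D, clusters):
    # Discrete-center k-means cost: each cluster pays the smallest sum of
    # squared distances from one of its own points to the rest.
    return sum(min(sum(D[c, j] ** 2 for j in cl) for c in cl) for cl in clusters)

def best_partition(D, k):
    # Brute-force optimal clustering over all labelings (tiny n only).
    n = D.shape[0]
    best, best_cost = None, float("inf")
    for labels in itertools.product(range(k), repeat=n):
        if len(set(labels)) < k:
            continue
        cls = [tuple(i for i in range(n) if labels[i] == c) for c in range(k)]
        cost = kmeans_cost(D, cls)
        if cost < best_cost:
            best, best_cost = frozenset(cls), cost
    return best

# Two well-separated groups: the optimum should survive perturbations in [1, alpha].
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
D = np.abs(pts - pts.T)
base = best_partition(D, k=2)
rng = np.random.default_rng(0)
alpha = 1.5
stable = True
for _ in range(20):
    P = rng.uniform(1.0, alpha, size=D.shape)
    P = (P + P.T) / 2  # keep the perturbed distances symmetric
    if best_partition(D * P, k=2) != base:
        stable = False
print(base, stable)
```

On this instance the two tight groups stay optimal under every sampled perturbation, which is exactly the structure the paper's fast MPC algorithms exploit.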

Improved Approximations for Euclidean k-means and k-median, via Nested Quasi-Independent Sets

Shyam Narayanan

54th Annual ACM Symposium on Theory of Computing (STOC'22) (2022)

Motivated by data analysis and machine learning applications, we consider the popular high-dimensional Euclidean $k$-median and $k$-means problems.
We propose a new primal-dual algorithm, inspired by the classic algorithm of Jain and Vazirani and the recent algorithm of Ahmadian et al. Our algorithm achieves approximation ratios of 2.40... for Euclidean $k$-median and 5.95... for Euclidean $k$-means, improving upon the 2.60... of Ahmadian et al. and the 6.12... of Grandoni et al.

Near-Optimal Private and Scalable k-Clustering

Shyam Narayanan

NeurIPS 2022 (2022)

We study the differentially private (DP) $k$-means and $k$-median clustering problems of $n$ points in $d$-dimensional Euclidean space in the massively parallel computation (MPC) model. We provide two near-optimal algorithms where the near-optimality is in three aspects: they both achieve (1) $O(1)$ parallel computation rounds, (2) near-linear in $n$ and polynomial in $k$ total computational work (i.e., near-linear running time in the sequential setting), and (3) $O(1)$ relative approximation and $\text{poly}(k, d)$ additive error, where $\Omega(1)$ relative approximation is provably necessary even for any polynomial-time non-private algorithm, and $\Omega(k)$ additive error is a provable lower bound for any polynomial-time DP $k$-means/median algorithm. Our two algorithms provide a tradeoff between the relative approximation and the additive error: the first has $O(1)$ relative approximation and $\sim (k^{2.5} + k^{1.01} \sqrt{d})$ additive error, and the second one achieves $(1+\gamma)$ relative approximation to the optimal non-private algorithm for an arbitrarily small constant $\gamma>0$ and with $\text{poly}(k, d)$ additive error for a larger polynomial dependence on $k$ and $d$.
To achieve our result, we develop a general framework which partitions the data and reduces the DP clustering problem for the entire dataset to the DP clustering problem for each part. To control the blow-up of the additive error introduced by each part, we develop a novel charging argument which might be of independent interest.

Optimal Mechanisms for Value Maximizers with Budget Constraints via Target Clipping

Proceedings of the 23rd ACM Conference on Economics and Computation (2022), pp. 475

We study the design of revenue-maximizing mechanisms for value-maximizing agents with budget constraints. Agents have return-on-spend constraints requiring a minimum amount of value per unit of payment made and budget constraints limiting their total payments. Each agent's only private information is the minimum admissible ratio on the return-on-spend constraint, referred to as the target ratio.
Our work is motivated by internet advertising platforms, where automated bidders are increasingly being adopted by advertisers to purchase advertising opportunities on their behalf. Instead of specifying bids for each keyword, advertisers set high-level goals, such as maximizing clicks, and targets on cost-per-click or return-on-spend, and the platform automatically purchases opportunities by bidding in different auctions. We present a model that abstracts away the complexities of the auto-bidding procurement process and is general enough to accommodate many allocation mechanisms such as auctions, matchings, etc.
We reduce the mechanism design problem when agents have private target ratios to a challenging non-linear optimization problem with monotonicity constraints. We provide a novel decomposition approach to tackle this problem that yields insights into the structure of optimal mechanisms, and we show that surprising features stem from the interaction between budget and return-on-spend constraints. Our optimal mechanism, which we dub the target-clipping mechanism, has an appealing structure: it sets a threshold on the target ratio of each agent; targets above the threshold are allocated efficiently, and targets below are clipped to the threshold.
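The clipping step described above can be stated as a one-line rule (a sketch of that step only, under the assumption that "clipped to the threshold" means raised to it; allocation and payment logic are omitted):

```python
def clip_targets(target_ratios, threshold):
    """Target-clipping rule: targets above the threshold pass through
    (and are allocated efficiently); targets below are raised to the
    threshold. Hypothetical helper for illustration only."""
    return [max(t, threshold) for t in target_ratios]

print(clip_targets([0.5, 1.2, 2.0], threshold=1.0))  # [1.0, 1.2, 2.0]
```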

Scalable Differentially Private Clustering via Hierarchically Separated Trees

Chris Schwiegelshohn

David Saulpic

2022 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2022) (to appear)

We study the private $k$-median and $k$-means clustering problem in $d$-dimensional Euclidean space.
By leveraging tree embeddings, we give an efficient and easy-to-implement algorithm that is empirically competitive with state-of-the-art non-private methods.
We prove that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee. (The dimension term, $d$, can be replaced with $O(\log k)$ using standard dimension reduction techniques.) Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical, runs in near-linear, $\tilde{O}(nkd)$, time and scales to tens of millions of points. We also show that our method is amenable to parallelization in large-scale distributed computing environments. In particular, we show that our private algorithms can be implemented in a logarithmic number of MPC rounds in the sublinear memory regime.
Finally, we complement our theoretical analysis with an empirical evaluation demonstrating the algorithm's efficiency and accuracy in comparison to other private clustering baselines.

Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank

Advances in Neural Information Processing Systems (2022)

Personalized PageRank (PPR) is a fundamental tool in unsupervised learning of graph representations such as node ranking, labeling, and graph embedding. However, while data privacy is one of the most important recent concerns, existing PPR algorithms are not designed to protect user privacy. PPR is highly sensitive to the input graph edges: the difference of only one edge may cause a big change in the PPR vector, potentially leaking private user data.
In this work, we propose an algorithm which outputs an approximate PPR and has provably bounded sensitivity to input edges. In addition, we prove that our algorithm achieves similar accuracy to non-private algorithms when the input graph has large degrees. Our sensitivity-bounded PPR directly implies private algorithms for several graph learning tools, such as differentially private (DP) PPR ranking, DP node classification, and DP node embedding. To complement our theoretical analysis, we also empirically verify the practical performance of our algorithms.
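The edge sensitivity that motivates the work is easy to observe with a textbook power-iteration PPR (a standard computation, not the paper's sensitivity-bounded algorithm; the graph and parameters are illustrative):

```python
import numpy as np

def ppr(A, source, alpha=0.15, iters=200):
    """Personalized PageRank by power iteration: restart to `source` with
    probability alpha, otherwise follow a uniform out-edge."""
    n = A.shape[0]
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)
    e = np.zeros(n)
    e[source] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = alpha * e + (1 - alpha) * (p @ P)
    return p

# Adding a single edge can shift the PPR vector noticeably.
A = np.zeros((4, 4))
A[0, 1] = A[1, 2] = A[2, 3] = 1.0
p_before = ppr(A, source=0)
A[0, 3] = 1.0  # one extra edge 0 -> 3
p_after = ppr(A, source=0)
print(np.abs(p_before - p_after).sum())
```

Even on this four-node path, one added edge reroutes a large share of the random walk's mass, which is precisely the kind of sensitivity a DP algorithm must bound.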

Non-Clairvoyant Dynamic Mechanism Design with Budget Constraints and Beyond

Proceedings of the 22nd ACM Conference on Economics and Computation (2021), pp. 369

We provide a general design framework for dynamic mechanisms under complex environments, coined Lossless History Compression mechanisms. Lossless history compression mechanisms compress the history into a state carrying the least historical information without losing any generality in terms of either revenue or welfare. In particular, the characterization works for almost arbitrary constraints on the outcomes, and any objective function defined on the historical reports, allocations, and the cumulative payments. We then apply our framework to design a non-clairvoyant dynamic mechanism under budget and ex-post individual rationality constraints that is dynamic incentive-compatible and achieves non-trivial revenue performance, even without any knowledge about the future. In particular, our dynamic mechanism obtains a constant approximation to the optimal dynamic mechanism having access to all information in advance. To the best of our knowledge, this is the first dynamic mechanism that achieves a constant approximation and strictly respects dynamic incentive-compatibility and budget constraints without relying on any forecasts of the future.

Revenue-Incentive Tradeoffs in Dynamic Reserve Pricing

International Conference on Machine Learning (2021), pp. 2601-2610

Online advertisements are primarily sold via repeated auctions with reserve prices. In this paper, we study how to set reserves to boost revenue based on the historical bids of strategic buyers, while controlling the impact of such a policy on the incentive compatibility of the repeated auctions. Adopting an incentive compatibility metric which quantifies the incentives to shade bids, we propose a novel class of reserve pricing policies and provide analytical tradeoffs between their revenue performance and bid-shading incentives. The policies are inspired by the exponential mechanism from the literature on differential privacy, but our study uncovers mechanisms with significantly better revenue-incentive tradeoffs than the exponential mechanism in practice. We further empirically evaluate the tradeoffs on synthetic data as well as ad auction data from a major ad exchange to verify and support our theoretical findings.

Non-Excludable Dynamic Mechanism Design

Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM, pp. 1357-1373

Dynamic mechanism design expands the scope of what allocation rules can be implemented and what revenue can be extracted compared to static mechanisms. The case of excludable environments (i.e., where one player can be de-allocated while keeping the allocations of the remaining players intact) is very well understood. The mechanisms in the literature, however, do not extend to non-excludable environments. Two prototypical examples of such environments are: (i) public projects, where all players must have the same allocation; and (ii) non-disposable goods, where each item must be allocated to some player. We show a general mechanism that can asymptotically extract the full surplus in such environments. Moreover, we fully characterize which abstract mechanism design environments allow for full surplus extraction in the limit. Our characterization is based on the geometry of achievable utility sets: a convex set characterizing the expected utility profiles that can be implemented in a static mechanism.

Synthetic Design: An Optimization Approach to Experimental Design with Synthetic Controls

Guido Imbens

Jann Spiess

Khashayar Khosravi

Miles Lubin

35th Conference on Neural Information Processing Systems (NeurIPS 2021) (2021)

We investigate the optimal design of experimental studies that have pre-treatment outcome data available. The average treatment effect is estimated as the difference between the weighted average outcomes of the treated and control units. A number of commonly used approaches fit this formulation, including the difference-in-means estimator and a variety of synthetic-control techniques. We propose several methods for choosing the set of treated units in conjunction with the weights. Observing the NP-hardness of the problem, we introduce a mixed-integer programming formulation which selects both the treatment and control sets and unit weightings. We prove that these proposed approaches lead to qualitatively different experimental units being selected for treatment. We use simulations based on publicly available data from the US Bureau of Labor Statistics that show improvements in terms of mean squared error and statistical power when compared to simple and commonly used alternatives such as randomized trials.

Robust Auction Design in the Auto-bidding World

Advances in Neural Information Processing Systems (2021)

In classic auction theory, reserve prices are known to be effective for improving the auctioneer's revenue against quasi-linear utility-maximizing bidders. The introduction of reserve prices, however, usually does not help improve the total welfare of the auctioneer and the bidders. In this paper, we focus on value-maximizing bidders with return-on-spend constraints, a paradigm that has drawn considerable attention recently as more advertisers adopt auto-bidding algorithms in advertising platforms, and show that the introduction of reserve prices has a novel impact on the market. Namely, by choosing reserve prices appropriately the auctioneer can improve not only the total revenue but also the total welfare. Our results also demonstrate that reserve prices are robust to bidder types, i.e., reserve prices work well for different bidder types, such as value maximizers and utility maximizers, without using bidder type information. We generalize these results for a variety of auction mechanisms such as VCG, GSP, and first-price auctions. Moreover, we show how to combine these results with additive boosts to further improve the welfare of the auction outcomes. Finally, we complement our theoretical observations with an empirical study confirming the effectiveness of these ideas using data from online advertising auctions.

The Landscape of Autobidding Auctions: Value versus Utility Maximization

Proceedings of the 22nd ACM Conference on Economics and Computation (2021), pp. 132-133

Internet advertisers are increasingly adopting automated bidders to buy advertising opportunities. Automated bidders simplify the procurement process by allowing advertisers to specify their goals and then bidding on their behalf in the auctions that are used to sell advertising slots. One popular goal adopted by advertisers is to maximize their clicks (or conversions) subject to a return on spend (RoS) constraint, which imposes that the ratio of total value to total spend is greater than a target ratio specified by the advertisers. The emergence of automated bidders brings into question whether the standard mechanisms used to sell ads are still effective in this new landscape. Thus motivated, in this paper we study the problem of characterizing optimal mechanisms for selling an item to one of multiple agents with return-on-spend constraints when either the values or target ratios are private. We consider two objectives for the agents: value maximization, which is becoming the prevalent objective in advertising markets, and utility maximization, which is the de facto paradigm in economic theory. Our goal is to understand the impact of the agents' private information and their objectives on the seller's revenue, and to determine whether the first-best revenue, which is the optimal revenue without private information, is achievable.

A Data-Driven Metric of Incentive Compatibility

Proceedings of The Web Conference (2020), pp. 1796-1806

An incentive-compatible auction incentivizes buyers to truthfully reveal their private valuations. However, many ad auction mechanisms deployed in practice are not incentive-compatible, such as first-price auctions (for display advertising) and the generalized second-price auction (for search advertising). We introduce a new metric to quantify incentive compatibility in both static and dynamic environments. Our metric is data-driven and can be computed directly through black-box auction simulations without relying on reference mechanisms or complicated optimizations. We provide interpretable characterizations of our metric and prove that it is monotone in auction parameters for several mechanisms used in practice, such as soft floors and dynamic reserve prices. We empirically evaluate our metric on ad auction data from a major ad exchange and a major search engine to demonstrate its broad applicability in practice.
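The black-box flavor of such a metric can be sketched with a tiny simulation (the auction models, multiplier grid, and gain statistic here are assumptions for illustration, not the paper's metric): estimate a bidder's average utility under shaded bids and report the best gain over truthful bidding.

```python
import numpy as np

def avg_utility(value, mult, comp, price_rule):
    # Average utility of bidding value * mult against sampled competing bids.
    bid = value * mult
    win = bid > comp
    pay = price_rule(bid, comp)
    return float(np.mean(np.where(win, value - pay, 0.0)))

def shading_gain(value, comp, price_rule, multipliers):
    """Black-box incentive probe: best average-utility gain achievable by
    shading the bid, relative to bidding truthfully (0 for an IC auction)."""
    truthful = avg_utility(value, 1.0, comp, price_rule)
    return max(avg_utility(value, m, comp, price_rule) for m in multipliers) - truthful

rng = np.random.default_rng(0)
comp = rng.uniform(0, 1, size=5000)      # simulated highest competing bids
mults = np.linspace(0.1, 1.0, 10)
second_price = lambda bid, c: c          # winner pays the competing bid
first_price = lambda bid, c: bid         # winner pays own bid
print(shading_gain(0.8, comp, second_price, mults))  # 0: truthful is optimal
print(shading_gain(0.8, comp, first_price, mults))   # positive: shading pays
```

The probe needs only simulated auction outcomes, mirroring the abstract's point that the metric is computable through black-box auction simulations.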

Parallel Graph Algorithms in Constant Adaptive Rounds: Theory meets Practice

Laxman Dhulipala

Soheil Behnezhad

Warren J Schudy

VLDB 2020

We study fundamental graph problems such as graph connectivity, minimum spanning forest (MSF), and approximate maximum (weight) matching in a distributed setting. In particular, we focus on the Adaptive Massively Parallel Computation (AMPC) model, which is a theoretical model that captures MapReduce-like computation augmented with a distributed hash table.
We show the first AMPC algorithms for all of the studied problems that run in a constant number of rounds and use only O(n^ϵ) space per machine, where 0<ϵ<1. Our results improve both upon the previous results in the AMPC model, as well as the best-known results in the MPC model, which is the theoretical model underpinning many popular distributed computation frameworks, such as MapReduce, Hadoop, Beam, Pregel and Giraph.
Finally, we provide an empirical comparison of the algorithms in the MPC and AMPC models in a fault-tolerant distributed computation environment. We empirically evaluate our algorithms on a set of large real-world graphs and show that our AMPC algorithms can achieve improvements in both running time and round complexity over optimized MPC baselines.

Robust Pricing in Dynamic Mechanism Design

International Conference on Machine Learning (2020), pp. 2494-2503

Motivated by the repeated sale of online ads via auctions, optimal pricing in repeated auctions has attracted a large body of research. While dynamic mechanisms offer powerful techniques to improve on both revenue and efficiency by optimizing auctions across different items, their reliance on exact distributional information of buyers’ valuations (present and future) limits their use in practice. In this paper, we propose robust dynamic mechanism design. We develop a new framework to design dynamic mechanisms that are robust to both estimation errors in value distributions and strategic behavior. We apply the framework in learning environments, leading to the first policy that achieves provably low regret against the optimal dynamic mechanism in contextual auctions, where the dynamic benchmark has full and accurate distributional information.

Non-Clairvoyant Dynamic Mechanism Design

Pingzhong Tang

Econometrica, vol. 88(5) (2020), pp. 1939-1963

We introduce a new family of dynamic mechanisms that restricts sellers from using future distributional knowledge. Since the allocation and pricing of each auction period do not depend on the type distributions of future periods, we call this family of dynamic mechanisms non‐clairvoyant.
We develop a framework (bank account mechanisms) for characterizing, designing, and proving lower bounds for dynamic mechanisms (clairvoyant or non‐clairvoyant). We use the same methods to compare the revenue extraction power of clairvoyant and non‐clairvoyant dynamic mechanisms.

Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity

Proceedings of the 36th International Conference on Machine Learning (2019), pp. 1833-1842

Submodular maximization is a general optimization problem with a wide range of applications in machine learning (e.g., active learning, clustering, and feature selection). In large-scale optimization, the parallel running time of an algorithm is governed by its adaptivity, which measures the number of sequential rounds needed if the algorithm can execute polynomially-many independent oracle queries in parallel. While low adaptivity is ideal, it is not sufficient for an algorithm to be efficient in practice—there are many applications of distributed submodular optimization where the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of submodular maximization. In this paper, we give the first constant-factor approximation algorithm for maximizing a nonmonotone submodular function subject to a cardinality constraint k that runs in O(log(n)) adaptive rounds and makes O(n log(k)) oracle queries in expectation. In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries.

Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity

Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms (2019), pp. 255-273

Submodular optimization generalizes many classic problems in combinatorial optimization and has recently found a wide range of applications in machine learning (e.g., feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds where polynomially-many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of adaptive submodular optimization.
Our main result is a distributed algorithm for maximizing a monotone submodular function with cardinality constraint k that achieves a (1 − 1/e − ε)-approximation in expectation. This algorithm runs in O(log(n)) adaptive rounds and makes O(n) calls to the function evaluation oracle in expectation. The approximation guarantee and query complexity are optimal, and the adaptivity is nearly optimal. Moreover, the number of queries is substantially less than in previous works. We also extend our results to the submodular cover problem to demonstrate the generality of our algorithm and techniques.
View details

Variance Reduction in Bipartite Experiments through Correlation Clustering

Warren Schudy

Thirty-third Conference on Neural Information Processing Systems (2019) (to appear)

Preview abstract
Causal inference in randomized experiments typically assumes that the units of randomization and the units of analysis are one and the same. In some applications, however, these two roles are played by distinct entities linked by a bipartite graph. The key challenge in such bipartite settings is how to avoid interference bias, which would typically arise if we simply randomized the treatment at the level of analysis units. One effective way of minimizing interference bias in standard experiments is through cluster randomization, but this design has not been studied in the bipartite setting where conventional clustering schemes can lead to poorly powered experiments. This paper introduces a novel clustering objective and a corresponding algorithm that partitions a bipartite graph so as to maximize the statistical power of a bipartite experiment on that graph. Whereas previous work relied on balanced partitioning, our formulation suggests the use of a correlation clustering objective. We use a publicly-available graph of Amazon user-item reviews to validate our solution and illustrate how it substantially increases the statistical power in bipartite experiments.
View details

Coresets Meet EDCS: Algorithms for Matching and Vertex Cover on Massive Graphs

Aaron Bernstein

Cliff Stein

Sepehr Assadi

Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM (2019), pp. 1616-1635

Preview abstract
Maximum matching and minimum vertex cover are among the most fundamental graph optimization problems. Recently, randomized composable coresets were introduced as an effective technique for solving these problems in various models of computation on massive graphs. In this technique, one partitions the edges of an input graph randomly into multiple pieces, compresses each piece into a smaller subgraph, namely a coreset, and solves the problem on the union of these coresets to find the final solution. By designing small-size randomized composable coresets, one can obtain efficient algorithms, in a black-box way, in multiple computational models including streaming, distributed communication, and the massively parallel computation (MPC) model.
We develop randomized composable coresets of size Õ(n) that, for any constant ε > 0, give a (3/2 + ε)-approximation to matching and a (3 + ε)-approximation to vertex cover. Our coresets improve upon the previously best approximation ratio of O(1) for matching and O(log n) for vertex cover. Most notably, our result for matching goes beyond a 2-approximation, which is a natural barrier for maximum matching in many models of computation. Our coresets lead to improved algorithms for the simultaneous communication model with randomly partitioned input, the streaming model when the input arrives in a random order, and the MPC model with Õ(n√n) memory per machine and only two MPC rounds.
Furthermore, inspired by the recent work of Czumaj et al. (arXiv 2017), we study algorithms for matching and vertex cover in the MPC model with only Õ(n) memory per machine. Building on our coreset constructions, we develop parallel algorithms that give an O(1)-approximation to both matching and vertex cover in only O(log log n) MPC rounds and Õ(n) memory per machine. We further improve the approximation ratio of our matching algorithm to (1 + ε) for any constant ε > 0. Our results settle multiple open questions posed by Czumaj et al.
A key technical ingredient of our paper is a novel application of edge degree constrained subgraphs (EDCS), which were previously introduced in the context of maintaining matchings in dynamic graphs. At the heart of our proofs are new structural properties of EDCS that identify these subgraphs as sparse certificates for large matchings and small vertex covers that are quite robust to sampling and composition.
View details

Optimal Dynamic Auctions are Virtual Welfare Maximizers

Pingzhong Tang

Proceedings of the AAAI Conference on Artificial Intelligence (2019), pp. 2125-2132

Preview abstract
We are interested in the setting where a seller sells sequentially arriving items, one per period, via a dynamic auction. At the beginning of each period, each buyer draws a private valuation for the item to be sold in that period; this valuation is independent across buyers and periods. The auction can be dynamic in the sense that the auction at period t can be conditioned on the bids in that period and all previous periods, subject to appropriately defined incentive-compatibility and individual-rationality conditions. Perhaps not surprisingly, revenue-optimal dynamic auctions are computationally hard to find, and the existing literature aiming to approximate the optimal auctions is based on solving complex dynamic programs. The structural interpretability of optimal dynamic auctions remains largely open.
In this paper, we show that any optimal dynamic auction is a virtual welfare maximizer subject to some monotone allocation constraints. In particular, the explicit definition of the virtual value function arises naturally from a primal-dual analysis that relaxes the monotone constraints. We further develop an ironing technique that removes the monotone allocation constraints. Quite different from Myerson's ironing approach, our technique is more technically involved due to the interdependence of the virtual value functions across buyers. We nevertheless show that ironing can be done approximately and efficiently, which in turn leads to a fully polynomial-time approximation scheme for the optimal dynamic auction.
View details

Categorical Feature Compression via Submodular Optimization

Lin Chen

International Conference on Machine Learning (2019), pp. 515-523

Preview abstract
In modern machine learning tasks, the presence of categorical features with extremely large vocabularies is a reality. This becomes a bottleneck when using an ML model, which generally grows at least linearly with the vocabulary size, affecting the memory, training and inference costs, as well as overfitting risk. In this work, we seek to compress the vocabulary by maximizing the mutual information between the compressed categorical feature and the target binary labels. We note the relationship of this problem to that of quantization in a discrete memoryless channel, where there exists a quadratic-time algorithm to solve the problem. Unfortunately, such an algorithm does not scale to data sets with massive vocabularies and, in this paper, we develop a distributed quasi-linear O(n log n) algorithm with provable approximation guarantees. We first observe that although entropy is a submodular function, this is not the case for mutual information between a categorical feature and label. To tackle this problem, we define a set function over a different space, which still contains the optimal solution, and prove this function is submodular. We also provide a query oracle to the submodular function that runs in amortized logarithmic time, and is easy to compute in a distributed fashion. Combining these results with a greedy algorithm allows us to achieve a (1-1/e)-approximation in quasi-linear time. Finally, we compare our proposed algorithm to several existing approaches using the large-scale Criteo learning task and demonstrate better performance in retaining mutual information and also verify the learning performance of the compressed vocabulary.
View details

Dynamic Double Auctions: Towards First Best

Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pp. 157-172

Preview abstract
We study the problem of designing dynamic double auctions for two-sided markets in which a platform intermediates the trade between one seller offering independent items to multiple buyers, repeatedly over a finite horizon, when agents have private values. Motivated by online advertising and ride-hailing markets, we seek to design mechanisms satisfying the following properties: no positive transfers, i.e., the platform never asks the seller to make payments, nor are buyers ever paid; and periodic individual rationality, i.e., every agent should derive a non-negative utility from every trade opportunity. We provide mechanisms satisfying these requirements that are asymptotically efficient and budget-balanced with high probability as the number of trading opportunities grows. Moreover, we show that the average expected profit obtained by the platform under these mechanisms asymptotically approaches first-best (the maximum possible welfare generated by the market).
View details

Preferred Deals in General Environments

Proceedings of the 28th International Joint Conference on Artificial Intelligence (2019), pp. 231-237

Preview abstract
A preferred deal is a special contract for selling impressions of display ad inventory. By accepting a deal, a buyer agrees to buy a minimum amount of impressions at a fixed price per impression, and is granted priority access to the impressions before they are sent to an open auction on an ad exchange. We consider the problem of designing preferred deals (inventory, price, quantity) in the presence of general convex constraints, including budget constraints, and propose an approximation algorithm to maximize the revenue obtained from the deals. We then evaluate our algorithm using auction data from a major advertising exchange and our empirical results show that the algorithm achieves around 95% of the optimal revenue.
View details

Cache-aware load balancing of data center applications

Aaron Schild

Ray Yang

Richard Zhuang

Proceedings of the VLDB Endowment, vol. 12 (2019), pp. 709-723

Preview abstract
Our deployment of cache-aware load balancing in the Google web search backend roughly halved cache misses, contributing to a double-digit percentage increase in the throughput of our serving clusters by relieving a bottleneck. This innovation has benefited all production workloads since 2015, serving billions of queries daily.
A load balancer forwards each query to one of several identical serving replicas. The replica pulls each term's postings list into RAM from flash, either locally or over the network. Flash bandwidth is a critical bottleneck, motivating an application-directed RAM cache on each replica. Sending the same term reliably to the same replica would increase the chance it hits cache, and avoid polluting the other replicas' caches. However, most queries contain multiple terms and we have to send the whole query to one replica, so it is not possible to achieve a perfect partitioning of terms to replicas.
We solve this via a voting scheme, whereby the load balancer conducts a weighted vote by the terms in each query, and sends the query to the winning replica. We develop a multi-stage scalable algorithm to learn these weights. We first construct a large-scale term-query graph from logs and apply a distributed balanced graph partitioning algorithm to cluster each term to a preferred replica. This yields a good but simplistic initial voting table, which we then iteratively refine via cache simulation to capture feedback effects.
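The voting scheme above can be sketched as follows. The weight table, replica names, and default-vote rule here are hypothetical stand-ins for the learned weights the paper derives from graph partitioning and cache simulation.

```python
from collections import defaultdict

# Hypothetical learned voting table: term -> (preferred replica, vote weight).
VOTE_WEIGHTS = {
    "weather": ("replica-0", 3.0),
    "forecast": ("replica-0", 2.0),
    "python": ("replica-1", 4.0),
}

def route_query(terms, default="replica-0"):
    """Each term casts a weighted vote; the whole query goes to the winner."""
    ballots = defaultdict(float)
    for t in terms:
        replica, weight = VOTE_WEIGHTS.get(t, (default, 1.0))
        ballots[replica] += weight
    return max(ballots, key=ballots.get)

print(route_query(["weather", "forecast"]))  # both terms vote for replica-0
print(route_query(["weather", "python"]))    # replica-1 outvotes replica-0
```

Because the whole query goes to one replica, a multi-term query can only approximate a clean term-to-replica partition; the vote weights arbitrate those conflicts.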
View details

Randomized Experimental Design via Geographic Clustering

David Rolnick

Amir Najmi

Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2019)

Preview abstract
Web-based services often run randomized experiments to improve their products. A popular way to run these experiments is to use geographical regions as units of experimentation, since this does not require tracking of individual users or browser cookies. Since users may issue queries from multiple geographical locations, georegions cannot be considered independent and interference may be present in the experiment. In this paper, we study this problem, and first present GeoCUTS, a novel algorithm that forms geographical clusters to minimize interference while preserving balance in cluster size. We use a random sample of anonymized traffic from Google Search to form a graph representing user movements, then construct a geographically coherent clustering of the graph. Our main technical contribution is a statistical framework to measure the effectiveness of clusterings. Furthermore, we perform empirical evaluations showing that the performance of GeoCUTS is comparable to hand-crafted geo-regions with respect to both novel and existing metrics.
View details

Scalable diversity maximization via small-size composable core-sets

31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA) (2019)

Preview abstract
Maximizing diversity is a central problem in information re-
trieval and data mining systems with prominent applications
in web search, recommender systems, news aggregators and
product search. In this paper, we study a diversity maxi-
mization problem (a.k.a. maximum dispersion problem) in
which given a set of n objects in a metric space, one wants to
find a subset of k objects with the maximum sum of pairwise
distances.
To solve this problem in a scalable distributed manner, we
apply a novel distributed framework for tackling large-scale
problems known as randomized composable core-sets: divide
the big data set into smaller parts, solve the problem for
each part, combine the solutions from each part, and solve
the problem on the union of these solutions. Our algorithms
improve significantly over the approximation guarantees of
state-of-the-art core-set-based algorithms while using min-
imum possible intermediate output size. In particular, we
present a simple distributed algorithm that achieves an al-
most optimal communication complexity, and moreover, it
asymptotically achieves approximation factor of 1/2 which
is the best possible approximation factor for the global opti-
mization problem under certain complexity theory assump-
tions.
Our algorithms are scalable and practical as shown by
our extensive empirical evaluation with large datasets and
they can be easily adapted to the major distributed comput-
ing systems like MapReduce. Furthermore, we show empir-
ically that, in real-life instances, our algorithms reach close-
to-optimal solutions with approximation factor of > 90%.
This approximation factor is far exceeding the approxima-
tion barrier for the problem and provide useful output.
View details

A Robust Non-Clairvoyant Dynamic Mechanism for Contextual Auctions

Advances in Neural Information Processing Systems (2019), pp. 8657-8667

Preview abstract
Dynamic mechanisms offer powerful techniques to improve on both revenue and efficiency by linking sequential auctions using state information, but these techniques rely on exact distributional information of the buyers’ valuations (present and future), which limits their use in learning settings. In this paper, we consider the problem of contextual auctions where the seller gradually learns a model of the buyer's valuation as a function of the context (e.g., item features) and seeks a pricing policy that optimizes revenue. Building on the concept of a bank account mechanism---a special class of dynamic mechanisms that is known to be revenue-optimal---we develop a non-clairvoyant dynamic mechanism that is robust to both estimation errors in the buyer's value distribution and strategic behavior on the part of the buyer. We then tailor its structure to achieve a policy with provably low regret against a constant approximation of the optimal dynamic mechanism in contextual auctions. Our result substantially improves on previous results that only provide revenue guarantees against static benchmarks.
View details

On-Device Algorithms for Public-Private Data with Absolute Privacy

Proceedings of The Web Conference 2019 (WWW'19) (to appear)

Preview abstract
Motivated by the increasing need to preserve privacy in digital devices, we introduce the on-device public-private model of computation.
Our motivation comes from social-network based recommender systems in which the users want to receive recommendations based on the information available on their devices, as well as the suggestions of their social contacts, without sharing such information or contacts with the central recommendation system.
Our model allows us to solve many algorithmic problems while providing absolute (deterministic) guarantees of the privacy of on-device data and the user's contacts. In fact, we ensure that the private data and private contacts are never revealed to the central system.
Our restrictive model of computation presents several interesting algorithmic challenges because any computation based on private information and contacts must be performed on local devices of limited capabilities. Despite these challenges, under realistic assumptions of inter-device communication, we show several efficient algorithms for fundamental data mining and machine learning problems, ranging from k-means clustering to heavy hitters. We complement this analysis with strong impossibility results for efficient private algorithms without allowing inter-device communication. In our experimental evaluation, we show that our private algorithms provide results almost as accurate as those of the non-private ones while speeding up the on-device computations by orders of magnitude.
View details

Locality-Sensitive Hashing for f-Divergences: Mutual Information Loss and Beyond

Lin Chen

Advances in Neural Information Processing Systems (2019), pp. 10044-10054

Preview abstract
Computing approximate nearest neighbors in high dimensional spaces is a central problem in large-scale data mining with a wide range of applications in machine learning and data science. A popular and effective technique in computing nearest neighbors approximately is the Locality-Sensitive Hashing (LSH) scheme. In this paper, we aim to develop LSH schemes for distance functions that measure the distance between two probability distributions, particularly for f-divergences as well as a generalization to capture mutual information loss.
First, we provide a general framework to design LSH schemes for f-divergence distance functions, and develop LSH schemes for the generalized Jensen-Shannon divergence and triangular discrimination within this framework. We show a two-sided approximation result for approximating the generalized Jensen-Shannon divergence by the Hellinger distance, which may be of independent interest. Next, we show a general method for reducing the problem of designing an LSH scheme for a Kreĭn kernel (which can be expressed as the difference of two positive definite kernels) to the problem of maximum inner product search. We exemplify this method by applying it to the mutual information loss divergence, owing to its several important applications such as model compression.
View details

Dynamic Mechanism Design in the Field

Rita Ren

Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, {ACM}, pp. 1359-1368

Preview abstract
Dynamic mechanisms are a powerful technique for designing revenue-maximizing repeated auctions. Despite their strength, these types of mechanisms have not been widely adopted in practice for several reasons, e.g., their complexity and their sensitivity to the accuracy of predicting buyers' value distributions. In this paper, we aim to address these shortcomings and develop simple dynamic mechanisms that can be implemented efficiently, and provide theoretical guidelines for decreasing the sensitivity of dynamic mechanisms to the prediction accuracy of buyers' value distributions. We prove that our proposed dynamic mechanism is dynamic incentive compatible, introduce a notion of buyers' regret in dynamic mechanisms, and show that our mechanism achieves bounded regret while improving revenue and social welfare compared to a static reserve pricing policy. Finally, we confirm our theoretical analysis via an extensive empirical study of our dynamic auction on real data sets from online advertising. For example, we show our dynamic mechanisms can provide a 17% revenue lift with relative regret less than 0.2%.
View details

Non-Clairvoyant Dynamic Mechanism Design

Pingzhong Tang

Proceedings of the 2018 ACM Conference on Economics and Computation, Ithaca, NY, USA, June 18-22, 2018, ACM, pp. 169

Preview abstract
Despite their better revenue and welfare guarantees for repeated auctions, dynamic mechanisms have not been widely adopted in practice. This is partly due to the complexity of their implementation as well as their unrealistic use of forecasting for future periods. We address these shortcomings and present a new family of dynamic mechanisms that are simple and require no distributional knowledge of future periods.
This paper introduces the concept of non-clairvoyance in dynamic mechanism design, which is a measure-theoretic restriction on the information that the seller is allowed to use. A dynamic mechanism is non-clairvoyant if the allocation and pricing rule at each period does not depend on the type distributions in the future periods.
We develop a framework (bank account mechanisms) for characterizing, designing, and proving lower bounds for dynamic mechanisms (clairvoyant or non-clairvoyant). This framework is used to characterize the revenue extraction power of the non-clairvoyant mechanisms with respect to the mechanisms that are allowed unrestricted use of distributional knowledge.
View details

Proportion allocation: Simple, Distributed, and Diverse Matching with high Entropy

Shipra Agrawal

The 35th International Conference on Machine Learning (ICML 2018)

Preview abstract
Inspired by many applications of bipartite matching in online advertising and machine learning, we study a simple and natural iterative proportional allocation algorithm: maintain a priority score w_a for each node a ∈ A on one side of the bipartition, initialized as w_a = 1. Iteratively allocate each node i ∈ I on the other side to the eligible nodes in A in proportion to their priority scores. After each round, for each node a ∈ A, decrease or increase the score w_a according to whether it is over- or under-allocated. Our first result is that this simple, distributed algorithm converges to a (1 − ε)-approximate fractional b-matching solution in O(log n / ε²) rounds. We also extend the proportional allocation algorithm and convergence results to the maximum weighted matching problem, and show that the algorithm can be naturally tuned to produce a maximum matching with high entropy. High entropy, in turn, implies additional desirable properties of this matching; e.g., it satisfies certain diversity and fairness (a.k.a. anonymity) properties that are desirable in a variety of applications in online advertising and machine learning.
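The iterative proportional allocation described above can be sketched as follows. The impression and advertiser names, the multiplicative step size, and the fixed round count are illustrative choices, not the paper's tuned parameters.

```python
def proportional_allocation(eligible, capacity, rounds=100, step=1.1):
    """eligible: impression -> list of eligible advertisers;
    capacity: advertiser -> capacity b_a. Returns a fractional allocation."""
    priority = {a: 1.0 for a in capacity}
    alloc = {}
    for _ in range(rounds):
        load = {a: 0.0 for a in capacity}
        # Allocate each impression in proportion to current priority scores.
        for i, ads in eligible.items():
            total = sum(priority[a] for a in ads)
            alloc[i] = {a: priority[a] / total for a in ads}
            for a in ads:
                load[a] += alloc[i][a]
        # Raise under-allocated scores, lower over-allocated ones.
        for a in capacity:
            priority[a] *= step if load[a] < capacity[a] else 1.0 / step
    return alloc

eligible = {"i1": ["a1", "a2"], "i2": ["a1"], "i3": ["a2"]}
capacity = {"a1": 1.0, "a2": 2.0}
alloc = proportional_allocation(eligible, capacity)
```

In this toy instance, a1 is initially over-allocated (it gets all of i2 plus half of i1), so its score shrinks each round and i1's mass drifts toward a2 until the loads respect the capacities.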
View details

Optimal Distributed Submodular Optimization via Sketching

Hossein Esfandiari

Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2018), pp. 1138-1147

Preview abstract
As an important special case of submodular optimization problems, coverage problems are a central problem in optimization with a wide range of applications in data mining and machine learning. As we need to handle larger and larger data sets, there is a clear need to develop distributed solutions to these problems. While several results have been developed for distributed coverage maximization, all existing methods have notable limitations; e.g., they achieve either suboptimal approximation guarantees or suboptimal space and memory complexities. Moreover, most previous results for submodular maximization, either explicitly or implicitly, assume value oracle access to the submodular function. Such a value oracle for coverage functions has the following form: given a subfamily of (input) subsets, determine the size of the union of the subsets in this subfamily.
View details

Stochastic bandits robust to adversarial corruptions

Thodoris Lykouris

Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pp. 114-122

Preview abstract
We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models.
We consider a setting with $K$ arms with underlying stochastic distributions such that $\Delta(a)$ is the difference between the mean of arm $a$ and the optimal arm $a^*$. The total corruption $C$ corresponds to the total perturbation that an adversary can add to the stochastic input. Our main result is an algorithm whose regret is bounded by $O\left(\sum_{a \neq a^*} \frac{K \cdot C \log(KT/\delta) + \log T}{\Delta(a)} \cdot \log(KT/\delta)\right)$ with probability $1-\delta$, and whose pseudo-regret is $O\left(\sum_{a \neq a^*} \frac{K \cdot C + \log T}{\Delta(a)} \cdot \log(KT)\right)$.
Notably, our algorithm is agnostic to the total corruption and the bound holds with respect to the total corruption that was added in retrospect. We also provide a lower bound showing that the linear dependency on the corruption level is necessary.
View details

Consistent Hashing with Bounded Loads

Mikkel Thorup

Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (2018), pp. 587-604

Preview abstract
Designing algorithms for balanced allocation of clients to servers in dynamic settings is a challenging problem for a variety of reasons. Both servers and clients may be added and/or removed from the system periodically, and the main objectives of allocation algorithms are the uniformity of the allocation and the number of moves after adding or removing a server or a client. The most popular solution for our dynamic settings is Consistent Hashing. However, the load balancing of consistent hashing is no better than a random assignment of clients to servers, so with n of each, we expect many servers to be overloaded with Θ(log n / log log n) clients. In this paper, with n clients and n servers, we get a guaranteed max-load of 2 while only moving an expected constant number of clients for each update.
We take an arbitrary user-specified balancing parameter c = 1 + ε > 1. With m balls and n bins in the system, we want no load above ⌈cm/n⌉. Meanwhile, we want to bound the expected number of balls that have to be moved when a ball or bin is added or removed. Compared with general lower bounds without capacity constraints, we show that in our algorithm, when a ball or bin is inserted or deleted, the expected number of balls that have to be moved increases only by a multiplicative factor O(1/ε²) for ε ≤ 1 (Theorem 4) and by a factor 1 + O((log c)/c) for ε ≥ 1 (Theorem 3). Technically, the latter bound is the most challenging to prove. It implies that for super-constant c we pay only a negligible cost in extra moves. We also get the same bounds for the simpler problem where, instead of a user-specified balancing parameter, we have a fixed bin capacity C for all bins.
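A minimal sketch of consistent hashing with bounded loads, under the capacity rule ⌈cm/n⌉ described above: hash each client onto a ring of servers and walk clockwise past any server that is already full. The single-point ring (no virtual nodes) and MD5 hash are simplifications for illustration.

```python
import hashlib
import math

def h(key):
    """Hash a key to a point on the ring (a large integer)."""
    return int(hashlib.md5(str(key).encode()).hexdigest(), 16)

def allocate(clients, servers, c=1.25):
    """Assign clients to servers, never exceeding the load bound ceil(c*m/n)."""
    ring = sorted(servers, key=h)                      # servers ordered on the ring
    cap = math.ceil(c * len(clients) / len(servers))   # load bound ceil(c*m/n)
    load = {s: 0 for s in servers}
    assign = {}
    for x in clients:
        # Start at the first server clockwise of h(x) (wrap to 0 if past the end)...
        i = next((j for j, s in enumerate(ring) if h(s) >= h(x)), 0)
        # ...and skip forward while that server is saturated.
        while load[ring[i]] >= cap:
            i = (i + 1) % len(ring)
        assign[x] = ring[i]
        load[ring[i]] += 1
    return assign, cap

clients = list(range(100))
servers = [f"s{k}" for k in range(10)]
assign, cap = allocate(clients, servers, c=1.25)
```

The forwarding step is what distinguishes this from plain consistent hashing: no server ever exceeds the bound, at the cost of occasionally pushing a client one or more steps along the ring.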
View details

Robust Repeated Auctions Under Heterogeneous Buyer Behavior

Shipra Agrawal

Constantinos Daskalakis

Proceedings of the Nineteenth ACM Conference on Economics and Computation, EC '18 (2018)

Preview abstract
We study revenue optimization in a repeated auction between a single seller and a single buyer. Traditionally, the design of repeated auctions requires strong modeling assumptions about the bidder behavior, such as it being myopic, infinite lookahead, or some specific form of learning behavior. Is it possible to design mechanisms which are simultaneously optimal against a multitude of possible buyer behaviors? We answer this question by designing a simple state-based mechanism that is simultaneously approximately optimal against a k-lookahead buyer for all k, a buyer who is a no-regret learner, and a buyer who is a policy-regret learner. Against each type of buyer our mechanism attains a constant fraction of the optimal revenue attainable against that type of buyer. We complement our positive results with almost tight impossibility results, showing that the revenue approximation tradeoffs achieved by our mechanism for different lookahead attitudes are near-optimal.
View details

Incentive-Aware Learning for Large Markets

Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pp. 1369-1378

Preview abstract
In a typical learning problem, one key step is to use training data to pick, from a collection of models, one model that optimizes an objective function. In many multi-agent settings, the training data is generated through the actions of the agents, and the model is used to make a decision (e.g., how to sell an item) that affects the agents. An illustrative example is the problem of learning the reserve price in an auction. In such cases, the agents have an incentive to influence the training data (e.g., by manipulating their bids in the case of an auction) to game the system and achieve a more favorable outcome. In this paper, we study this incentive-aware learning problem in a general setting and show that it is possible to approximately optimize the objective function under two assumptions: (i) each individual agent is a "small" part of the market; and (ii) there is a cost associated with manipulation. For our illustrative application, this translates nicely into a mechanism for setting approximately optimal reserve prices in auctions where no individual agent has a significant market share. For this application, we also show that the second assumption (that manipulations are costly) is not necessary, since we can "perturb" any auction to make it costly for the agents to manipulate.
View details

Online Submodular Welfare Maximization: Greedy Beats 1/2 in Random Order

SIAM Journal on Computing, vol. 47(3) (2018), pp. 1056-1086

Preview abstract
In the submodular welfare maximization (SWM) problem, the input consists of a set of n items, each of which must be allocated to one of m agents. Each agent ell has a valuation function v_ell, where v_ell(S) denotes the welfare obtained by this agent if she receives the set of items S. The functions v_ell are all submodular; as is standard, we assume that they are monotone and v_ell(∅) = 0. The goal is to partition the items into m disjoint subsets S1, S2, . . . , Sm in order to maximize the social welfare, defined as \Sum_ell v_ell(S_ell). A simple greedy algorithm gives a 1/2-approximation to SWM in the offline setting, and this was the best known until Vondr´ak’s recent (1 − 1/e)-approximation algorithm. In this paper, we consider the online version of SWM. Here, items arrive one at a time in an online manner; when an item arrives, the algorithm must make an irrevocable decision about which agent to assign it to before seeing any subsequent items. This problem is motivated by applications to Internet advertising, where user ad impressions must be allocated to advertisers whose value is a submodular function of the set of users/impressions they receive. There are two natural models that differ in the order in which items arrive. In the fully adversarial setting, an adversary can construct an arbitrary/worst-case instance, as well as pick the order in which items arrive in order to minimize the algorithm’s performance. In this setting, the 1/2-competitive greedy algorithm is the best possible. To improve on this, one must weaken the adversary slightly: In the random order model, the adversary can construct a worst-case set of items and valuations but does not control the order in which the items arrive; instead, they are assumed to arrive in a random order. 
The random order model has been well studied for online SWM and various special cases, but the best known competitive ratio (even for several special cases) is 1/2 + 1/n, which is barely better than the ratio for the adversarial order. Obtaining a competitive ratio of 1/2 + Ω(1) for the random order model has been an important open problem for several years. We solve this open problem by demonstrating that the greedy algorithm has a competitive ratio of at least 0.505 for online SWM in the random order model. This is the first result showing a competitive ratio strictly above 1/2 in the random order model, even for special cases such as the weighted matching or budgeted allocation problem (without the so-called large capacity assumptions). For special cases of submodular functions including weighted matching, weighted coverage functions, and a broader class of "second-order supermodular" functions, we provide a different analysis that gives a competitive ratio of 0.51. We analyze the greedy algorithm using a factor-revealing linear program, bounding how the assignment of one item can decrease potential welfare from assigning future items. In addition to our new competitive ratios for online SWM, we make two further contributions: First, we define the classes of second-order modular, supermodular, and submodular functions, which are likely to be of independent interest in submodular optimization. Second, we obtain an improved competitive ratio via a technique we refer to as gain linearizing, which may be useful in other contexts: essentially, we linearize the submodular function by dividing the gain of an optimal solution into gain from individual elements, comparing the algorithm's gain when it assigns an element to the optimal solution's gain from that element, and, crucially, bounding the extent to which assigning elements can affect the potential gain of other elements.
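The greedy algorithm the paper analyzes is easy to state in code. Below is a minimal sketch of online greedy allocation, assuming each valuation is given as a Python set function; the two toy coverage-style valuations and the arrival order are illustrative, not from the paper:

```python
import random

def greedy_online_swm(items, valuations, order=None, seed=0):
    """Assign each arriving item to the agent with the largest marginal gain.

    valuations: one set-function per agent (assumed monotone submodular).
    order: arrival order; if None, a uniformly random order is drawn,
    matching the paper's random-order model.
    """
    if order is None:
        order = list(items)
        random.Random(seed).shuffle(order)
    assignment = {a: set() for a in range(len(valuations))}
    for item in order:
        # Marginal gain of giving `item` to agent a, given a's current bundle.
        def gain(a):
            bundle = assignment[a]
            return valuations[a](bundle | {item}) - valuations[a](bundle)
        best = max(range(len(valuations)), key=gain)
        assignment[best].add(item)
    return assignment

# Toy coverage-style (submodular) valuations for two agents.
v0 = lambda S: len(S & {1, 2, 3})    # agent 0 values items 1-3
v1 = lambda S: 2 * len(S & {3, 4})   # agent 1 values items 3-4, twice as much
alloc = greedy_online_swm([1, 2, 3, 4], [v0, v1], order=[1, 2, 3, 4])
```

On this small instance greedy happens to be optimal; the paper's contribution is bounding its worst-case competitive ratio under a random arrival order.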
View details

A study of compact reserve pricing languages

Hossein Esfandiari

Saeed Seddighin

Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (2017), pp. 363-368

Preview abstract
Motivated by ad auctions, we study the multiplicative reserve price system (MRPS). MRPS is a compact way to set reserve prices on several auctions simultaneously. In this paper we first consider the problem of finding the best way of assigning reserve prices in this system, and show that this problem is NP-hard. Next, by characterizing the properties of an optimum solution, we design an approximation algorithm for this problem. Interestingly, our experiments show that our algorithm achieves 90–98% of the optimum solution that sets the reserve price of each auction independently (i.e., the optimum set of reserve prices). We show that in the adversarial setting we lose at most a logarithmic factor compared to the optimum set of reserve prices.
To show the tightness of our results in the adversarial setting, we show that there is no compact pricing system (i.e., a pricing system that uses O(n^{1−ε}) bits to set n reserve prices) that loses less than a logarithmic factor compared to the optimum set of reserve prices in the worst case. Notice that, interestingly, this hardness result holds for any pricing system, not only MRPS.
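As a concrete (hypothetical) illustration of how a multiplicative reserve pricing system constrains prices, the sketch below represents each auction by a feature tuple, sets its reserve to the product of per-feature multipliers, and brute-forces a tiny multiplier grid. The feature names, bids, and grid are made up, and brute force is only viable at toy scale -- the paper shows the exact problem is NP-hard and gives an approximation algorithm instead:

```python
from functools import reduce
from itertools import product

def mrps_reserves(auction_features, multipliers):
    """Reserve of each auction = product of the multipliers of its features."""
    return [reduce(lambda x, y: x * y, (multipliers[f] for f in feats), 1.0)
            for feats in auction_features]

def revenue(reserves, bids):
    """Posted-price revenue: an auction sells at its reserve if the bid clears it."""
    return sum(r for r, b in zip(reserves, bids) if b >= r)

def best_mrps_by_enumeration(auction_features, features, bids, grid):
    """Brute force over a multiplier grid -- viable only at toy scale."""
    best_rev, best_mult = None, None
    for choice in product(grid, repeat=len(features)):
        mult = dict(zip(features, choice))
        rev = revenue(mrps_reserves(auction_features, mult), bids)
        if best_rev is None or rev > best_rev:
            best_rev, best_mult = rev, mult
    return best_rev, best_mult

# Hypothetical data: three auctions tagged with features "a" and/or "b".
best_rev, best_mult = best_mrps_by_enumeration(
    [("a",), ("b",), ("a", "b")], ["a", "b"], bids=[2, 3, 6], grid=[1, 2, 3])
```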
View details

Almost Optimal Streaming Algorithms for Coverage Problems

Hossein Esfandiari

29th ACM Symposium on Parallelism in Algorithms and Architectures (2017)

Preview abstract
Maximum coverage and minimum set cover problems -- collectively called coverage problems -- have been studied extensively in streaming models. However, previous work not only achieves sub-optimal approximation factors and space complexities, but also studies a restricted set-arrival model that makes an explicit or implicit assumption of oracle access to the sets, ignoring the complexity of reading and storing a whole set at once. In this paper, we address these shortcomings and present algorithms with improved approximation factors and improved space complexity, and prove that our results are almost tight. Moreover, unlike most previous work, our results hold in a more general edge arrival model. More specifically, we present (almost) optimal approximation algorithms for maximum coverage and minimum set cover problems in the streaming model with an (almost) optimal space complexity of Õ(n), i.e., the space is independent of the size of the sets or the size of the ground set of elements. These results not only improve over the best known algorithms for the set arrival model, but also are the first such algorithms for the more powerful edge arrival model. In order to achieve the above results, we introduce a new general sketching technique for coverage functions: this sketching scheme can be applied to convert an α-approximation algorithm for a coverage problem into a (1 − ε)α-approximation algorithm for the same problem in streaming or RAM models. We show the significance of our sketching technique by ruling out the possibility of solving coverage problems via accessing (as a black box) a (1 ± ε)-approximate oracle (e.g., a sketch function) that estimates the coverage function on any subfamily of the sets.
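For context, the classical offline greedy for maximum coverage -- the (1 − 1/e)-approximation baseline that streaming algorithms are measured against -- fits in a few lines. This is the textbook algorithm, not the paper's streaming sketch:

```python
def greedy_max_coverage(sets, k):
    """Pick k sets, each time taking the one covering the most new elements."""
    covered, chosen = set(), []
    remaining = set(range(len(sets)))
    for _ in range(k):
        best = max(remaining, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds anything new
        chosen.append(best)
        covered |= sets[best]
        remaining.discard(best)
    return chosen, covered

chosen, covered = greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5, 6, 7}], k=2)
```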
View details

A Study of Compact Reserve Pricing Languages

Hossein Esfandiari

Saeed Seddighin

AAAI 2017 (2017), pp. 363-368

Preview abstract
Online advertising allows advertisers to implement fine-tuned targeting of users. While such precise targeting leads to more effective advertising, it introduces challenging multidimensional pricing and bidding problems for publishers and advertisers. In this context, advertisers and publishers need to deal with an exponential number of possibilities. As a result, designing efficient and compact multidimensional bidding and pricing systems and algorithms is practically important for online advertising. Compact bidding languages have already been studied in the context of multiplicative bidding. In this paper, we study the compact pricing problem.
View details

Bicriteria Distributed Submodular Maximization in a Few Rounds

Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (2017)

Preview abstract
We study the problem of efficiently optimizing submodular functions under cardinality constraints in a distributed setting. Recently, several distributed algorithms for this problem have been introduced, which either achieve a sub-optimal solution or run in a super-constant number of rounds of computation. Unlike previous work, we aim to design distributed algorithms in multiple rounds with almost optimal approximation guarantees, at the cost of outputting a larger number of elements. Toward this goal, we present a distributed algorithm that, for any ε > 0 and any constant r, outputs a set S of O(rk/ε^(1/r)) items in r rounds, and achieves a (1-ε)-approximation of the value of the optimum set with k items. This is the first distributed algorithm that achieves an approximation factor of (1-ε) running in fewer than log(1/ε) rounds. We also prove a hardness result showing that the output of any (1-ε)-approximation distributed algorithm limited to one distributed round must have at least Ω(k/ε) items. In light of this hardness result, our distributed algorithm in one round, r = 1, is asymptotically tight in terms of the output size. We support the theoretical guarantees with an extensive empirical study of our algorithm showing that achieving almost optimum solutions is indeed possible in a few rounds for large-scale real datasets.
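A one-round toy version of the bicriteria idea can be sketched as follows: shuffle items across machines, run plain greedy on each machine with an enlarged output budget, and return the union. This is only a schematic of the partition-and-merge structure (here with a small coverage function as the submodular objective), not the paper's algorithm or its analysis:

```python
import random

def greedy(items, f, k):
    """Standard greedy for a monotone submodular f under a cardinality constraint."""
    S = []
    for _ in range(k):
        best = max((x for x in items if x not in S),
                   key=lambda x: f(S + [x]) - f(S), default=None)
        if best is None:
            break
        S.append(best)
    return S

def distributed_bicriteria(items, f, k, machines, out_per_machine, seed=0):
    """Shuffle items across machines; each machine outputs a greedy set larger
    than k (the bicriteria blow-up); the union is the final, larger output."""
    items = list(items)
    random.Random(seed).shuffle(items)
    out = []
    for i in range(machines):
        out.extend(greedy(items[i::machines], f, out_per_machine))
    return out

# Toy coverage objective over four candidate sets.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {5, 6}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
out = distributed_bicriteria(range(4), f, k=2, machines=2, out_per_machine=2)
```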
View details

Affinity Clustering: Hierarchical Clustering at Scale

Soheil Behnezhad

Mahsa Derakhshan

MohammadTaghi Hajiaghayi

Raimondas Kiveris

NIPS 2017, pp. 6867-6877

Preview abstract
Graph clustering is a fundamental task in many data-mining and machine-learning pipelines. In particular, identifying good hierarchical clustering structure is at the same time a fundamental and challenging problem for several applications. In many applications, the amount of data to analyze is increasing at an astonishing rate each day. Hence there is a need for new solutions to efficiently compute effective hierarchical clusterings on such huge data.
In this paper, we propose algorithms to address this problem. First, we analyze minimum-spanning-tree-based clustering algorithms and their corresponding hierarchical clusterings. In particular, we consider classic single-linkage clustering based on Kruskal's algorithm and a variation of Borůvka's algorithm that we call affinity clustering, and prove interesting new properties of these clusterings via the concept of certificates. Then we present new algorithms in the MapReduce model and their efficient real-world implementations via Distributed Hash Tables (DHTs). Our MapReduce algorithms also improve upon the previous MapReduce algorithms for finding a minimum spanning tree in graphs. Finally, we show experimentally that our algorithms are scalable for huge data and competitive with state-of-the-art algorithms. In particular, we show that affinity clustering is in practice superior to several state-of-the-art clustering algorithms.
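One round of the Borůvka-style contraction underlying affinity clustering can be sketched as follows: every node picks its minimum-weight incident edge, and the picked edges merge nodes into clusters (here via a small union-find, on a toy weighted graph):

```python
def boruvka_round(n, edges):
    """One affinity-clustering round: every node picks its minimum-weight
    incident edge, and the picked edges merge nodes into clusters."""
    best = {}
    for u, v, w in edges:
        for a, b in ((u, v), (v, u)):
            if a not in best or w < best[a][1]:
                best[a] = (b, w)
    parent = list(range(n))       # union-find to contract the picked edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, (b, _) in best.items():
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return [find(x) for x in range(n)]

# Toy graph: two tight pairs joined by one expensive edge.
labels = boruvka_round(4, [(0, 1, 1), (1, 2, 10), (2, 3, 1)])
```

The cheap edges (0,1) and (2,3) are picked, so the round produces the two clusters {0,1} and {2,3}; repeating rounds on the contracted graph yields the hierarchy.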
View details

Overcommitment in Cloud Services – Bin Packing with Chance Constraints

Maxime Cohen

Phil Keller

ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems 2017

Preview abstract
This paper considers the traditional problem of resource allocation, i.e., scheduling jobs on machines. One recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements and need to be allocated to physical machines. It is often observed that the requested capacities are not fully utilized, hence offering an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can induce a significant cost reduction for the cloud provider, while taking a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by modeling the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and allows us to guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement, and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitting in cloud services.
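The chance-constrained packing rule can be sketched with a first-fit heuristic under a Gaussian model of demand: treating each job's demand as an independent normal with known mean and variance, a machine accepts a job only if mean + z·std stays within capacity. The jobs, capacity, and safety factor z below are illustrative, and first fit is a stand-in for the paper's online algorithms:

```python
import math

def fits(machine, job, capacity, z):
    """Gaussian chance constraint: modeling demands as independent normals,
    accept only if mean + z * std <= capacity (z controls violation risk)."""
    mean = sum(m for m, _ in machine) + job[0]
    var = sum(v for _, v in machine) + job[1]
    return mean + z * math.sqrt(var) <= capacity

def first_fit_chance(jobs, capacity, z):
    """Place each (mean, variance) job on the first machine whose chance
    constraint still holds; open a new machine otherwise."""
    machines = []
    for job in jobs:
        for m in machines:
            if fits(m, job, capacity, z):
                m.append(job)
                break
        else:
            machines.append([job])
    return machines

jobs = [(4, 1), (4, 1), (4, 1)]    # illustrative (mean, variance) demands
cautious = first_fit_chance(jobs, capacity=10, z=2)    # safety margin
aggressive = first_fit_chance(jobs, capacity=10, z=0)  # pack on means only
```

Note how risk pooling shows up: the variance of a machine grows with the sum of variances, so the standard-deviation penalty per job shrinks as more jobs share a machine.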
View details

Dynamic Revenue Sharing

Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA

Preview abstract
Many online platforms act as intermediaries between a seller and a set of buyers. Examples of such settings include online retailers (such as eBay) selling items on behalf of sellers to buyers, or advertising exchanges (such as AdX) selling pageviews on behalf of publishers to advertisers. In such settings, revenue sharing is a central part of running such a marketplace for the intermediary, and fixed-percentage revenue sharing schemes are often used to split the revenue among the platform and the sellers. In particular, such revenue sharing schemes require the platform to (i) take at most a constant fraction α of the revenue from auctions and (ii) pay the seller at least the seller-declared opportunity cost c for each item sold. A straightforward way to satisfy the constraints is to set a reserve price at c/(1 − α) for each item, but this is not optimal for maximizing the intermediary's profit.
While previous studies (by Mirrokni and Gomes, and by Niazadeh et al.) focused on revenue-sharing schemes in static double auctions, in this paper we take advantage of the repeated nature of the auctions and present solutions based on dynamic mechanism design. In particular, we introduce dynamic revenue sharing schemes where we balance the two constraints over different auctions to achieve higher profit. This is directly motivated by the practice of advertising exchanges, where the fixed-percentage revenue share should be met across all auctions rather than in each auction. In this paper, we characterize the optimal revenue sharing scheme that satisfies both constraints in expectation. Finally, we empirically evaluate our revenue sharing scheme on real data.
View details

Scalable Feature Selection via Distributed Diversity Maximization

Sepehr Abbasi Zadeh

Mehrdad Ghadiri

AAAI (2017), pp. 2876-2883

Preview abstract
Feature selection is a fundamental problem in machine learning and data mining. The majority of feature selection algorithms are designed for running on a single machine (centralized setting) and are less applicable to very large datasets. Although there are some distributed methods that tackle this problem, most of them distribute the data horizontally, which is not suitable for datasets with a large number of features and a small number of instances. Thus, in this paper, we introduce a novel vertically distributable feature selection method in order to speed up this process and handle very large datasets in a scalable manner. In general, feature selection methods aim at selecting relevant and non-redundant features (Minimum Redundancy and Maximum Relevance). It is much harder to consider redundancy in a vertically distributed setting than in a centralized setting, since there is no global access to the whole data. To the best of our knowledge, this is the first attempt toward solving the feature selection problem with a vertically distributed filter method that handles redundancy with results consistently comparable to centralized methods. In this paper, we formalize the feature selection problem as a diversity maximization problem by introducing a mutual-information-based metric distance on the features. We show the effectiveness of our method by performing an extensive empirical study. In particular, we show that our distributed method outperforms state-of-the-art centralized feature selection algorithms on a variety of datasets. From a theoretical point of view, we prove that the greedy algorithm used in our method achieves an approximation factor of 1/4 for the diversity maximization problem in a distributed setting with high probability. Furthermore, we improve this to an expected approximation factor of 8/25 using multiplicity in our distribution.
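The greedy step for diversity maximization can be sketched as follows: starting from an arbitrary feature, repeatedly add the feature with the largest total distance to those already selected. The scalar features and absolute-difference metric below are toy stand-ins for the paper's mutual-information-based distance:

```python
def greedy_diverse(points, k, dist):
    """Greedily add the point with the largest total distance to the
    already-selected set (sum-dispersion diversity)."""
    selected = [points[0]]
    rest = list(points[1:])
    while len(selected) < k and rest:
        best = max(rest, key=lambda p: sum(dist(p, q) for q in selected))
        selected.append(best)
        rest.remove(best)
    return selected

# Toy stand-in for the paper's mutual-information-based feature distance.
dist = lambda a, b: abs(a - b)
picked = greedy_diverse([0, 1, 5, 6], k=2, dist=dist)
```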
View details

Dynamic Auctions with Bank Accounts

Pingzhong Tang

Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (2016), pp. 387-393

Preview abstract
Consider a buyer with independent additive valuations for a set of goods, and a seller who is constrained to sell one item at a time in an online fashion. If the seller is constrained to run independent auctions for each item, then he would run Myerson's optimal auction for each item. If the seller is allowed to use the full power of dynamic mechanism design and have the auction for each item depend on the outcome of the previous auctions, he is able to perform much better. The main issues in implementing such strategies in online settings, where items arrive over time, are that the auction might be too complicated or might make too strong assumptions on the buyer's rationality or the seller's commitment over time. This motivates us to explore a restricted family of dynamic auctions that can be implemented in an online fashion and without too much commitment from the seller ahead of time. In particular, we study a family of auctions in which the space of single-shot auctions is augmented with a structure that we call a bank account: a real number for each node that summarizes the history so far. This structure allows the seller to store deficits or surpluses of buyer utility from each individual auction and even them out in the long run. This is akin to enforcing the individual rationality constraint on average rather than per auction. We also study the effect of enforcing a maximum limit on the value to which the bank account may grow, which means that besides the auction being individually rational on average, it is also not far from being individually rational over any given interval. Interestingly, even with these restrictions, we can achieve significantly better revenue and social welfare compared to separate Myerson auctions.
View details

Greedy Column Subset Selection: New Bounds and Distributed Algorithms

Aditya Bhaskara

Jason Altschuler

ICML (2016) (to appear)

Preview abstract
The problem of matrix column subset selection has recently attracted a large body of research, with feature selection serving as one obvious and important application. Among the techniques that have been applied to solve this problem, the greedy algorithm has been shown to be quite effective in practice. However, its theoretical guarantees have not been explored thoroughly, especially in a distributed setting. In this paper, we study the greedy algorithm for the column subset selection problem from a theoretical and empirical perspective and show its effectiveness in a distributed setting. In particular, we provide an improved approximation guarantee for the greedy algorithm, and present the first distributed implementation of this algorithm with provable approximation factors. We use the idea of randomized composable core-sets, developed recently in the context of submodular maximization. Finally, we validate the effectiveness of this distributed algorithm via an empirical study.
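The greedy column subset selection step can be sketched in pure Python: at each round, pick the column whose residual direction (against the span of the already-selected columns, maintained via Gram-Schmidt) captures the most squared mass of the whole matrix. This is a small centralized sketch of the greedy the paper analyzes, not its distributed core-set implementation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_css(columns, k):
    """Greedy column subset selection over a list of column vectors."""
    basis, chosen = [], []
    for _ in range(k):
        best, best_gain, best_dir = None, 0.0, None
        for j, col in enumerate(columns):
            if j in chosen:
                continue
            r = list(col)                     # residual vs. current basis
            for q in basis:
                c = dot(r, q)
                r = [a - c * b for a, b in zip(r, q)]
            n2 = dot(r, r)
            if n2 <= 1e-12:
                continue                      # already in the span
            q = [a / n2 ** 0.5 for a in r]
            # Extra squared mass of ALL columns captured along direction q.
            gain = sum(dot(c2, q) ** 2 for c2 in columns)
            if gain > best_gain:
                best, best_gain, best_dir = j, gain, q
        if best is None:
            break
        chosen.append(best)
        basis.append(best_dir)
    return chosen

cols = [[1, 0], [0, 1], [1, 1]]   # columns of a 2x3 matrix
```

On this toy matrix the diagonal column [1, 1] is chosen first, since it explains part of every column.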
View details

Reservation Exchange Markets for Internet Advertising

Gagan Goel

Stefano Leonardi

Afshin Nikzad

LIPIcs, vol. 55, 142:1-142:13

Preview abstract
The Internet display advertising industry follows two main business models. One model is based on direct deals between publishers and advertisers, where they sign legal contracts containing terms of fulfillment for a future inventory. The second model is a spot market based on auctioning page views in real time on advertising exchange (AdX) platforms such as DoubleClick's Ad Exchange, RightMedia, or AppNexus. These exchanges play the role of intermediaries who sell items (e.g., page views) on behalf of a seller (e.g., a publisher) to buyers (e.g., advertisers) on the opposite side of the market. The computational and economic issues arising in this second model have been extensively investigated in recent times. In this work, we consider a third emerging model called the reservation exchange market. A reservation exchange is a two-sided market between buyer orders for blocks of advertisers' impressions and seller orders for blocks of publishers' page views. The goal is to match seller orders to buyer orders while providing the right incentives to both sides. In this work we first describe the important features of mechanisms for efficient reservation exchange markets. We then address the algorithmic problems of designing revenue sharing schemes to provide a fair division between sellers of the revenue collected from buyers. A major conceptual contribution of this work is in showing that even though both clinching ascending auctions and VCG mechanisms achieve the same outcome from a buyer's perspective, clinching ascending auctions are much more informative than VCG auctions from the perspective of revenue sharing among sellers.
View details

Whole-Page Optimization and Submodular Welfare Maximization with Online Bidders

Nikhil R. Devanur

Zhiyi Huang

ACM Trans. Economics and Comput. 4(3) (2016)

Preview abstract
In the context of online ad serving, display ads may appear on different types of webpages, where each page includes several ad slots and therefore multiple ads can be shown on each page. The set of ads that can be assigned to ad slots of the same page needs to satisfy various prespecified constraints including exclusion constraints, diversity constraints, and the like. Upon arrival of a user, the ad serving system needs to allocate a set of ads to the current webpage respecting these per-page allocation constraints. Previous slot-based settings ignore the important concept of a page and may lead to highly suboptimal results in general. In this article, motivated by these applications in display advertising and inspired by the submodular welfare maximization problem with online bidders, we study a general class of page-based ad allocation problems, present the first (tight) constant-factor approximation algorithms for these problems, and confirm the performance of our algorithms experimentally on real-world datasets.
A key technical ingredient of our results is a novel primal-dual analysis for handling free disposal, which updates dual variables using a “level function” instead of a single level and unifies with previous analyses of related problems. This new analysis method allows us to handle arbitrarily complicated allocation constraints for each page. Our main result is an algorithm that achieves a (1 − 1/e − o(1))-competitive ratio. Moreover, our experiments on real-world datasets show significant improvements of our page-based algorithms compared to the slot-based algorithms.
Finally, we observe that our problem is closely related to the submodular welfare maximization (SWM) problem. In particular, we introduce a variant of the SWM problem with online bidders and show how to solve this problem using our algorithm for whole-page optimization.
View details

Ego-net Community Mining Applied to Friend Suggestion

Ismail Sebe

Ahmed Taei

Sunita Verma

Proceedings of VLDB (2016)

Preview abstract
In this paper, we present a study of the community structure of ego-networks—the graphs representing the connections among the neighbors of a node—for several online social networks. Toward this goal, we design a new technique to efficiently build and cluster all the ego-nets of a graph in parallel (note that even just building the ego-nets efficiently is challenging on large networks). Our experimental findings are quite compelling: at a microscopic level it is easy to detect high-quality communities.
Leveraging this fact, we then develop new features for friend suggestion based on co-occurrences of two nodes in different ego-nets' communities. Our new features can be computed efficiently on very large scale graphs by just analyzing the neighborhood of each node. Furthermore, we prove formally on a stylized model, and show by experimental analysis, that this new similarity measure outperforms the classic local features employed for friend suggestion.
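A minimal sketch of the ego-net co-occurrence feature: build each node's ego-net (the subgraph induced on its neighbors), split it into communities (connected components here, as a cheap stand-in for the paper's clustering step), and count, for each pair of nodes, how many ego-net communities contain both. In the toy graph below, nodes 0 and 3 are not adjacent but co-occur in two ego-net communities, so they are strong friend-suggestion candidates:

```python
from itertools import combinations

def ego_net_communities(adj, node):
    """Communities of `node`'s ego-net: here, connected components of the
    subgraph induced on its neighbors (a cheap stand-in for real clustering)."""
    nbrs = adj[node]
    comps, seen = [], set()
    for s in nbrs:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend((adj[u] & nbrs) - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cooccurrence_feature(adj):
    """For each node pair, count the ego-net communities containing both."""
    score = {}
    for node in adj:
        for comp in ego_net_communities(adj, node):
            for u, v in combinations(sorted(comp), 2):
                score[(u, v)] = score.get((u, v), 0) + 1
    return score

# Two triangles sharing the edge 1-2; nodes 0 and 3 are NOT adjacent.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
score = cooccurrence_feature(adj)
```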
View details

Distributed Balanced Partitioning via Linear Embedding

Ninth ACM International Conference on Web Search and Data Mining (WSDM), ACM (2016), pp. 387-396

Preview abstract
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems: in some cases, a big graph is chopped into pieces that fit on one machine to be processed independently before stitching the results together, leading to certain suboptimality from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communications cost, hence the desire to have small cut size.
We study a distributed balanced partitioning problem where the goal is to partition the vertices of a given graph into k pieces, minimizing the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks, e.g., MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, e.g., via a hierarchical clustering or Hilbert curves. Then we apply four different techniques such as local swaps, minimum cuts on partition boundaries, as well as contraction and dynamic programming.
Our empirical study compares the above techniques with each other, and to previous work in distributed algorithms, e.g., a label propagation method [34], FENNEL [32] and Spinner [23]. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: we notice, e.g., 15-25% reduction in cut size over [34]. We also observe that our algorithms allow for scalable distributed implementation for any number of partitions. Finally, we apply our techniques for the Google Maps Driving Directions to minimize the number of multi-shard queries with the goal of saving in CPU usage. During live experiments, we observe an ≈ 40% drop in the number of multi-shard queries when comparing our method with a standard geography-based method.
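The pipeline's shape can be sketched as: embed the nodes on a line, then cut the line into k equal contiguous chunks. The sketch below uses a plain BFS order as a cheap stand-in for the paper's hierarchical-clustering or Hilbert-curve embeddings, and skips the local-improvement steps (swaps, min cuts, contraction, dynamic programming):

```python
from collections import deque

def balanced_partition(adj, k, start):
    """Embed nodes on a line (BFS order here), then cut into k equal chunks."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u]):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    size = -(-len(order) // k)    # ceiling division
    return [order[i * size:(i + 1) * size] for i in range(k)]

def cut_size(adj, parts):
    """Number of edges whose endpoints land in different parts."""
    label = {u: i for i, part in enumerate(parts) for u in part}
    return sum(1 for u in adj for v in adj[u] if label[u] < label[v])

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # a 4-node path graph
parts = balanced_partition(path, k=2, start=0)
```

On a path graph the linear embedding is exact, so the contiguous cut is optimal; the paper's refinements matter on graphs where no single linear order is this clean.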
View details

Automated Decomposition of Build Targets

Mohsen Vakilian

J. David Morgenthaler

Proceedings of the 37th International Conference on Software Engineering, IEEE Computer Society (2015), pp. 123-133

Preview abstract
A (build) target specifies the information that is needed to automatically build a software artifact. This paper focuses on underutilized targets—an important dependency problem that we identified at Google. An underutilized target is one with files not needed by some of its dependents. Underutilized targets result in less modular code, overly large artifacts, slow builds, and unnecessary build and test triggers. To mitigate these problems, programmers decompose underutilized targets into smaller targets. However, manually decomposing a target is tedious and error-prone. Although we prove that finding the best target decomposition is NP-hard, we introduce a greedy algorithm that proposes a decomposition through iterative unification of the strongly connected components of the target. Our tool found that 19,994 of 40,000 Java library targets at Google can be decomposed to at least two targets. The results show that our tool is (1) efficient because it analyzes a target in two minutes on average and (2) effective because for each of 1,010 targets, it would save at least 50% of the total execution time of the tests triggered by the target.
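The building block of the greedy decomposition -- the strongly connected components of the target's dependency graph, which the algorithm iteratively unifies -- can be computed with a standard Kosaraju pass. The sketch below is the classic algorithm on a toy graph, not Google's tool:

```python
def sccs(graph):
    """Kosaraju's algorithm: DFS postorder on the graph, then DFS on the
    reversed graph in reverse postorder; each second-pass tree is one SCC."""
    seen, order = set(), []
    def dfs(u, g, out):
        seen.add(u)
        stack = [(u, iter(g[u]))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(g[v])))
                    break
            else:
                stack.pop()
                out.append(node)
    for u in graph:
        if u not in seen:
            dfs(u, graph, order)
    rev = {u: set() for u in graph}
    for u in graph:
        for v in graph[u]:
            rev[v].add(u)
    seen, comps = set(), []
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs(u, rev, comp)
            comps.append(sorted(comp))
    return comps

# Toy file-dependency graph with two cycles: {a, b} and {c, d}.
deps = {"a": {"b"}, "b": {"a", "c"}, "c": {"d"}, "d": {"c"}}
```

Files inside one SCC must stay in the same target; the decomposition question is how to group the SCCs.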
View details

Decentralized utilitarian mechanisms for scheduling games

Richard Cole

José R. Correa

Vasilis Gkatzelis

Neil Olver

Games and Economic Behavior, vol. 92 (2015), pp. 306-326

Preview abstract
Game Theory and Mechanism Design are by now standard tools for studying and designing massive decentralized systems. Unfortunately, designing mechanisms that induce socially efficient outcomes often requires full information and prohibitively large computational resources. In this work we study simple mechanisms that require only local information. Specifically, in the setting of a classic scheduling problem, we demonstrate local mechanisms that induce outcomes with social cost close to that of the socially optimal solution. Somewhat counter-intuitively, we find that mechanisms yielding Pareto dominated outcomes may in fact enhance the overall performance of the system, and we provide a justification of these results by interpreting these inefficiencies as externalities being internalized. We also show how to employ randomization to obtain yet further improvements. Lastly, we use the game-theoretic insights gained to obtain a new combinatorial approximation algorithm for the underlying optimization problem.
View details

Expanders via Local Edge Flips

Zeyuan Allen-Zhu

Aditya Bhaskara

Lorenzo Orecchia

Society for Industrial and Applied Mathematics (2015), pp. 259-269

Preview abstract
Designing distributed and scalable algorithms to improve network connectivity is a central topic in peer-to-peer networks. In this paper we focus on the following well-known problem: given an n-node d-regular network for d = Ω(log n), we want to design a decentralized, local algorithm that transforms the graph into one that has good connectivity properties (low diameter, expansion, etc.) without affecting the sparsity of the graph. To this end, Mahlmann and Schindelhauer introduced the random “flip” transformation, where in each time step a random pair of vertices that share an edge decide to 'swap a neighbor'. They conjectured that performing O(nd) such flips at random would convert any connected d-regular graph into a d-regular expander graph, with high probability. However, the best known upper bound for the number of steps is roughly O(n^17 d^23), obtained via a delicate Markov chain comparison argument.
Our main result is to prove that a natural instantiation of the random flip produces an expander in at most O(n^2 d^2 √(log n)) steps, with high probability. Our argument uses a potential-function analysis based on the matrix exponential, together with the recent beautiful results on the higher-order Cheeger inequality of graphs. We also show that our technique can be used to analyze another well-studied random process known as the 'random switch', and show that it produces an expander in O(nd) steps with high probability.
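A single flip step can be sketched directly from the description above: pick an edge (u, v), let u and v each pick another neighbor, and swap those neighbors, rejecting the move if it would create a self-loop or parallel edge (so the graph stays d-regular and simple). The rejection rule here is a natural reading of the transformation, not necessarily the exact variant analyzed in the paper:

```python
import random

def flip_step(adj, rng):
    """One random 'flip' on an undirected simple graph given as {node: set}."""
    u = rng.choice(list(adj))
    if not adj[u]:
        return False
    v = rng.choice(sorted(adj[u]))
    x_choices = sorted(adj[u] - {v})
    y_choices = sorted(adj[v] - {u})
    if not x_choices or not y_choices:
        return False
    x, y = rng.choice(x_choices), rng.choice(y_choices)
    # Reject moves that would create a self-loop or a parallel edge.
    if x == y or y in adj[u] or x in adj[v]:
        return False
    adj[u].remove(x); adj[x].remove(u)
    adj[v].remove(y); adj[y].remove(v)
    adj[u].add(y); adj[y].add(u)
    adj[v].add(x); adj[x].add(v)
    return True

# Demo: run many flips on a 2-regular 6-cycle; every node keeps degree 2.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
rng = random.Random(7)
for _ in range(200):
    flip_step(adj, rng)
```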
View details

Optimal Coordination Mechanisms for Unrelated Machine Scheduling

Preview
Yossi Azar

Lisa Fleischer

Kamal Jain

Operations Research, vol. 63 (2015), pp. 489-500

Concise Bid Optimization Strategies with Multiple Budget Constraints

Arash Asadpour

WINE, The 10th Conference on Web and Internet Economics (2014)

Preview abstract
A major challenge faced by marketers attempting to optimize their advertising campaigns is dealing with budget constraints. The problem is even harder in the face of multidimensional budget constraints, particularly given the many decision variables involved and the interplay among them through these constraints. Concise bidding strategies help advertisers deal with this challenge by introducing fewer variables to act on.
In this paper, we study the problem of finding optimal concise bidding strategies for advertising campaigns with multiple budget constraints. Given bid landscapes—i.e., predicted value (e.g., number of clicks) and cost per click for any bid—that are typically provided by ad-serving systems, we optimize the value given budget constraints. In particular, we consider bidding strategies that consist of no more than k different bids for all keywords. For constant k, we provide a PTAS to optimize the profit, whereas for arbitrary k we show how constant-factor approximation can be obtained via a combination of solution enumeration and dependent LP-rounding techniques.
Finally, we evaluate the performance of our algorithms on real datasets in two regimes with 1- and 3-dimensional budget constraints. In the former case, where uniform bidding has a provable performance guarantee, our algorithm beats the state of the art by an increase of 1% to 6% in the expected number of clicks. This is achieved using only two or three clusters, in contrast with the single cluster permitted in uniform bidding. With only three dimensions in the budget constraint (one for total consumption, and another two for enforcing minimal diversity), the gap between the performance of our algorithm and an enhanced version of uniform bidding grows to an average of 5% to 6% (sometimes as high as 9%). Although the details of the experiments for the multidimensional budget constraint are deferred to the full version of the paper, we report some highlights from the results.
View details
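As a concrete illustration of the simplest concise strategy, here is a brute-force sketch of uniform bidding (k = 1) under a single budget; the bid landscape below is a toy example, not data from the paper:

```python
# Brute-force sketch of uniform bidding (k = 1) under a single budget.
# The landscape maps each candidate bid to per-keyword (clicks, cost)
# pairs; all names and numbers here are illustrative.

def best_uniform_bid(landscape, budget):
    """Pick the single bid whose total cost fits the budget and
    maximizes total expected clicks across keywords."""
    best_bid, best_clicks = None, -1.0
    for bid, keyword_stats in landscape.items():
        clicks = sum(c for c, _ in keyword_stats)
        cost = sum(x for _, x in keyword_stats)
        if cost <= budget and clicks > best_clicks:
            best_bid, best_clicks = bid, clicks
    return best_bid, best_clicks

landscape = {
    0.5: [(10, 4), (5, 2)],   # bid -> per-keyword (clicks, cost)
    1.0: [(18, 9), (9, 5)],
    2.0: [(25, 20), (12, 11)],
}
print(best_uniform_bid(landscape, budget=15))  # (1.0, 27)
```

The paper's actual algorithms (a PTAS for constant k, and enumeration plus LP rounding for arbitrary k) generalize this idea to k bid clusters and multiple budget dimensions.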

Multiplicative Bidding in Online Advertising

Sam Chiu-wai Wong

ACM Conference on Economics and Computation (EC) (2014)

Preview abstract
In this paper, we initiate the study of the multiplicative bidding language adopted by major Internet search companies. In multiplicative bidding, the effective bid on a particular search auction is the product of a base bid and bid adjustments that are dependent on features of the search (for example, the geographic location of the user, or the platform on which the search is conducted). We consider the task faced by the advertiser when setting these bid adjustments, and establish a foundational optimization problem that captures the core difficulty of bidding under this language. We give matching algorithmic and approximation hardness results for this problem; these results are against an information-theoretic bound, and thus have implications on the power of the multiplicative bidding language itself. Inspired by empirical studies of search engine price data, we then codify the relevant restrictions of the problem, and give further algorithmic and hardness results. Our main technical contribution is an O(log n)-approximation for the case of multiplicative prices and monotone values. We also provide empirical validations of our problem restrictions, and test our algorithms on real data against natural benchmarks. Our experiments show that they compare favorably with the baseline.
View details

Distributed Balanced Clustering via Mapping Coresets

Aditya Bhaskara

NIPS, Neural Information Processing Systems Foundation (2014)

Preview abstract
Large-scale clustering of data points in metric spaces is an important problem in mining big data sets. For many applications, we face explicit or implicit size constraints for each cluster, which leads to the problem of clustering under capacity constraints, also known as the "balanced clustering" problem. Although the balanced clustering problem has been widely studied, developing a theoretically sound distributed algorithm remains an open problem. In the present paper we develop a general framework based on "mapping coresets" to tackle this issue. For a wide range of clustering objective functions such as k-center, k-median, and k-means, our techniques give distributed algorithms for balanced clustering that match the best known single-machine approximation ratios.
View details
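For intuition, a classical single-machine building block for the k-center objective is Gonzalez's greedy 2-approximation; a mapping-coreset framework would run a routine of this kind on each machine's portion of the data. The sketch below uses 1-D points for simplicity and is illustrative, not the paper's algorithm:

```python
# Gonzalez's greedy 2-approximation for k-center: repeatedly pick the
# point farthest from the current centers. Here the metric is the
# absolute difference between 1-D points, purely for illustration.

def greedy_k_center(points, k):
    centers = [points[0]]
    while len(centers) < k:
        # farthest point from its nearest current center
        far = max(points, key=lambda p: min(abs(p - c) for c in centers))
        centers.append(far)
    return centers

pts = [0.0, 1.0, 2.0, 10.0, 11.0, 25.0]
print(sorted(greedy_k_center(pts, 3)))  # [0.0, 11.0, 25.0]
```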

Reduce and aggregate: similarity ranking in multi-categorical bipartite graphs

Stefano Leonardi

WWW (2014), pp. 349-360

Preview abstract
We study the problem of computing similarity rankings in large-scale multi-categorical bipartite graphs, where the two sides of the graph represent actors and items, and the items are partitioned into an arbitrary set of categories. The problem has several real-world applications, including identifying competing advertisers and suggesting related queries in an online advertising system, or finding users with similar interests and suggesting content to them. In these settings, we are interested in computing on-the-fly rankings of similar actors, given an actor and an arbitrary subset of categories of interest. Two main challenges arise: First, the bipartite graphs are huge and often lopsided (e.g. the system might receive billions of queries while presenting only millions of advertisers). Second, the sheer number of possible combinations of categories prevents the pre-computation of the results for all of them. We present a novel algorithmic framework that addresses both issues for the computation of several graph-theoretical similarity measures, including the number of common neighbors and Personalized PageRank. We show how to tackle the imbalance in the graphs to speed up the computation and provide efficient real-time algorithms for computing rankings for an arbitrary subset of categories. Finally, we show experimentally the accuracy of our approach with real-world data, using both public graphs and a very large dataset from Google AdWords.
View details
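A minimal sketch of one of the similarity measures mentioned above, the number of common item-neighbors, restricted on the fly to a subset of item categories; the toy actors, items, and categories are invented for illustration:

```python
# Rank actors by number of common item-neighbors with a query actor,
# counting only items whose category is in the requested subset.

def rank_by_common_neighbors(edges, item_category, query, categories):
    """edges: {actor: set(items)}; item_category: {item: label}."""
    q_items = {i for i in edges[query] if item_category[i] in categories}
    scores = {}
    for actor, items in edges.items():
        if actor == query:
            continue
        filtered = {i for i in items if item_category[i] in categories}
        scores[actor] = len(q_items & filtered)
    return sorted(scores, key=scores.get, reverse=True)

edges = {"a1": {"x", "y", "z"}, "a2": {"x", "y"}, "a3": {"z"}}
item_category = {"x": "shoes", "y": "shoes", "z": "books"}
print(rank_by_common_neighbors(edges, item_category, "a1", {"shoes"}))
# ['a2', 'a3']
```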

PASS Approximation: A Framework for Analyzing and Designing Heuristics

Preview
Uriel Feige

Nicole Immorlica

Hamid Nazerzadeh

Algorithmica, vol. 66 (2013), pp. 450-478

Bicriteria Online Matching: Maximizing Weight and Cardinality

International Conference on Web and Internet Economics (WINE) (2013), pp. 305-318

Preview abstract
Inspired by online ad allocation problems, many results have been developed for online matching problems. Most of the previous work deals with a single objective, but, in practice, there is a need to optimize multiple objectives. Here, as an illustrative example motivated by display ads allocation, we study a bi-objective online matching problem. In particular, we consider a set of fixed nodes (ads) with capacity constraints, and a set of online items (pageviews) arriving one by one. Upon arrival of an online item $i$, a set of eligible fixed neighbors (ads) for the item is revealed, together with a weight $w_{ia}$ for each eligible neighbor $a$. The problem is to assign each item to an eligible neighbor online, while respecting the capacity constraints; the goal is to maximize both the total weight of the matching and the cardinality.

In this paper, we present both approximation algorithms and hardness results for this problem. An $(\alpha, \beta)$-approximation for this problem is a matching with weight at least an $\alpha$ fraction of the maximum weighted matching, and cardinality at least a $\beta$ fraction of the maximum cardinality matching. We present a parametrized approximation algorithm that allows a smooth tradeoff curve between the two objectives: when the capacities of the fixed nodes are large, we give a $\left(p(1 - 1/e^{1/p}),\ (1-p)(1 - 1/e^{1/(1-p)})\right)$-approximation for any $0 \le p \le 1$, and prove a `hardness curve' combining several inapproximability results. These upper and lower bounds are always close (with a maximum gap of $9\%$), and exactly coincide at the point $(0.43, 0.43)$. For small capacities, we present a smooth parametrized approximation curve for the problem between $(0, 1-1/e)$ and $(1/2, 0)$ passing through a $(1/3, 0.3698)$-approximation.
View details

Whole-page optimization and submodular welfare maximization with online bidders

Nikhil Devanur

Zhiyi Huang

ACM Conference on Electronic Commerce (EC) 2013, pp. 305-322

A Local Algorithm for Finding Well-Connected Clusters

Zeyuan Allen Zhu

The 30th International Conference on Machine Learning, ICML 2013

Preview abstract
Motivated by applications of large-scale graph clustering, we study random-walk-based local algorithms whose running times depend only on the size of the output cluster, rather than the entire graph. In particular, we develop a method with a better theoretical guarantee than all previous work, both in terms of the clustering accuracy and the conductance of the output set. We also prove that our analysis is tight, and perform empirical evaluations to support our theory on both synthetic and real data. More specifically, our method outperforms prior work when the cluster is well-connected. In fact, the better connected the cluster is internally, the more significant the improvement we obtain. Our results shed light on why, in practice, some random-walk-based algorithms perform better than their previous theory predicts, and help guide future research on local clustering.
View details
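For context, the standard local-clustering template this line of work builds on pairs an approximate personalized PageRank computation (the push method) with a sweep cut over conductance. The sketch below follows that template on a toy graph of two triangles joined by an edge; it is one of the prior methods the paper improves on, not the paper's own algorithm, and the parameter names (alpha, eps) follow the usual PPR-push formulation:

```python
# Local clustering sketch: approximate personalized PageRank via the
# push method, then sweep for a low-conductance cut around the seed.
from collections import defaultdict

def ppr_push(graph, seed, alpha=0.15, eps=1e-4):
    p, r = defaultdict(float), defaultdict(float)
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(graph[u])
        if r[u] < eps * deg:
            continue                      # stale queue entry
        p[u] += alpha * r[u]              # bank alpha fraction
        share = (1 - alpha) * r[u] / (2 * deg)
        r[u] = (1 - alpha) * r[u] / 2     # keep half the residual
        if r[u] >= eps * deg:
            queue.append(u)
        for v in graph[u]:                # spread the other half
            if r[v] < eps * len(graph[v]) <= r[v] + share:
                queue.append(v)           # v just crossed the threshold
            r[v] += share
    return p

def sweep_cut(graph, p):
    vol_total = sum(len(graph[u]) for u in graph)
    order = sorted(p, key=lambda u: p[u] / len(graph[u]), reverse=True)
    best, best_phi, S, vol, cut = None, float("inf"), set(), 0, 0
    for u in order:
        S.add(u)
        vol += len(graph[u])
        cut += sum(-1 if v in S else 1 for v in graph[u])
        denom = min(vol, vol_total - vol)
        phi = cut / denom if denom else 1.0
        if phi < best_phi:
            best, best_phi = set(S), phi
    return best, best_phi

# two triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
best, phi = sweep_cut(graph, ppr_push(graph, 0))
print(sorted(best))  # [0, 1, 2]
```

Seeded at node 0, the sweep recovers the seed's triangle, whose conductance (1 crossing edge over volume 7) is the best cut in this graph.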

Equilibrium pricing with positive externalities

Preview
Nima AhmadiPourAnari

Shayan Ehsani

Mohammad Ghodsi

Nima Haghpanah

Nicole Immorlica

Hamid Mahini

Theor. Comput. Sci., vol. 476 (2013), pp. 1-15

Two-stage Robust Network Design with Exponential Scenarios

Preview
Rohit Khandekar

Guy Kortsarz

Mohammad R. Salavatipour

Algorithmica, vol. 65 (2013), pp. 391-408

On the advantage of overlapping clusters for minimizing conductance

Preview
Rohit Khandekar

Guy Kortsarz

Proceedings of the 10th Latin American international conference on Theoretical Informatics, Springer-Verlag, Berlin, Heidelberg (2012), pp. 494-505

Overlapping clusters for distributed computation

Preview
Reid Andersen

David Gleich

ACM Conference on Web Search and Data Mining (WSDM) (2012)

Simultaneous Approximations for Adversarial and Stochastic Online Budgeted Allocation

Shayan Oveis Gharan

Symposium on Discrete Algorithms (SODA), ACM/SIAM (2012)

Preview abstract
Motivated by online ad allocation, we study the problem of simultaneous approximations for the adversarial and stochastic online budgeted allocation problem. This problem consists of a bipartite graph G = (X, Y, E), where the nodes of Y along with their corresponding capacities are known beforehand to the algorithm, and the nodes of X arrive online. When a node of X arrives, its incident edges and their respective weights are revealed, and the algorithm can match it to a neighbor in Y. The objective is to maximize the weight of the final matching, while respecting the capacities.
When nodes arrive in an adversarial order, the best competitive ratio is known to be 1 - 1/e, and it can be achieved by the Ranking algorithm [18] and its generalizations (Balance [16, 21]). On the other hand, if the nodes arrive through a random permutation, it is possible to achieve a competitive ratio of 1 - ε [9]. In this paper we design algorithms that achieve a competitive ratio better than 1 - 1/e on average, while preserving a nearly optimal worst-case competitive ratio. Ideally, we want to achieve the best of both worlds, i.e., to design an algorithm with the optimal competitive ratio in both the adversarial and random arrival models. We achieve this for unweighted graphs, but show that it is not possible for weighted graphs.
In particular, for unweighted graphs, under some mild assumptions, we show that Balance achieves a competitive ratio of 1 - ε in the random permutation model. For weighted graphs, however, we prove this is not possible; we prove that no online algorithm that achieves an approximation factor of 1 - 1/e for worst-case inputs may achieve an average approximation factor better than 97.6% for random inputs. In light of this hardness result, we aim to design algorithms with improved approximation ratios in the random arrival model while preserving the competitive ratio of 1 - 1/e in the worst case. To this end, we show the algorithm proposed by [21] achieves a competitive ratio of 0.76 for the random arrival model, while having a 1 - 1/e ratio in the worst case.
View details
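The Balance rule mentioned in the abstract is simple to state: assign each arriving node to the eligible neighbor with the largest fraction of its capacity remaining. A minimal sketch for the unweighted case, with illustrative ad names:

```python
# Balance rule for online budgeted allocation (unweighted case):
# each arriving item goes to the eligible ad with the largest
# fraction of its capacity still unused.

def balance(capacities, arrivals):
    """capacities: {ad: capacity}; arrivals: list of eligible-ad lists."""
    remaining = dict(capacities)
    assignment = []
    for eligible in arrivals:
        candidates = [a for a in eligible if remaining[a] > 0]
        if not candidates:
            assignment.append(None)   # item dropped
            continue
        a = max(candidates, key=lambda a: remaining[a] / capacities[a])
        remaining[a] -= 1
        assignment.append(a)
    return assignment

caps = {"a": 2, "b": 2}
print(balance(caps, [["a", "b"], ["a"], ["a", "b"], ["b"]]))
# ['a', 'a', 'b', 'b']
```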

How to approximate optimal auctions

Preview
Nima Haghpanah

Nicole Immorlica

Kamesh Munagala

SIGecom Exchanges, vol. 11 (2012), pp. 30-33

On the Non-progressive Spread of Influence through Social Networks

Preview
MohammadAmin Fazli

Mohammad Ghodsi

Jafar Habibi

Pooya Jalaly Khalilabad

Sina Sadeghian

LATIN (2012)

Convergence and approximation in potential games

Preview
George Christodoulou

Anastasios Sidiropoulos

Theor. Comput. Sci., vol. 438 (2012), pp. 13-27

Optimal Auctions with Positive Network Externalities

Preview
Nima Haghpanah

Nicole Immorlica

K. Munagala

ACM Conference on Electronic Commerce (2011)

Online Stochastic Weighted Matching: Improved Approximation Algorithms

Preview
Bernard Haeupler

Workshop of Network and Internet Economics (WINE) 2011

Yield Optimization of Display Advertising with Ad Exchange

S. Muthukrishnan

ACM Conference on Electronic Commerce (2011)

Preview abstract
In light of the growing market of Ad Exchanges for the real-time sale of advertising slots, publishers face new challenges in choosing between the allocation of contract-based reservation ads and spot market ads. In this setting, the publisher should take into account the tradeoff between short-term revenue from an Ad Exchange and the quality of allocating reservation ads. In this paper, we formalize this combined optimization problem as a stochastic control problem and derive an efficient policy for online ad allocation in settings with a general joint distribution over placement quality and exchange bids. We prove asymptotic optimality of this policy in terms of any trade-off between quality of delivered reservation ads and revenue from the exchange, and provide a rigorous bound for its convergence rate to the optimal policy. We also give experimental results on data derived from real publisher inventory, showing that our policy can achieve any Pareto-optimal point on the quality vs. revenue curve. Finally, we study a parametric training-based algorithm in which instead of learning the dual variables from sample data (as is done in non-parametric training-based algorithms), we learn the parameters of the distribution and construct those dual variables from the learned parameter values. We compare parametric and non-parametric ways to estimate from data both analytically and experimentally in the special case without the ad exchange, and show that though both methods converge to the optimal policy as the sample size grows, our parametric method converges faster, and thus performs better on smaller samples.
View details

Auctions with intermediaries: extended abstract

S. Muthukrishnan

Mallesh M. Pai

ACM Conference on Electronic Commerce (2010), pp. 23-32

Preview abstract
Inspired by online advertisement exchange systems, we study a setting where potential buyers of a unique, indivisible good attempt to purchase from a central seller via a set of intermediaries. Each intermediary has captive buyers, and runs an auction for a 'contingent' good. Based on the outcome, the intermediary bids in a subsequent upstream auction run by the seller. In this paper, we study the equilibria and incentives of intermediaries and the central seller.
We find that combining the notion of optimal auction design with the double-marginalization arising from the presence of intermediaries yields new strategic elements not present in either setting individually: we show that in equilibrium, revenue-maximizing intermediaries will use an auction with a randomized reserve price chosen from an interval. We characterize the interval and the probability distribution from which this reserve price is chosen as a function of the distribution of buyers' types. Furthermore, we characterize the revenue-maximizing auction for the central seller by taking into account the effect of his choice of mechanism on the mechanisms offered by the intermediaries. We find that the optimal reserve price offered by the seller decreases with the number of buyers (but remains strictly positive), in contrast to the classical optimal auction without intermediaries, where the reserve price is independent of the number of buyers.
View details

Optimal Iterative Pricing over Social Networks (Extended Abstract)

Preview
Hessameddin Akhlaghpour

Mohammad Ghodsi

Nima Haghpanah

Hamid Mahini

Afshin Nikzad

WINE (2010), pp. 415-423

Maximizing Nonmonotone Submodular Functions under Matroid or Knapsack Constraints

Preview
Jon Lee

Viswanath Nagarajan

Maxim Sviridenko

SIAM J. Discrete Math., vol. 23 (2010), pp. 2053-2078

Equilibrium Pricing with Positive Externalities (Extended Abstract)

Preview
Nima Anari

Shayan Ehsani

Mohammad Ghodsi

Nima Haghpanah

Nicole Immorlica

Hamid Mahini

WINE (2010), pp. 424-431

Online Stochastic Packing Applied to Display Ad Allocation

Monika Henzinger

Clifford Stein

ESA (1) (2010), pp. 182-194

Preview abstract
Inspired by online ad allocation, we study online stochastic packing integer programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing integer programs which model various online resource allocation problems including online variants of routing, ad allocation, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple dual training-based algorithm achieves a (1 − o(1))-approximation guarantee in the random order stochastic model. This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g. factor 1 − 1/e for online ad allocation, and log(m) for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from a real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.
View details
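A toy sketch of the dual training idea: estimate a dual price for the resource from an initial sample of arrivals, then accept a later arrival only if its value exceeds the priced resource cost. This uses a single resource and illustrative numbers; the paper's algorithm handles general packing constraints:

```python
# Dual training sketch for online packing with one resource:
# learn a price per unit from a sample prefix, then greedily accept
# arrivals whose value beats the priced cost of their size.

def dual_price_from_sample(sample, budget_share):
    # Sort sample by value density; the price is the density of the
    # item at which the sample alone would exhaust its budget share.
    items = sorted(sample, key=lambda vs: vs[0] / vs[1], reverse=True)
    used = 0.0
    for value, size in items:
        used += size
        if used > budget_share:
            return value / size
    return 0.0

def online_packing(arrivals, budget, sample_frac=0.25):
    n_sample = max(1, int(len(arrivals) * sample_frac))
    price = dual_price_from_sample(arrivals[:n_sample],
                                   budget * sample_frac)
    remaining, total = budget, 0.0
    for value, size in arrivals[n_sample:]:
        if value - price * size > 0 and size <= remaining:
            remaining -= size
            total += value
    return total

arrivals = [(10, 1), (1, 1), (8, 1), (2, 1),
            (9, 1), (1, 1), (7, 1), (1, 1)]   # (value, size) pairs
print(online_packing(arrivals, budget=4))  # 26.0
```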

Competitive Routing over Time

Preview
Martin Hoefer

Heiko Röglin

Shang-Hua Teng

Workshop of Internet Economics (WINE) (2009), pp. 18-29

Online Ad Assignment with Free Disposal

Preview
S. Muthukrishnan

Workshop of Internet Economics (WINE) (2009), pp. 374-385

Online Stochastic Matching: Beating 1-1/e

S. Muthukrishnan

Symposium on the Foundations of Computer Science (FOCS) (2009)

Preview abstract
We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online, but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of 1 − 1/e ≈ 0.632, a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the 1 − 1/e bound. Our main result is a 0.67-approximation online algorithm for stochastic bipartite matching, breaking this 1 − 1/e barrier. Furthermore, we show that no online algorithm can produce a 1 − ε approximation for an arbitrarily small ε for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance, and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow into two edge-disjoint (near-)matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application.
View details
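The "power of two choices" allocation described above can be sketched as follows: given two edge-disjoint offline matchings of item types to ads (hand-written here; the paper derives them from a max-flow decomposition of the expected instance), serve each arrival by its first choice if capacity remains, else its second:

```python
# Two-choices online matching sketch: each item type has a first- and
# second-choice ad precomputed offline; an arrival takes the first
# choice with remaining capacity, in that preference order. The item
# types, ads, and matchings below are illustrative.

def serve(arrivals, first, second, capacity):
    remaining = dict(capacity)
    matched = []
    for item in arrivals:
        for choice in (first.get(item), second.get(item)):
            if choice is not None and remaining.get(choice, 0) > 0:
                remaining[choice] -= 1
                matched.append((item, choice))
                break
    return matched

first = {"t1": "adA", "t2": "adB"}
second = {"t1": "adB", "t2": "adA"}
print(serve(["t1", "t1", "t2"], first, second, {"adA": 1, "adB": 1}))
# [('t1', 'adA'), ('t1', 'adB')]
```

Note how the second copy of "t1" falls through to its second choice, which is exactly the load-balancing benefit the two disjoint solutions provide.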

Coordination mechanisms for selfish scheduling

Preview
Nicole Immorlica

Li (Erran) Li

Andreas S. Schulz

Theor. Comput. Sci., vol. 410 (2009), pp. 1589-1598

Permutation betting markets: singleton betting with extra information

Preview
Mohammad Ghodsi

Hamid Mahini

ACM Conference on Electronic Commerce (2008), pp. 180-189

On non-progressive spread of influence through social networks

MohammadAmin Fazli

Mohammad Ghodsi

Jafar Habibi

Pooya Jalaly Khalilabadi

Sina Sadeghian Sadeghabad

Theor. Comput. Sci., vol. 550 (2014), pp. 36-50

Tight Approximation Algorithms for Maximum Separable Assignment Problems

Lisa Fleischer

Michel X. Goemans

Maxim Sviridenko

Math. Oper. Res., vol. 36 (2011), pp. 416-431

Secure Overlay Network Design

Online Stochastic Ad Allocation: Efficiency and Fairness

Monika Henzinger

Clifford Stein

CoRR, vol. abs/1001.5076 (2010)

Coordination Mechanisms for Weighted Sum of Completion Times in Machine Scheduling

Non-monotone submodular maximization under matroid and knapsack constraints

Bid Optimization in Broad-Match Ad Auctions

Eyal Even-Dar

Yishay Mansour

S. Muthukrishnan

Uri Nadav

CoRR, vol. abs/0901.3754 (2009)

Non-monotone submodular maximization under matroid and knapsack constraints

(Almost) optimal coordination mechanisms for unrelated machine scheduling

Approximating Minimum-Power Degree and Connectivity Problems

On the Stability of Web Crawling and Web Search

Reid Andersen

Christian Borgs

Jennifer T. Chayes

John E. Hopcroft

Shang-Hua Teng

ISAAC (2008), pp. 680-691

Tight information-theoretic lower bounds for welfare maximization in combinatorial auctions

Michael Schapira

Jan Vondrák

ACM Conference on Electronic Commerce (2008), pp. 70-77

Market Games and Content Distribution

Encyclopedia of Algorithms (2008)

The myth of the folk theorem

Christian Borgs

Jennifer T. Chayes

Nicole Immorlica

Adam Tauman Kalai

Christos H. Papadimitriou

STOC (2008), pp. 365-372

Fast convergence to nearly optimal solutions in potential games

Baruch Awerbuch

Yossi Azar

Amir Epstein

Alexander Skopalik

ACM Conference on Electronic Commerce (2008), pp. 264-273

Optimal marketing strategies over social networks

A combinatorial allocation mechanism with penalties for banner advertising

Limitations of cross-monotonic cost-sharing schemes

Robust PageRank and locally computable spam detection features

Reid Andersen

Christian Borgs

Jennifer T. Chayes

John E. Hopcroft

Kamal Jain

Shang-Hua Teng

AIRWeb (2008), pp. 69-76

Uncoordinated two-sided matching markets

Heiner Ackermann

Paul W. Goldberg

Heiko Röglin

Berthold Vöcking

ACM Conference on Electronic Commerce (2008), pp. 256-263

Trust-based recommendation systems: an axiomatic approach

Reid Andersen

Christian Borgs

Jennifer T. Chayes

Uriel Feige

Abraham D. Flaxman

Adam Kalai

Moshe Tennenholtz

WWW (2008), pp. 199-208

Robust Combinatorial Optimization with Exponential Scenarios

Power optimization in fault-tolerant topology control algorithms for wireless multi-hop networks

Mohammad Taghi Hajiaghayi

Nicole Immorlica

IEEE/ACM Trans. Netw., vol. 15 (2007), pp. 1345-1358

A Recommender System Based on Local Random Walks and Spectral Methods

Subjective-cost policy routing

Joan Feigenbaum

David R. Karger

Rahul Sami

Theor. Comput. Sci., vol. 378 (2007), pp. 175-189

Cell Breathing in Wireless LANs: Algorithms and Evaluation

Paramvir Bahl

Mohammad Taghi Hajiaghayi

Kamal Jain

Lili Qiu

Amin Saberi

IEEE Trans. Mob. Comput., vol. 6 (2007), pp. 164-178

Maximizing Non-Monotone Submodular Functions

Local Computation of PageRank Contributions

Reid Andersen

Christian Borgs

Jennifer T. Chayes

John E. Hopcraft

Shang-Hua Teng

WAW (2007), pp. 150-165

A Unified Approach to Congestion Games and Two-Sided Markets

Market sharing games applied to content distribution in ad hoc networks

Michel X. Goemans

Li Li

Marina Thottan

IEEE Journal on Selected Areas in Communications, vol. 24 (2006), pp. 1020-1033

Tight approximation algorithms for maximum general assignment problems

Bandwidth Sharing Network Design for Multi-Class Traffic

Assignment Problems in Rental Markets

A relation between choosability and uniquely list colorability

Saieed Akbari

Bashir S. Sadjad

J. Comb. Theory, Ser. B, vol. 96 (2006), pp. 577-583

Convergence and Approximation in Potential Games

Fault-Tolerant and 3-Dimensional Distributed Topology Control Algorithms in Wireless Multi-hop Networks

Mohsen Bahramgiri

Mohammad Taghi Hajiaghayi

Wireless Networks, vol. 12 (2006), pp. 179-188

Secure Overlay Network Design

Sink Equilibria and Convergence

Limitations of cross-monotonic cost sharing schemes

Power Optimization for Connectivity Problems

Coordination Mechanisms for Selfish Scheduling

Cycle Cover with Short Cycles

Subjective-Cost Policy Routing

Traffic engineering of management flows by link augmentations on confluent trees

Randeep Bhatia

Nicole Immorlica

Tracy Kimbrel

Seffi Naor

Baruch Schieber

SPAA (2005), pp. 289-298

Market sharing games applied to content distribution in ad-hoc networks

Convergence Issues in Competitive Games

On spectrum sharing games

Locality-sensitive hashing scheme based on p-stable distributions

Mayur Datar

Nicole Immorlica

Piotr Indyk

Symposium on Computational Geometry (2004), pp. 253-262

A Simple Polynomial Time Framework For Reduced Path Decomposition in Multi-Path Routing

Distributed Network Monitoring for Evolving IP Networks

Power optimization in fault-tolerant topology control algorithms for wireless multi-hop networks

The facility location problem with general cost functions

RoboCup-2001: The Fifth Robotic Soccer World Championships

Manuela M. Veloso

Tucker R. Balch

Peter Stone

Hiroaki Kitano

Fuminori Yamasaki

Ken Endo

Minoru Asada

Mansour Jamzad

Sayyed Bashir Sadjad

Moslem Kazemi

Hamid Reza Chitsaz

Abbas Heydarnoori

Mohammad Taghi Hajiaghayi

Ehsan Chiniforooshan

AI Magazine, vol. 23 (2002), pp. 55-68

Length-constrained path-matchings in graphs

Mohammad Ghodsi

Mohammad Taghi Hajiaghayi

Networks, vol. 39 (2002), pp. 210-215

A Fast Vision System for Middle Size Robots in RoboCup

Mansour Jamzad

Sayyed Bashir Sadjad

Moslem Kazemi

Hamid Reza Chitsaz

Abbas Heydarnoori

Mohammad Taghi Hajiaghayi

Ehsan Chiniforooshan

RoboCup (2001), pp. 71-80

K_r-Free Uniquely Vertex Colorable Graphs with Minimum Possible Edges

Saeed Akbari

Sayyed Bashir Sadjad

J. Comb. Theory, Ser. B, vol. 82 (2001), pp. 316-318

Basic Requirements for a Teamwork in Middle Size RoboCup

Mansour Jamzad

Hamid Reza Chitsaz

Amirali Foroughnassiraei

Reza Ghorbani

Moslem Kazemi

Sayyed Bashir Sadjad

RoboCup (2001), pp. 621-626

On the simultaneous edge-coloring conjecture

Mohammad Taghi Hajiaghayi

Ebadollah S. Mahmoodian

Amin Saberi

Ruzbeh Tusserkani

Discrete Mathematics, vol. 216 (2000), pp. 267-272

A Goal Keeper for Middle Size RoboCup

Mansour Jamzad

Amirali Foroughnassiraei

Mohammad Taghi Hajiaghayi

Reza Ghorbani

Abbas Heydarnoori

Moslem Kazemi

Hamid Reza Chitsaz

Farid Mobasser

Mohsen Ebrahimi Moghaddam

M. Gudarzi

N. Ghaffarzadegan

RoboCup (2000), pp. 583-586