Uri Nadav
Authored Publications
Hitting the High Notes: Subset Selection for Maximizing Expected Order Statistics
Alex Psomas
Aviad Rubinstein
Advances in Neural Information Processing Systems (2020)
Abstract:
We consider the fundamental problem of selecting k out of n random variables so as to maximize the expected highest or second-highest value. This question captures several applications where we have uncertainty about the quality of candidates (e.g., auction bids, search results) and can explore only a small subset due to an exogenous constraint. For example, consider a second-price auction where system constraints (e.g., costly retrieval or model computation) allow only k out of n bidders to participate, and the goal is to optimize the expected efficiency (highest bid) or expected revenue (second-highest bid).
We study the case where we are given an explicit description of each random variable. We give a PTAS for the problem of maximizing the expected highest value. For the second-highest value, we prove a hardness result: assuming the Planted Clique Hypothesis, there is no constant-factor approximation algorithm that runs in polynomial time. Surprisingly, under the assumption that each random variable has a monotone hazard rate (MHR), a simple score-based algorithm, namely picking the k random variables with the largest top-1/√k quantile value, is a constant-factor approximation to both the expected highest and second-highest value simultaneously.
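The score-based rule from the abstract can be sketched for explicitly given discrete distributions. This is an illustrative sketch only, not the paper's code: the helper names and the (values, probabilities) input format are our own.

```python
import math

def top_quantile_value(values, probs, q):
    """Smallest v such that P(X >= v) >= q, i.e. the top-q quantile of a
    discrete distribution given as parallel lists of values and probabilities."""
    pairs = sorted(zip(values, probs), reverse=True)
    tail = 0.0
    for v, p in pairs:
        tail += p
        if tail >= q:
            return v
    return pairs[-1][0]  # fallback if probabilities sum to slightly under 1

def select_k(distributions, k):
    """Score each variable by its top-1/sqrt(k) quantile value and keep the
    k highest-scoring variables (the score-based algorithm in the abstract)."""
    q = 1.0 / math.sqrt(k)
    scores = [(top_quantile_value(vals, probs, q), i)
              for i, (vals, probs) in enumerate(distributions)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]
```

For k = 2 the score is each variable's top-1/√2 quantile, so a variable that is large with high probability beats one with a rare extreme value.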
Abstract:
This paper analyzes a mechanism for selling items in auctions in which the auctioneer specifies a cap on the ratio between the maximum and minimum bids that bidders may use. Such a mechanism is widely used in the online advertising business through the caps that companies impose on the minimum and maximum bid multipliers that advertisers may use in targeting. When bidders’ values are independent and identically distributed, using this mechanism results in higher revenue than allowing bidders to condition their bids on the targeting information in an arbitrary way and also almost always results in higher revenue than not allowing bidders to target. Choosing the optimal cap on the ratio between the maximum bid and the minimum bid can also be more important than introducing additional competition in the auction. However, if bidders’ values are not identically distributed, pure-strategy equilibria may fail to exist.
Ad auctions with data
Hu Fu
Patrick R. Jordan
Inbal Talgam-Cohen
INFOCOM Workshops (2012), pp. 184-189
On the Convergence of Regret Minimization Dynamics in Concave Games
Eyal Even-Dar
Yishay Mansour
41st Annual ACM Symposium on Theory of Computing, STOC, ACM (2009), pp. 523-532
Abstract:
We consider the standard regret minimization setting where at each time step the decision maker has to choose a distribution over k alternatives, and then observes the loss of each alternative. The setting is very similar to the classical online job scheduling setting, with three major differences:
Information model: in the regret minimization setting, losses are observed only after the action (assigning the job to a machine) is performed, not before the action selection, as assumed in the classical online job scheduling setting.
Comparison class: in regret minimization the comparison class is the best static algorithm (i.e., a fixed distribution over alternatives), not the optimal offline solution.
Performance measure: in regret minimization we measure the additive difference to the optimal solution in the comparison class, in contrast to the ratio used in the online job scheduling setting.
Motivated by load balancing and job scheduling, we consider a global cost function over the losses incurred by each alternative/machine, rather than simply a summation of the instantaneous losses, as done traditionally in regret minimization. Such global cost functions include the makespan (the maximum over the alternatives/machines) and the L_d norm (over the alternatives/machines).
The major contribution of this work is a novel regret minimization algorithm, based on calibration, that guarantees vanishing average regret, where the regret is measured with respect to the best static decision maker, who selects the same distribution over alternatives at every time step. Our results hold for a wide class of global cost functions, including the makespan and the L_d norms for d > 1. In contrast, we show that for concave global cost functions, such as the L_d norms for d < 1, the worst-case average regret does not vanish. In addition to the general calibration-based algorithm, we provide simple and efficient algorithms for special cases of interest.
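The performance measure described above can be made concrete with a small sketch. This illustrates only the makespan-regret quantity, not the paper's calibration-based algorithm; the function names are our own, and the closed form for the best static makespan (equalizing p_i * L_i across machines) assumes every machine's total loss L_i is positive.

```python
def run_loads(dists, losses):
    """Per-machine cumulative load when the learner plays distribution
    dists[t] at step t and machine i accrues dists[t][i] * losses[t][i]."""
    k = len(losses[0])
    loads = [0.0] * k
    for p, l in zip(dists, losses):
        for i in range(k):
            loads[i] += p[i] * l[i]
    return loads

def best_static_makespan(losses):
    """Makespan of the best fixed distribution: the minimax is attained by
    equalizing p_i * L_i, giving value 1 / sum_i (1 / L_i)
    (assumes every total load L_i > 0)."""
    k = len(losses[0])
    L = [sum(step[i] for step in losses) for i in range(k)]
    return 1.0 / sum(1.0 / Li for Li in L)

def makespan_regret(dists, losses):
    """Additive difference between the learner's makespan and the best
    static distribution's makespan -- the regret notion in the abstract."""
    return max(run_loads(dists, losses)) - best_static_makespan(losses)
```

For instance, with two machines and alternating unit losses, the uniform distribution at every step achieves zero makespan regret, while always loading one machine incurs positive regret.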