Tight Lower Bounds for Multiplicative Weights Algorithmic Families

Nick Gravin
Yuval Peres
44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)

Abstract

We study the fundamental problem of prediction with expert advice and develop
regret lower bounds for a large family of algorithms for this problem. We
design simple adversarial primitives that lend themselves to various
combinations, leading to sharp lower bounds for many algorithmic families. We use
these primitives to show that the classic Multiplicative Weights Algorithm (MWA)
has regret $\sqrt{\frac{T \ln k}{2}}$ (where $T$ is the time horizon and
$k$ is the number of experts), thereby completely closing the gap between the
upper and lower bounds. We further show a regret lower bound of
$\frac{2}{3}\sqrt{\frac{T\ln k}{2}}$ for a much more general family of
algorithms than MWA, where the learning rate can be arbitrarily varied over
time, or even picked from arbitrary distributions over time. We also use our
primitives to construct adversaries for MWA in the geometric horizon setting,
precisely characterizing the regret as $\frac{0.391}{\sqrt{\delta}}$ for the case
of 2 experts and establishing a lower bound of $\frac{1}{2}\sqrt{\frac{\ln k}{2\delta}}$
for an arbitrary number of experts $k$ (here $\delta$ is the probability that
the game ends in any given round).
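For reference, the bounds above concern the standard multiplicative-weights
update and the usual notion of regret. The following display is a minimal
sketch of that setup, assuming losses $\ell_i^t \in [0,1]$ for expert $i$ in
round $t$ and a fixed learning rate $\epsilon > 0$ (this notation is ours, not
taken from the abstract):
\begin{align*}
  w_i^1 &= 1, &
  w_i^{t+1} &= w_i^t \, e^{-\epsilon \ell_i^t}, &
  p_i^t &= \frac{w_i^t}{\sum_{j=1}^{k} w_j^t},\\
  \mathrm{Regret}(T) &= \sum_{t=1}^{T} \sum_{i=1}^{k} p_i^t \ell_i^t
    \;-\; \min_{1 \le i \le k} \sum_{t=1}^{T} \ell_i^t.
\end{align*}
With the optimal fixed learning rate $\epsilon = \sqrt{8 \ln k / T}$, the
classical upper bound on this regret is $\sqrt{\frac{T \ln k}{2}}$, which is
exactly the bound that the matching lower bound above closes.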