Google Research

Top-k Combinatorial Bandits with Full-Bandit Feedback

(2020) (to appear)


Combinatorial bandits generalize multi-armed bandits: at each round, $k$ out of $n$ arms are chosen and the sum of their rewards is gained. We address full-bandit feedback, in which the agent observes only the sum of rewards, in contrast to semi-bandit feedback, in which the agent also observes the individual arms' rewards. We present the \emph{Combinatorial Successive Accepts and Rejects} (CSAR) algorithm, which generalizes the SAR algorithm \cite{bubeck2013multiple} to the combinatorial setting. Our main contribution is an efficient sampling scheme that uses Hadamard matrices to accurately estimate the individual arms' expected rewards. We discuss two variants of the algorithm: the first minimizes the sample complexity, and the second minimizes the regret. For the sample complexity we also prove a matching lower bound, showing the algorithm is optimal. For regret minimization, we prove a lower bound that is tight up to a factor of $k$.
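The core idea behind Hadamard-based sampling can be illustrated in a simplified, noiseless setting. The sketch below (an illustration only; it ignores noise and the paper's constraint of pulling exactly $k$ arms per round) pulls, for each row of a Sylvester Hadamard matrix $H$, the subset of arms with a $+1$ entry and observes only the sum of their rewards. Since the first row selects all arms, the observed sums satisfy $H\mu = 2y - y_0\mathbf{1}$, and $H H = nI$ lets us recover every individual mean from just $n$ sum observations. All function names here are hypothetical, not from the paper.

```python
def hadamard(n):
    """Sylvester Hadamard matrix of order n (n must be a power of 2)."""
    H = [[1]]
    while len(H) < n:
        # Block construction: [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def estimate_means(mu, n):
    """Recover individual arm means from sum-only (full-bandit) observations."""
    H = hadamard(n)
    # For each row, pull the subset of arms with a +1 entry; observe only the sum.
    y = [sum(m for m, h in zip(mu, row) if h == 1) for row in H]
    total = y[0]  # row 0 is all +1s, so y[0] is the sum of all means
    # Each observation satisfies y_i = (total + (H @ mu)_i) / 2,
    # so H @ mu = 2*y - total, and mu = H @ (2*y - total) / n since H @ H = n*I.
    return [sum(H[i][j] * (2 * y[j] - total) for j in range(n)) / n
            for i in range(n)]

mu = [0.3, 0.7, 0.1, 0.9]       # hypothetical true expected rewards
est = estimate_means(mu, 4)     # recovers [0.3, 0.7, 0.1, 0.9] exactly (noiseless)
```

With noisy rewards the same linear inversion averages out noise across rounds, which is what lets CSAR estimate all arms' means accurately from sum feedback alone.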
