Contextual Search via Intrinsic Volumes
Abstract
We study the problem of contextual search, a multidimensional generalization of binary search that captures many problems in contextual decision-making. In contextual search, a learner is trying to learn the value of a hidden vector $v \in [0, 1]^d$. Every round the learner is provided an adversarially-chosen context $u_t \in \mathbb{R}^d$, submits a guess $p_t$ for the value of $\langle u_t, v \rangle$, learns whether $p_t < \langle u_t, v \rangle$, and incurs loss $\ell(\langle u_t, v \rangle, p_t)$ for some loss function $\ell$. The learner's goal is to minimize their total loss over the course of $T$ rounds.
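To make the interaction model concrete, the following is a minimal simulator of the contextual search protocol under the symmetric loss $\ell(\theta, p) = |\theta - p|$ studied below. The "midpoint" learner it uses is a naive bisection-over-the-knowledge-set baseline chosen only for illustration; it is not the intrinsic-volumes algorithm of the paper, and the random contexts stand in for the adversarial $u_t$.

```python
# Sketch of the contextual search feedback model (assumption: naive midpoint
# learner, random contexts; NOT the paper's intrinsic-volumes algorithm).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, T = 3, 50
v = rng.uniform(0.0, 1.0, size=d)        # hidden vector v in [0,1]^d (unknown to the learner)

A_ub, b_ub = [], []                      # halfspace cuts defining the learner's knowledge set
total_loss = 0.0

for t in range(T):
    u = rng.normal(size=d)               # context u_t (random here; the paper allows adversarial contexts)
    u /= np.linalg.norm(u)

    # Learner: compute the range of <u, w> over the current knowledge set
    # with two LPs, then guess the midpoint of that interval.
    def extreme(c):
        res = linprog(c,
                      A_ub=np.array(A_ub) if A_ub else None,
                      b_ub=np.array(b_ub) if b_ub else None,
                      bounds=[(0.0, 1.0)] * d, method="highs")
        return res.fun

    lo = extreme(u)                      # min <u, w> over the knowledge set
    hi = -extreme(-u)                    # max <u, w> over the knowledge set
    p = (lo + hi) / 2.0                  # guess p_t

    theta = float(u @ v)                 # true value <u_t, v>
    total_loss += abs(theta - p)         # symmetric loss |theta - p_t|

    # Binary feedback: the learner only learns whether p_t < <u_t, v>,
    # and keeps the corresponding halfspace of the knowledge set.
    if p < theta:
        A_ub.append(-u); b_ub.append(-p)   # keep {w : <u, w> >= p}
    else:
        A_ub.append(u);  b_ub.append(p)    # keep {w : <u, w> <= p}

print(f"total symmetric loss over T={T} rounds: {total_loss:.4f}")
```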
We present an algorithm for the contextual search problem for the symmetric loss function $\ell(\theta, p) = |\theta - p|$ that achieves $O_d(1)$ total loss. We present a new algorithm for the dynamic pricing problem (which can be realized as a special case of the contextual search problem) that achieves $O_d(\log\log T)$ total loss, improving on the previous best known upper bounds of $O_d(\log T)$ and matching the known lower bounds (up to a polynomial dependence on $d$). Both algorithms make significant use of ideas from the field of integral geometry, most notably the notion of intrinsic volumes of a convex set. To the best of our knowledge this is the first application of intrinsic volumes to algorithm design.