
Optimal Rates for Random Order Online Optimization

Uri Sherman
NeurIPS 2021 (to appear)

Abstract

We study online convex optimization in the random order model, recently proposed by Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented to the online algorithm in a uniformly random order. We focus on the scenario where the cumulative loss function is (strongly) convex, yet individual loss functions are smooth but might be non-convex. Our algorithms achieve the optimal bounds and significantly outperform the results of Garber et al. (2020), completely removing the dimension dependence and improving the dependence on the strong convexity parameter. Our analysis relies on novel connections between algorithmic stability and generalization for without-replacement sampling, analogous to those studied in the with-replacement i.i.d. setting, as well as on a refined average stability analysis of stochastic gradient descent.
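To make the setting concrete, below is a minimal sketch of the random order model with plain SGD. It assumes toy squared-distance losses f_i(x) = ||x - a_i||^2 / 2, chosen purely so the cumulative loss is strongly convex while each individual loss stays simple; the paper itself covers general smooth, possibly non-convex losses. The function name `run_random_order_sgd` and the step-size schedule are hypothetical choices for illustration, not the paper's algorithm.

```python
import numpy as np

def run_random_order_sgd(anchors, x0, lr, rng):
    """Plain SGD over losses presented in a uniformly random order.

    Illustrative only: losses are f_i(x) = ||x - anchors[i]||^2 / 2, so the
    cumulative loss is strongly convex with minimizer at the anchors' mean.
    """
    x = np.asarray(x0, dtype=float)
    per_round_losses = []
    order = rng.permutation(len(anchors))  # adversarial losses, random order
    for t, i in enumerate(order, start=1):
        a = anchors[i]
        per_round_losses.append(0.5 * np.sum((x - a) ** 2))
        grad = x - a               # gradient of f_i at the current iterate
        x = x - lr(t) * grad       # step with a (possibly time-varying) rate
    return x, per_round_losses

rng = np.random.default_rng(0)
anchors = rng.normal(size=(100, 5))  # one loss function per anchor point
x_final, losses = run_random_order_sgd(
    anchors, x0=np.zeros(5), lr=lambda t: 1.0 / t, rng=rng
)
print(x_final)  # approaches the mean of the anchors, the cumulative minimizer
```

The key point the sketch captures is that the losses themselves are fixed in advance (possibly adversarially), and only their presentation order is random; this is what the paper's without-replacement stability arguments exploit.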
