Google Research

Affine-Invariant Online Optimization

  • Roi Livni
  • Tomer Koren
NIPS 2017 (to appear)


We present a new affine-invariant optimization algorithm called \emph{Online Lazy Newton}. The algorithm is a modification of the Online Newton Step algorithm. The convergence rate of Online Lazy Newton is independent of the conditioning of the problem; instead, its performance depends on the best possible preconditioning of the problem in retrospect and on its \emph{intrinsic} dimensionality. As an application, we show that Online Lazy Newton achieves the optimal $\widetilde{\Theta}(\sqrt{rT})$ regret for the low rank experts problem, improving by a $\sqrt{r}$ factor over the previously best known bound and resolving an open problem posed by \citet{hazan2016online}. Our rate also applies in the \emph{supervised low rank experts} setting proposed in \cite{foster2017zigzag}, where a regret of $O(\sqrt{rT}+\log N\log r)$ was previously obtained.
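To illustrate the flavor of the method, here is a minimal sketch of a lazy (FTRL-style) Newton update for the unconstrained case. This is an illustrative assumption about the algorithm's structure based on the abstract's description (a "lazy" variant of Online Newton Step), not the paper's exact method: each iterate is recomputed from the full cumulative gradient and the cumulative outer-product matrix, rather than updated incrementally; the function name, step size `eta`, and regularizer `eps` are hypothetical.

```python
import numpy as np

def online_lazy_newton(grads, eta=1.0, eps=1e-8):
    """Hypothetical sketch of a lazy Newton-style update (unconstrained).

    grads: list of observed gradient vectors g_1, ..., g_T.
    Maintains A_t = eps*I + sum_{s<=t} g_s g_s^T and the cumulative
    gradient, and recomputes the iterate from the full history each
    round (the "lazy" / dual-averaging style), instead of the
    incremental update used by Online Newton Step.
    """
    d = grads[0].shape[0]
    A = eps * np.eye(d)       # regularized cumulative outer-product matrix
    g_sum = np.zeros(d)       # cumulative gradient
    iterates = []
    for g in grads:
        A += np.outer(g, g)
        g_sum += g
        # lazy step: solve against the entire accumulated history
        x = -eta * np.linalg.solve(A, g_sum)
        iterates.append(x)
    return iterates
```

Because each iterate is a function of $A_t^{-1}$ applied to the cumulative gradient, the update adapts to the observed gradient geometry, which is the intuition behind the preconditioning-independent guarantee described above.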
