- Liam Li
- Kevin Jamieson
- Giulia DeSalvo
- Afshin Rostamizadeh
- Ameet Talwalkar
Abstract
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.
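The resource-allocation scheme the abstract describes can be summarized in a short sketch. The Python below is a minimal illustration, not the paper's reference implementation: `get_config` and `run_then_return_loss` are hypothetical stand-ins for sampling a random configuration and for training it under a given resource budget, and the defaults R = 81, η = 3 follow commonly used values from the paper's examples.

```python
import math

def hyperband(get_config, run_then_return_loss, max_resource=81, eta=3):
    """Sketch of the Hyperband bracket structure (assumptions noted above).

    get_config():                samples one random hyperparameter configuration
    run_then_return_loss(c, r):  trains config c with r units of resource
                                 (e.g. epochs) and returns a validation loss
    """
    s_max = int(math.log(max_resource, eta))
    budget = (s_max + 1) * max_resource  # total resource per bracket

    best = (float("inf"), None)  # (loss, config) of the incumbent
    for s in reversed(range(s_max + 1)):
        # Initial number of configurations and resource per configuration.
        n = int(math.ceil(budget / max_resource * eta**s / (s + 1)))
        r = max_resource * eta**(-s)

        configs = [get_config() for _ in range(n)]
        # Successive halving: repeatedly discard the worst performers and
        # give the survivors eta times more resource.
        for i in range(s + 1):
            n_i = int(n * eta**(-i))
            r_i = r * eta**i
            losses = [run_then_return_loss(c, int(r_i)) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda t: t[0])
            best = min(best, ranked[0], key=lambda t: t[0])
            configs = [c for _, c in ranked[: max(1, int(n_i / eta))]]
    return best
```

Each outer iteration (a "bracket") trades off the number of configurations against the resource each one initially receives: the most aggressive bracket (s = s_max) starts many configurations with minimal resource and early-stops most of them, while s = 0 reduces to plain random search at full resource.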