# Pratik Worah

My research focuses on machine learning and mathematics, specifically randomized algorithms and probability theory. I have applied tools from these areas to problems in economics, optimization, and biology.

### Authored Publications

**The Landscape of Nonconvex-Nonconcave Minimax Optimization**

Ben Grimmer, Haihao (Sean) Lu

Mathematical Programming (Springer Nature), 2023

**Abstract:**
Minimax optimization has become a central tool for modern machine learning with applications in robust optimization, game theory and training GANs. These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties posed by nonconvex-nonconcave structures.
We break this historical barrier by identifying three regions of nonconvex-nonconcave bilinear minimax problems and characterizing their different solution paths. For problems where the interaction between the agents is sufficiently strong, we derive global linear convergence guarantees. Conversely, when the interaction between the agents is fairly weak, we derive local linear convergence guarantees. Between these two settings, we characterize the types of cycles that can occur, which prevent convergence of the solution path.
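A minimal toy illustration of the cycling behavior the abstract mentions (not the paper's method or regimes): on a purely bilinear objective f(x, y) = a·x·y, simultaneous gradient descent-ascent never converges to the saddle point at the origin; each step in fact multiplies the distance to it by sqrt(1 + (lr·a)²).

```python
import numpy as np

def gda(a, lr=0.1, steps=100, x0=1.0, y0=1.0):
    """Simultaneous gradient descent-ascent on f(x, y) = a * x * y.

    Returns the distance of each iterate from the saddle point (0, 0).
    """
    x, y = x0, y0
    norms = []
    for _ in range(steps):
        gx = a * y  # df/dx
        gy = a * x  # df/dy
        # Descend in x, ascend in y, with simultaneous updates.
        x, y = x - lr * gx, y + lr * gy
        norms.append(np.hypot(x, y))
    return norms

# The iterates spiral outward: a short calculation shows the norm grows
# by a factor of sqrt(1 + (lr * a)^2) at every step.
norms = gda(a=1.0)
```

Stronger interaction (larger `a`) here only makes the spiral faster; the convergent regimes the paper identifies require structure beyond this bare bilinear toy.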

**Learning Rate Schedules in the Presence of Distribution Shift**

Adel Javanmard

Proceedings of the 40th International Conference on Machine Learning (2023), pp. 9523-9546

**Abstract:**
We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution. We fully characterize the optimal learning rate schedule for online linear regression via a novel analysis with stochastic differential equations. For general convex loss functions, we propose new learning rate schedules that are robust to distribution shift, and we give upper and lower bounds for the regret that only differ by constants. For non-convex loss functions, we define a notion of regret based on the gradient norm of the estimated models and propose a learning schedule that minimizes an upper bound on the total expected regret. Intuitively, one expects changing loss landscapes to require more exploration, and we confirm that optimal learning rate schedules typically increase in the presence of distribution shift. Finally, we provide experiments for high-dimensional regression models and neural networks to illustrate these learning rate schedules and their cumulative regret.
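A toy sketch of the abstract's central intuition (not the paper's derived schedules): when the data distribution drifts, a classical decaying learning rate stops tracking the moving target, while a constant rate keeps up. Here SGD estimates the mean of a stream whose true mean shifts a little each step; the drift rate, noise level, and schedules are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_drifting_mean(lr_schedule, steps=2000, drift=0.01, noise=1.0):
    """Online SGD on the squared loss (theta - x_t)^2 / 2, where the
    data mean drifts by `drift` per step (a toy distribution shift).

    Returns the average squared tracking error over the run.
    """
    theta, mean, err = 0.0, 0.0, 0.0
    for t in range(steps):
        mean += drift                              # the distribution shifts
        x = mean + noise * rng.standard_normal()   # one fresh sample
        theta -= lr_schedule(t) * (theta - x)      # SGD step on the gradient
        err += (theta - mean) ** 2
    return err / steps

# The 1/t decay (optimal for a fixed distribution) falls behind the
# drifting mean; a constant rate tracks it much more closely, consistent
# with the abstract's point that shift calls for larger learning rates.
decay_err = track_drifting_mean(lambda t: 1.0 / (t + 1))
const_err = track_drifting_mean(lambda t: 0.1)
```

With the `1/t` schedule the iterate is just the running average of all samples seen so far, which lags a linearly drifting mean by ever more; the constant rate trades a small noise floor for bounded lag.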