Minimax Optimization with Smooth Algorithmic Adversaries

Chi Jin
Lillian Ratliff
Tanner Fiez
International Conference on Learning Representations (ICLR) 2022

Abstract

We consider two-player zero-sum games, or minimax optimization $\min_x \max_y f(x,y)$, in the Stackelberg/sequential setting where the $x$-player goes first and the $y$-player goes second, and where $f$ is nonconvex in $x$ and nonconcave in $y$. While such problems arise in several popular machine learning paradigms, such as training generative adversarial networks (GANs) and adversarially robust models, there is very little theoretical understanding of efficiently computable notions of optimality in this setting. This paper stems from the intuition that, since nonconcave maximization is NP-hard in general, the $x$-player should aim to play against the \emph{algorithm} employed by the $y$-player for maximization, rather than against full maximization over $y$. The key \emph{conceptual} contribution of this paper is that, under the assumption that the $x$-player is aware of the algorithm employed by the $y$-player for maximization, we can reformulate the given problem as a nonconvex-\textbf{concave} minimax optimization problem. The key \emph{technical} contributions of this paper are: (1) if the $y$-player employs popular algorithms such as (stochastic) gradient ascent or Nesterov's accelerated gradient ascent, the resulting nonconvex-\emph{concave} minimax optimization problem is \emph{smooth}, and (2) we leverage recent results for smooth, nonconvex-concave minimax optimization problems to propose new algorithms that find an appropriate local optimum for these problems. Experiments confirm our theoretical findings and demonstrate that the proposed approach works well in practice.
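
To make the conceptual idea concrete, the sketch below (our illustration, not the authors' code) shows an $x$-player that optimizes against the $y$-player's algorithm, here assumed to be $K$ steps of gradient ascent from a fixed initialization; the payoff `f`, the step sizes, and the iteration counts are illustrative placeholders rather than quantities fixed by the paper.

```python
# A minimal sketch (assumed setup, not the paper's implementation): the x-player
# plays against the y-player's *algorithm* -- here K steps of gradient ascent
# from a fixed initialization -- by differentiating through the unrolled updates.
# f, K, eta_y, eta_x, and the initial points are all illustrative choices.
import jax
import jax.numpy as jnp

def f(x, y):
    # Toy payoff that is nonconvex in x and nonconcave in y (illustration only).
    return jnp.sin(x @ y) + 0.1 * jnp.sum(x ** 2) - 0.1 * jnp.sum(y ** 2)

K = 10        # number of ascent steps the y-player runs (assumed)
eta_y = 0.1   # y-player's step size (assumed)
eta_x = 0.05  # x-player's step size (assumed)

def y_algorithm(x, y0):
    # The y-player's response: K steps of gradient ascent on f(x, .).
    y = y0
    for _ in range(K):
        y = y + eta_y * jax.grad(f, argnums=1)(x, y)
    return y

def surrogate(x, y0):
    # The x-player's objective: f evaluated at the algorithm's output,
    # rather than at the exact (NP-hard) maximizer over y.
    return f(x, y_algorithm(x, y0))

# The x-player runs gradient descent on the surrogate; jax.grad backpropagates
# through the unrolled ascent steps of the y-player's algorithm.
x = jnp.ones(3)
y0 = jnp.zeros(3)
for _ in range(200):
    x = x - eta_x * jax.grad(surrogate)(x, y0)
```

This sketch only illustrates the core "play against the adversary's algorithm" idea; the paper's full nonconvex-concave reformulation and the proposed algorithms are developed in the body of the work.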