Generalization in Mean Field Games by Learning Master Policies

Sarah Perrin
Mathieu Lauriere
Julien Perolat
Romuald Elie
Matthieu Geist
Olivier Pietquin
AAAI (2022)

In recent years, Mean Field Games (MFGs) have attracted growing interest in Multi-Agent Reinforcement Learning because they allow algorithms to scale to millions of agents. However, existing reinforcement learning methods for MFGs are limited to learning an optimal policy for a single initial population distribution. Here, we study policies that enable a typical agent to react optimally to any population distribution. In reference to the Master equation in MFGs, we coin the term "Master policies" to describe them, and we prove that a single Master policy leads to a Nash equilibrium whatever the initial distribution is. Moreover, we propose a method to learn such Master policies. Our approach relies on three ingredients: an enlargement of the observation space by adding the current population distribution, a deep neural network-based approximation of the Master policy, and a training algorithm based on reinforcement learning. On several numerical examples, we illustrate not only the correctness of the learned Master policy but also its generalization capabilities beyond the training set of population distributions.
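The first ingredient above, enlarging the observation space, can be illustrated with a minimal sketch (not the authors' implementation). Over a hypothetical finite state space, the policy receives both the agent's own state and the current population distribution `mu`; in the paper this map is a trained deep network, while here a fixed random linear model stands in for it purely to show the interface:

```python
# Sketch of a "Master policy" interface: the policy conditions on the
# current population distribution mu, not only on the agent's own state.
# All sizes and weights below are illustrative assumptions.
import math
import random

N_STATES = 4      # finite state space (assumption for this sketch)
N_ACTIONS = 2     # e.g. two possible moves

random.seed(0)
# Hypothetical fixed weights: (2 * N_STATES) inputs -> N_ACTIONS logits.
# In the paper, this map is a deep network trained by reinforcement learning.
W = [[random.uniform(-1, 1) for _ in range(2 * N_STATES)]
     for _ in range(N_ACTIONS)]

def master_policy(state, mu):
    """Return action probabilities given the agent's state and distribution mu."""
    one_hot = [1.0 if s == state else 0.0 for s in range(N_STATES)]
    x = one_hot + list(mu)                       # enlarged observation
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]     # numerically stable softmax
    z = sum(exps)
    return [e / z for e in exps]

# The same single policy reacts to two different population distributions.
uniform = [1.0 / N_STATES] * N_STATES
peaked = [1.0, 0.0, 0.0, 0.0]
print(master_policy(0, uniform))
print(master_policy(0, peaked))
```

Because the distribution is part of the input, one set of weights can respond differently to different populations, which is what allows a single policy to aim for a Nash equilibrium from any initial distribution.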