- Mathieu Lauriere
- Sarah Perrin
- Sertan Girgin
- Paul Muller
- Ayush Jain
- Theophile Cabannes
- Georgios Piliouras
- Julien Perolat
- Romuald Elie
- Olivier Pietquin
- Matthieu Geist
Abstract
Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents. Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods. One limiting factor to further scaling up with RL is that existing algorithms to solve MFGs require the mixing of approximated quantities (such as strategies or q-values), which is non-trivial in the case of non-linear function approximation (e.g., neural networks). We propose two methods to address this shortcoming. The first learns a mixed strategy by distilling historical data into a neural network and is applied to the Fictitious Play algorithm. The second is an online mixing method based on regularization that does not require memorizing historical data or previous estimates; it is used to extend Online Mirror Descent. We demonstrate numerically that these methods efficiently enable the use of Deep RL algorithms to solve various MFGs. In addition, we show that, thanks to generalization, the resulting algorithms outperform their state-of-the-art (SotA) counterparts.
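To illustrate the first idea described above, here is a minimal sketch (not the authors' code) of averaging strategies by distillation in a Fictitious Play loop: instead of storing every past best response, a single network is trained by supervised learning (cross-entropy) on state-action pairs collected from the history of past best responses, so that it approximates their mixture. The toy state/action sizes, network architecture, and the way `history` is filled are hypothetical placeholders.

```python
import torch
import torch.nn as nn

N_STATES, N_ACTIONS = 10, 4  # toy, discrete state and action spaces

class PolicyNet(nn.Module):
    """Network representing the averaged (mixed) strategy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_STATES, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, states_onehot):
        return self.net(states_onehot)  # logits over actions

def distill_average_policy(history, epochs=50, lr=1e-3):
    """Fit one network to (state, action) pairs gathered from all past
    best responses, approximating their uniform mixture."""
    avg_policy = PolicyNet()
    opt = torch.optim.Adam(avg_policy.parameters(), lr=lr)
    states = nn.functional.one_hot(history[:, 0], N_STATES).float()
    actions = history[:, 1]
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(avg_policy(states), actions)
        loss.backward()
        opt.step()
    return avg_policy

# Hypothetical usage: in a Fictitious Play loop, `history` would be extended
# at each iteration with trajectories generated by the newly learned best
# response; here it is filled with random placeholder data.
history = torch.randint(0, N_STATES, (256, 2))
history[:, 1] = torch.randint(0, N_ACTIONS, (256,))
mixed = distill_average_policy(history)
print(mixed(nn.functional.one_hot(torch.tensor([0]), N_STATES).float()))
```

The second method avoids even this replay of historical data: it mixes online through a regularization term added to the value-function update, extending Online Mirror Descent; it is not sketched here.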