Inequity aversion improves cooperation in intertemporal social dilemmas

Edward Hughes
Joel Z Leibo
Matthew Phillips
Karl Paul Tuyls
Antonio García Castañeda
Iain Robert Dunning
Tina Zhu
Kevin Robert McKee
Raphael Koster
Heather Roff
Thore Graepel
NeurIPS (2018)

Abstract

Groups of humans are often able to find ways to cooperate with one another in complex, temporally extended social dilemmas. Models based on behavioral economics are only able to explain this phenomenon for unrealistic stateless matrix games. Recently, multi-agent reinforcement learning has been applied to generalize social dilemma problems to temporally and spatially extended Markov games. However, this has not yet generated an agent that learns to cooperate in social dilemmas as humans do. A key insight is that many, but not all, human individuals have inequity averse social preferences. This promotes a particular resolution of the matrix game social dilemma wherein inequity-averse individuals are personally pro-social and punish defectors. Here we extend this idea to Markov games and show that it promotes cooperation in several types of sequential social dilemma, via a profitable interaction with policy learnability. In particular, we find that inequity aversion improves temporal credit assignment for the important class of intertemporal social dilemmas. These results help explain how large-scale cooperation may emerge and persist.
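As a rough illustration of the mechanism the abstract describes, the sketch below folds a Fehr-Schmidt-style inequity penalty into each agent's reward in a multi-agent Markov game, comparing temporally smoothed reward traces so that inequity is judged over an episode rather than at a single step. The function name, smoothing scheme, and parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def inequity_averse_rewards(extrinsic, smoothed, alpha=5.0, beta=0.05,
                            gamma=0.99, lam=0.975):
    """Apply a Fehr-Schmidt-style inequity penalty to per-agent rewards.

    extrinsic: shape (N,) environment rewards for the current timestep.
    smoothed:  shape (N,) temporally smoothed reward traces from the
               previous timestep (illustrative smoothing scheme).
    Returns (subjective_rewards, updated_smoothed_traces).
    """
    n = extrinsic.shape[0]
    # Update the smoothed reward trace (an exponential average of past rewards).
    smoothed = gamma * lam * smoothed + extrinsic

    # Pairwise differences of smoothed rewards: diff[i, j] = e_j - e_i.
    diff = smoothed[None, :] - smoothed[:, None]

    # Disadvantageous inequity ("envy"): others are ahead of agent i.
    disadvantage = np.maximum(diff, 0.0).sum(axis=1)
    # Advantageous inequity ("guilt"): agent i is ahead of the others.
    advantage = np.maximum(-diff, 0.0).sum(axis=1)

    subjective = (extrinsic
                  - alpha / (n - 1) * disadvantage
                  - beta / (n - 1) * advantage)
    return subjective, smoothed

# Toy usage: one of three agents collects a reward.
rewards = np.array([1.0, 0.0, 0.0])
trace = np.zeros(3)
subjective, trace = inequity_averse_rewards(rewards, trace)
# The collecting agent is slightly penalised by the guilt term;
# the other two agents are penalised by the envy term.
```

In this shaping, the envy term gives agents that fall behind an immediate incentive to act (including to punish defectors), while the guilt term discourages exploiting cooperators; because the penalties are computed from smoothed traces, they provide a denser learning signal than the delayed collective outcome alone, which is one way to read the abstract's claim about improved temporal credit assignment.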