Google Research

Learning Social Learning

NeurIPS Workshop on Cooperative AI (2020)

Abstract

Social learning is a key component of human and animal intelligence. By taking cues from the behavior of experts, individual social learners can learn faster and rapidly adapt to new circumstances. This paper investigates the conditions under which independent model-free reinforcement learning (RL) agents engage in social learning. We introduce a new multi-agent environment specifically designed to elicit social learning by controlling the cost associated with individual exploration. We find that, in most cases, vanilla model-free RL agents do not benefit from expert demonstrations in sparse-reward environments, even when exploration is expensive. We analyze the reasons for this deficiency and show that, by introducing a model-based auxiliary loss, we are able to train agents that use cues from expert behavior to solve hard-exploration tasks. The generalized social learning policy learned by these agents allows them not only to outperform the experts from which they learned, but also to achieve better zero-shot performance than solitary learners when deployed to a new environment with experts. In contrast, agents that have not learned to rely on social learning generalize poorly and do not succeed in the transfer task. Further, we find that social learners perform as well as solitary learners when no experts are present, showing that social learning has not impaired performance. Our results indicate that developing RL agents that can benefit from the knowledge of experts present in their environment can not only improve performance on the task at hand, but also improve their ability to generalize to new environments.
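The abstract does not spell out the form of the model-based auxiliary loss, so the following is a minimal sketch of one plausible instantiation in PyTorch: an actor-critic network augmented with a forward-dynamics head that predicts the next observation embedding from the current embedding and the agent's action. The class name `SocialLearner`, the layer sizes, and the embedding-prediction target are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SocialLearner(nn.Module):
    """Actor-critic agent with a forward-dynamics auxiliary head.

    A sketch of a model-based auxiliary loss: predicting the consequences
    of actions pushes the encoder to represent task-relevant cues in the
    observation, including the behavior of other agents.
    """

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)
        self.value_head = nn.Linear(hidden, 1)
        # Forward model: (embedding, one-hot action) -> predicted next embedding.
        self.dynamics = nn.Sequential(
            nn.Linear(hidden + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.n_actions = n_actions

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)
        return self.policy_head(z), self.value_head(z)

    def auxiliary_loss(self, obs, action, next_obs):
        """Next-embedding prediction error; `action` is a LongTensor of indices."""
        z = self.encoder(obs)
        with torch.no_grad():  # treat the target embedding as fixed
            z_next = self.encoder(next_obs)
        a = nn.functional.one_hot(action, self.n_actions).float()
        z_pred = self.dynamics(torch.cat([z, a], dim=-1))
        return nn.functional.mse_loss(z_pred, z_next)
```

In training, a term like this would typically be added to the usual RL objective with a small weight, e.g. `loss = rl_loss + beta * agent.auxiliary_loss(obs, action, next_obs)`, where `beta` is a tuning hyperparameter.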
