Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes
Abstract
Offline reinforcement learning (RL) on large, heterogeneous datasets with high-capacity models can, in principle, lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges when scaling up model capacity. Drawing on lessons from these works, we re-examine previous design choices and find that with appropriate choices (wider ResNets, cross-entropy-based distributional backups, and feature normalization), offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a test-bed for scaling and generalization, we train a single policy on 40 games with near-human performance using up to 100M-parameter networks, finding that model performance scales favorably with capacity. In contrast to prior work, we substantially extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-normalized score). Compared to supervised approaches, offline RL scales similarly with model capacity and achieves better performance, especially when the dataset is suboptimal. Finally, we show that such offline Q-functions learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing representation learning approaches.
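As a rough illustration of the cross-entropy-based distributional backup named among the design choices above, the sketch below implements a C51-style categorical backup: the Bellman target distribution is projected onto a fixed support of return atoms and the predicted distribution is trained with cross-entropy against it. This is a minimal NumPy sketch under assumptions of our own; names such as num_atoms, v_min, v_max, and normalize_features are illustrative, and normalize_features reflects one plausible reading of "feature normalization" (L2-normalizing penultimate-layer features), not the paper's released implementation.

```python
# Minimal sketch of a C51-style cross-entropy distributional backup (assumed details).
import numpy as np

num_atoms, v_min, v_max = 51, -10.0, 10.0          # illustrative support parameters
support = np.linspace(v_min, v_max, num_atoms)      # fixed return atoms z_1..z_N
delta_z = (v_max - v_min) / (num_atoms - 1)

def project_target(reward, discount, next_probs):
    """Project the Bellman-shifted target distribution back onto the fixed support."""
    tz = np.clip(reward + discount * support, v_min, v_max)   # shifted/scaled atoms
    b = (tz - v_min) / delta_z                                  # fractional atom index
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    target = np.zeros(num_atoms)
    for j in range(num_atoms):                                  # split mass between neighbors
        target[lower[j]] += next_probs[j] * (upper[j] - b[j])
        target[upper[j]] += next_probs[j] * (b[j] - lower[j])
        if lower[j] == upper[j]:                                # atom landed exactly on the grid
            target[lower[j]] += next_probs[j]
    return target

def cross_entropy_backup_loss(logits, reward, discount, next_probs):
    """Cross-entropy between the projected target and the predicted return distribution."""
    target = project_target(reward, discount, next_probs)
    z = logits - logits.max()                                   # numerically stable log-softmax
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -np.sum(target * log_probs)

def normalize_features(phi, eps=1e-6):
    """One hedged reading of 'feature normalization': L2-normalize penultimate features."""
    return phi / (np.linalg.norm(phi) + eps)

# Example: one transition with a uniform next-state distribution.
logits = np.random.randn(num_atoms)
next_probs = np.ones(num_atoms) / num_atoms
print(cross_entropy_backup_loss(logits, reward=1.0, discount=0.99, next_probs=next_probs))
```

Replacing the usual mean-squared TD error with this cross-entropy objective is one of the abstract's claimed ingredients for making offline Q-learning scale with model capacity; the projection step above is what makes the categorical target a valid distribution on the fixed support.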