Metrics and continuity in reinforcement learning

Charline Le Lan
Marc G. Bellemare
AAAI 2021 (2021)

Abstract

Reinforcement learning techniques are being applied to increasingly large systems where it becomes untenable to maintain direct estimates for individual states, in particular for continuous-state systems. Instead, researchers often leverage state similarity (whether implicitly or explicitly) to build models that can generalize well from a limited set of samples. The notion of state similarity used is thus of crucial importance, as it directly affects the quality of the approximations and the performance of the algorithms. Indeed, a number of works have investigated, both theoretically and empirically, how best to construct these neighborhoods and topologies. However, the choice of metric is not always clear and is often not fully specified when new algorithms are introduced. In this paper we aim to clarify the landscape of existing metrics and provide guidelines for the choice of metric when designing or implementing algorithms. We do this by first introducing a unified formalism for specifying these topologies through the lens of metrics, or distance measures, and by clarifying the relationship between them. We establish a hierarchy amongst the different metrics and examine their theoretical implications for the Markov Decision Process (MDP) specifying the reinforcement learning problem. We complement our theoretical results with empirical evaluations showcasing the differences between the metrics considered.
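To make the idea of a state metric concrete, one well-known instance in this literature is the bisimulation metric, which compares states through reward differences and a recursive term over their successor states. The sketch below is an illustrative toy, not code from the paper: it iterates a bisimulation-style update to its fixed point on a small deterministic MDP whose reward and transition tables are made up for the example, and the weights c_r and c_t are arbitrary choices.

```python
import numpy as np

# Hypothetical 4-state, 2-action deterministic MDP (values invented for illustration).
n_states, n_actions = 4, 2
R = np.array([[0.0, 1.0],   # R[s, a]: immediate reward
              [0.0, 1.0],
              [0.5, 0.0],
              [1.0, 0.0]])
P = np.array([[1, 2],       # P[s, a]: deterministic next state
              [1, 3],
              [3, 0],
              [3, 3]])

c_r, c_t = 1.0, 0.9  # weights on the reward and transition terms

d = np.zeros((n_states, n_states))
for _ in range(200):  # fixed-point iteration of the metric update
    d_new = np.zeros_like(d)
    for s in range(n_states):
        for t in range(n_states):
            # With deterministic transitions, the Wasserstein term reduces to
            # the current metric evaluated at the two successor states.
            gaps = [c_r * abs(R[s, a] - R[t, a]) + c_t * d[P[s, a], P[t, a]]
                    for a in range(n_actions)]
            d_new[s, t] = max(gaps)
    if np.max(np.abs(d_new - d)) < 1e-8:
        d = d_new
        break
    d = d_new

print(np.round(d, 3))  # pairwise state distances under this illustrative metric
```

States assigned a small distance by such a metric behave similarly in terms of rewards and dynamics, which is the kind of structural guarantee the paper's hierarchy of metrics is about; other metrics (e.g., plain Euclidean distance on state features) need not provide it.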