More Robust Doubly Robust Off-policy Evaluation

Mehrdad Farajtabar
Mohammad Ghavamzadeh
Yinlam Chow
ICML 2018


We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. In particular, we focus on the doubly robust (DR) estimator, a hybrid off-policy value estimator that is unbiased and often has lower variance than traditional importance sampling (IS) estimators, in sequential decision making. While Jiang & Li (2016) already proposed the use of this estimator in RL, the important problem of how to properly choose the model parameters in the DR estimator remains unresolved. In this work, we propose a novel methodology for designing the model parameters in DR estimation so as to minimize variance; we term the resulting estimator the more robust doubly robust (MRDR) estimator. We further prove the asymptotic optimality of this estimator within the class of consistent and asymptotically normal estimators, and finally illustrate the improved accuracy of the MRDR estimator in several contextual bandit and RL benchmark experiments.
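To make the setting concrete, the following is a minimal sketch of a standard doubly robust estimator in the contextual bandit case, the one-step special case of the sequential setting studied in the paper. It is not the MRDR estimator itself: the function names and signature are illustrative, and `q_hat` stands for an arbitrary reward model whose parameters MRDR would choose to minimize variance.

```python
import numpy as np

def dr_estimate(actions, rewards, behavior_probs, target_probs, q_hat):
    """Doubly robust off-policy value estimate for a contextual bandit.

    actions        : (n,)   logged actions, integer indices in [0, K)
    rewards        : (n,)   observed rewards for the logged actions
    behavior_probs : (n,)   mu(a_i | x_i), behavior-policy probability of the logged action
    target_probs   : (n, K) pi(. | x_i), target-policy probabilities over all actions
    q_hat          : (n, K) model-based reward estimates for every action
    """
    idx = np.arange(len(actions))
    # importance weight of the logged action
    rho = target_probs[idx, actions] / behavior_probs
    # model-based term: expected estimated reward under the target policy
    baseline = (target_probs * q_hat).sum(axis=1)
    # IS correction: reweighted residual of the model on the logged action
    correction = rho * (rewards - q_hat[idx, actions])
    return float(np.mean(baseline + correction))
```

The estimate stays unbiased for any choice of `q_hat` (the correction term has zero mean under the behavior policy), while a more accurate model shrinks the residuals and hence the variance; MRDR pushes this further by fitting the model parameters explicitly to minimize the estimator's variance rather than the model's prediction error.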