Cold-Start Reinforcement Learning with Softmax Policy Gradients
Abstract
Policy-gradient approaches to reinforcement learning have two common and
undesirable overhead procedures, namely warm-start training and sample variance
reduction. In this paper, we describe a reinforcement learning method based on
a softmax policy that requires neither of these procedures. Our method combines
the advantages of policy-gradient methods with the efficiency and simplicity of
maximum-likelihood approaches. We apply this new cold-start reinforcement
learning method to training sequence generation models for structured output
prediction problems. Empirical evidence validates this method on automatic
summarization and image captioning tasks.
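As context for the setup the abstract refers to, the following is a minimal illustrative sketch of a conventional softmax policy-gradient (REINFORCE) objective for sequence generation, written in PyTorch. It shows the standard baseline approach whose two overheads (warm-start training and sample-variance reduction) the paper's method is stated to remove; it is not the paper's own loss. The `decoder`, `reward_fn`, `batch`, and `max_len` names are hypothetical placeholders.

```python
# Minimal sketch of a standard softmax policy-gradient (REINFORCE)
# objective for sequence generation. This illustrates the conventional
# setup the abstract contrasts with, NOT the paper's proposed loss.
# `decoder`, `reward_fn`, and `batch` are hypothetical placeholders.
import torch

def reinforce_loss(decoder, reward_fn, batch, max_len=20):
    """Sample a sequence from the softmax policy and weight its
    log-likelihood by a sequence-level reward (e.g. ROUGE or CIDEr)."""
    state = decoder.init_state(batch)        # hypothetical decoder API
    token = batch.bos_tokens                 # hypothetical start tokens
    log_probs, tokens = [], []
    for _ in range(max_len):
        logits, state = decoder.step(token, state)   # hypothetical API
        dist = torch.distributions.Categorical(logits=logits)
        token = dist.sample()                # sample from softmax policy
        log_probs.append(dist.log_prob(token))
        tokens.append(token)
    seq_log_prob = torch.stack(log_probs, dim=1).sum(dim=1)
    reward = reward_fn(tokens, batch)        # sequence-level reward tensor
    # REINFORCE gradient estimator: -E[ R(y) * grad log p(y) ].
    return -(reward.detach() * seq_log_prob).mean()
```

In conventional practice this estimator is made usable by subtracting a learned or moving-average baseline from `reward` (sample-variance reduction) and by initializing `decoder` from a maximum-likelihood-trained model (warm start); these are precisely the two overhead procedures the abstract says the proposed cold-start method does without.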