Christoph Dann
My research focuses on fundamental questions in reinforcement learning and the design of efficient algorithmic solutions for decision making under uncertainty. For more information, see my website cdann.net.
Research Areas
Authored Publications
POMDPs capture a broad class of decision making problems, but hardness results suggest that learning is intractable even in simple settings due to the inherent partial observability. However, in many realistic problems, more information is either revealed or can be computed at some point during the learning process. Motivated by diverse applications ranging from robotics to data center scheduling, we formulate a Hindsight Observable Markov Decision Process (HOMDP) as a POMDP where the latent states are revealed to the learner in hindsight and only during training. We introduce new algorithms for the tabular and function approximation settings that are provably sample-efficient with hindsight observability, even in POMDPs that would otherwise be statistically intractable. We give a lower bound showing that the tabular algorithm is optimal in its dependence on latent state and observation cardinalities.
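The hindsight-observability protocol is easiest to picture as a training-loop interface. Below is a minimal Python sketch under assumed interfaces: a hypothetical env whose reset()/step() return only observations online and whose reveal_latent_states() exposes the episode's latent states afterwards; the tabular counting is purely illustrative, not the paper's algorithm.

import numpy as np

# Minimal sketch of learning with hindsight observability (illustrative only).
# The environment interface (reset, step, reveal_latent_states) is hypothetical.

n_latent_states, n_actions, horizon = 10, 4, 20
trans_counts = np.zeros((n_latent_states, n_actions, n_latent_states))
reward_sums = np.zeros((n_latent_states, n_actions))

def run_episode(env, policy):
    obs = env.reset()
    steps = []
    for _ in range(horizon):
        a = policy(obs)                      # act on observations only
        obs, r, done = env.step(a)
        steps.append((a, r))
        if done:
            break
    # Hindsight observability: latent states are revealed only after the episode.
    latent = env.reveal_latent_states()      # length len(steps) + 1
    for (a, r), s, s_next in zip(steps, latent[:-1], latent[1:]):
        trans_counts[s, a, s_next] += 1      # estimate the latent-state model
        reward_sums[s, a] += r
    # A planner can now treat the empirical latent-state model like a fully
    # observed MDP and fit an observation-conditioned policy against it.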
Blackwell's celebrated theory measures approachability using the $\ell_2$ (Euclidean) distance. In many applications such as regret minimization, it is often more useful to study approachability under other distance metrics, most commonly the $\ell_\infty$ metric. However, the time and space complexity of the algorithms designed for $\ell_\infty$ approachability depend on the dimension of the space of the vectorial payoffs, which is often prohibitively large. We present a framework for converting high-dimensional $\ell_\infty$ approachability problems to low-dimensional \emph{pseudonorm} approachability problems, thereby resolving such issues. We first show that the $\ell_\infty$ distance between the average payoff and the approachability set can be equivalently defined as a \emph{pseudodistance} between a lower-dimensional average vector payoff and a new convex set we define. Next, we develop an algorithmic theory of pseudonorm approachability, analogous to previous work on norm approachability, showing that it can be achieved via online linear optimization (OLO) over a convex set given by the Fenchel dual of the unit pseudonorm ball. We then use this to show, modulo mild normalization assumptions, that there exists an $\ell_\infty$ approachability algorithm whose convergence is independent of the dimension of the original vector payoff. We further show that this algorithm runs in polynomial time, assuming that the original $\ell_\infty$-distance can be computed efficiently. We also give an $\ell_\infty$ approachability algorithm whose convergence is only logarithmic in that dimension, using an FTRL algorithm with a maximum-entropy regularizer. Finally, we illustrate the benefits of our framework by applying it to several problems in regret minimization.
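The final result above relies on FTRL with a maximum-entropy regularizer; over the probability simplex this regularizer admits a softmax closed form, which is where the logarithmic dependence on the dimension comes from. The Python sketch below shows only that standard building block, not the paper's pseudonorm reduction, and the toy payoff stream is made up.

import numpy as np

# Entropy-regularized FTRL over the probability simplex: a standard building block,
# not the paper's full pseudonorm-approachability reduction. With an entropy
# regularizer the FTRL iterate has a softmax closed form, which is what makes the
# regret depend only logarithmically on the dimension d.

def ftrl_entropy(cumulative_payoffs, eta):
    # argmax_{p in simplex} <p, G> + (1/eta) * entropy(p)  ==  softmax(eta * G)
    z = eta * cumulative_payoffs
    z = z - z.max()                       # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Toy usage on a made-up payoff stream (illustrative only).
rng = np.random.default_rng(0)
d, T = 100, 1000
eta = np.sqrt(np.log(d) / T)
G = np.zeros(d)
for _ in range(T):
    p = ftrl_entropy(G, eta)              # distribution played over the d coordinates
    payoff = rng.uniform(-1.0, 1.0, d)    # adversarial payoff vector would go here
    G += payoff                           # accumulate for the next round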
There is often a great degree of freedom in the reward design when formulating a task as a reinforcement learning (RL) problem. The choice of reward function has a significant impact on the learned policy and on how fast the algorithm converges to it. Typically, several iterations of specifying and learning with the reward function are necessary to find one that leads to sample-efficient learning of the desired behavior. In this work, we instead propose to directly pass multiple alternate reward formulations of the task to the RL agent. We show that natural extensions of action-elimination algorithms to multiple rewards achieve more favorable instance-dependent regret bounds than their single-reward counterparts, both in multi-armed bandits and in tabular Markov decision processes. Specifically, our bounds scale for each state-action pair with the inverse of the most favorable gap among all reward functions. This suggests that learning with multiple rewards can indeed be more sample-efficient, as long as the rewards agree on an optimal policy. We further prove that when the rewards do not agree on the optimal policy, multi-reward action elimination in multi-armed bandits still learns a policy that is good across all reward functions.
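As a rough illustration of the bandit case, the sketch below extends standard action elimination to M reward signals: an arm is dropped as soon as it is confidently suboptimal under any one of them, so each arm is charged its most favorable gap. This is a hedged sketch of the idea, with made-up confidence radii and a hypothetical pull interface, not necessarily the exact algorithm or constants analyzed in the paper.

import numpy as np

# Illustrative multi-reward action elimination in a K-armed bandit (a sketch of the
# idea, not necessarily the paper's exact algorithm). pull(a) is assumed to return a
# length-M vector containing one sample of every reward function for arm a.

def multi_reward_elimination(pull, K, M, n_phases, delta=0.05):
    active = list(range(K))
    sums = np.zeros((K, M))
    counts = np.zeros(K)
    for _ in range(n_phases):
        for a in active:                          # pull every surviving arm once
            sums[a] += pull(a)
            counts[a] += 1
        rad = np.sqrt(np.log(2 * K * M * n_phases / delta) / (2 * np.maximum(counts, 1)))
        means = sums / np.maximum(counts, 1)[:, None]
        lcb, ucb = means - rad[:, None], means + rad[:, None]
        for a in list(active):
            others = [b for b in active if b != a]
            if not others:
                return active
            for m in range(M):
                # Eliminate arm a once ANY reward confidently separates it from a rival.
                if ucb[a, m] < max(lcb[b, m] for b in others):
                    active.remove(a)
                    break
    return active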
Multiple-policy High-confidence Policy Evaluation
Mohammad Ghavamzadeh
International Conference on Artificial Intelligence and Statistics (2023), pp. 9470-9487
In reinforcement learning applications, we often want to accurately estimate the return of several policies of interest. We study this problem, multiple-policy high-confidence policy evaluation, where the goal is to estimate the return of all given target policies up to a desired accuracy with as few samples as possible. The natural approaches to this problem, i.e., evaluating each policy separately or estimating a model of the MDP, scale with the number of policies to evaluate or the size of the MDP, respectively. We present an alternative approach based on reusing samples from on-policy Monte-Carlo estimators and show that it is more sample-efficient in favorable cases. Specifically, we provide guarantees in terms of a notion of overlap of the set of target policies and shed light on when such an approach is indeed beneficial compared to existing methods.
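To make the sample-reuse idea concrete, here is a simplified sketch for deterministic target policies: a Monte-Carlo rollout collected under one policy is also an on-policy sample for every other target policy that takes the same actions at the visited states, so the more the policies overlap, the fewer rollouts are needed. The rollout interface is hypothetical and this is not necessarily the paper's estimator.

import numpy as np

# Illustrative sample reuse for evaluating several deterministic target policies
# (a simplification, not necessarily the paper's estimator). A trajectory rolled out
# under one policy also serves as an on-policy Monte-Carlo sample for every other
# target policy that would have chosen the same actions at the visited states.

def evaluate_policies(rollout, policies, n_rollouts):
    """rollout(pi) -> list of (state, action, reward); policies: state -> action maps."""
    returns = [[] for _ in policies]
    for k in range(n_rollouts):
        pi_behavior = policies[k % len(policies)]        # cycle through target policies
        traj = rollout(pi_behavior)
        G = sum(r for _, _, r in traj)
        for i, pi in enumerate(policies):
            # Reuse the trajectory for every policy consistent with all taken actions.
            if all(pi(s) == a for s, a, _ in traj):
                returns[i].append(G)
    return [np.mean(rs) if rs else None for rs in returns]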
We study optimization with bandit feedback when additional prior knowledge is provided to the learner in the form of an initial hint about the optimal action. We present a novel algorithm for stochastic linear bandits that uses this hint to improve its regret to $\tilde O(\sqrt{T})$ when the hint is accurate, while maintaining a minimax-optimal $\tilde O(d\sqrt{T})$ regret independent of the quality of the hint. Furthermore, we provide a Pareto frontier of tight tradeoffs between best-case and worst-case regret, with matching lower bounds. Perhaps surprisingly, our work shows that leveraging a hint yields provable gains without sacrificing worst-case performance, implying that our algorithm adapts to the quality of the hint for free. We also provide an extension of our algorithm to the case of $m$ initial hints, showing that we can achieve a $\tilde O(m^{2/3}\sqrt{T})$ regret.
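A hedged illustration of the best-of-both-worlds idea (not the paper's algorithm, and without its Pareto-optimal guarantees): keep playing the hinted action while the confidence intervals cannot rule it out as optimal, and fall back to a standard optimistic linear-bandit rule once they can. The action set, pull function, and confidence parameter below are all made up for the sketch.

import numpy as np

# Hedged illustration only (not the paper's algorithm): play the hinted action while
# the data cannot rule it out as optimal, and otherwise fall back to LinUCB, so an
# accurate hint saves exploration while an inaccurate one is eventually discarded.

def hinted_linear_bandit(actions, hint_idx, pull, T, lam=1.0, beta=2.0):
    """actions: (K, d) array of action vectors; pull(x) -> noisy reward for action x."""
    K, d = actions.shape
    A, b = lam * np.eye(d), np.zeros(d)              # ridge-regression statistics
    hint_ruled_out = False
    for _ in range(T):
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b
        width = beta * np.sqrt(np.einsum('ij,jk,ik->i', actions, A_inv, actions))
        ucb, lcb = actions @ theta + width, actions @ theta - width
        if ucb[hint_idx] < np.delete(lcb, hint_idx).max():
            hint_ruled_out = True                    # hint is confidently suboptimal
        k = int(np.argmax(ucb)) if hint_ruled_out else hint_idx
        x = actions[k]
        r = pull(x)
        A += np.outer(x, x)                          # update regression statistics
        b += r * x
    return np.linalg.inv(A) @ b                      # final parameter estimate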
A Model Selection Approach for Corruption Robust Reinforcement Learning
Chen-Yu Wei
33rd International Conference on Algorithmic Learning Theory (ALT 2022) (2022)
We develop a model selection approach to tackle reinforcement learning with adversarial corruption in both transition and reward. For finite-horizon tabular MDPs, without prior knowledge of the total amount of corruption, our algorithm achieves a regret bound of $O(\min\{1/\Delta, \sqrt{T}\} + C)$, where $T$ is the number of episodes, $C$ is the total amount of corruption, and $\Delta$ is the reward gap between the best and the second-best policies. This is the first worst-case optimal bound achieved without knowledge of $C$, improving previous results of Lykouris et al. (2021), Chen et al. (2021), and Wu et al. (2021). For finite-horizon linear MDPs, we develop a computationally efficient algorithm with a regret bound of $\tilde O(\sqrt{(1+C)T})$, and another computationally inefficient one with $O(\sqrt{T}+C)$, improving the result of Lykouris et al. (2021) and answering an open question by Zhang et al. (2021b). Finally, our model selection framework can be easily applied to other settings including linear bandits, linear contextual bandits, and MDPs with general function approximation, leading to several improved or new results.
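The model-selection recipe can be sketched generically: instantiate base learners for geometrically spaced guesses of the corruption level $C$ and let a master eliminate any learner whose realized reward falls short of the regret guarantee it claims. The sketch below is schematic, with stylized placeholder interfaces make_base and run_episode and a stylized claimed-regret test, not the exact algorithm or bound from the paper.

import numpy as np

# Schematic of a model-selection / regret-balancing master over base learners that
# each assume a different corruption level (illustration only, not the paper's method).

def corruption_model_selection(make_base, run_episode, T):
    guesses = [2 ** i for i in range(int(np.ceil(np.log2(T))) + 1)]   # candidate C values
    bases = [make_base(C) for C in guesses]                            # hypothetical base learners
    active = list(range(len(bases)))
    reward_sum = np.zeros(len(bases))
    plays = np.zeros(len(bases))
    for _ in range(T):
        i = min(active, key=lambda j: plays[j])        # balance episodes across survivors
        reward_sum[i] += run_episode(bases[i])         # one episode driven by base learner i
        plays[i] += 1
        # Eliminate learners that violate their own claimed regret bound (stylized test).
        best_rate = max(reward_sum[j] / max(plays[j], 1) for j in active)
        for j in list(active):
            claimed_regret = np.sqrt(plays[j]) + guesses[j]
            if len(active) > 1 and reward_sum[j] < plays[j] * best_rate - claimed_regret:
                active.remove(j)
    return active                                      # indices of surviving base learners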
Myopic exploration policies such as epsilon-greedy, softmax, or Gaussian noise fail to explore efficiently in some reinforcement learning tasks, and yet they perform well in many others. In fact, in practice they are often selected as the top choice due to their simplicity. But for what tasks do such policies succeed? Can we give theoretical guarantees for their favorable performance? These crucial questions have been scarcely investigated, despite the prominent practical importance of these policies. This paper presents a theoretical analysis of such policies and provides the first regret and sample-complexity bounds for reinforcement learning with myopic exploration. Our results apply to value-function-based algorithms in episodic MDPs with bounded Bellman-Eluder dimension. We define an exploration-gap quantity, $\alpha$, that captures a structural property of the MDP, the exploration policy, and the given value function class. We show that the sample complexity of myopic exploration scales quadratically with the inverse of this quantity, $1/\alpha^2$. We further demonstrate through concrete examples that the exploration gap is indeed favorable in several tasks where myopic exploration succeeds, due to the corresponding dynamics and reward structure.
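For concreteness, one representative instance of the myopic exploration policies covered by this kind of analysis is epsilon-greedy on top of a value-based learner; the tabular episodic Q-learning loop below (with a hypothetical env interface) is one such instantiation, with no exploration bonus or strategic information seeking.

import numpy as np

# A representative instance of myopic exploration: epsilon-greedy over a value-based
# learner (tabular episodic Q-learning shown for concreteness; env is hypothetical).
# Exploration comes only from random perturbation of the greedy action.

def epsilon_greedy_q_learning(env, n_states, n_actions, horizon,
                              episodes=1000, eps=0.1, lr=0.1, gamma=1.0):
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(horizon):
            if rng.random() < eps:
                a = int(rng.integers(n_actions))     # myopic random exploration
            else:
                a = int(np.argmax(Q[s]))             # greedy w.r.t. current values
            s_next, r, done = env.step(a)
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += lr * (target - Q[s, a])
            s = s_next
            if done:
                break
    return Q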
There have been many recent advances on provably efficient reinforcement learning in problems with rich observation spaces and general function classes. Unfortunately, common to all such approaches is a realizability assumption, which requires the function class to contain the optimal value function of the true MDP model and which rarely holds in real-world settings. In this work, we consider the more realistic setting of agnostic reinforcement learning with a policy class (that may not contain any near-optimal policy). We provide an algorithm for this setting and prove instance-dependent regret bounds when the MDP has small rank $d$. Our bounds scale exponentially with the rank $d$ in the worst case but, importantly, are polynomial in the horizon, the number of actions, and the logarithm of the number of policies. We further show through a nearly matching lower bound that this dependency on the horizon is unavoidable.
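For intuition about the agnostic setting, the sketch below shows the naive baseline such results improve upon: Monte-Carlo evaluation plus elimination over a finite policy class, whose sample cost scales linearly with the number of policies rather than with its logarithm. The rollout_return interface is hypothetical and this is not the paper's rank-dependent algorithm.

import numpy as np

# Naive agnostic baseline over a finite policy class (illustration only; the paper's
# algorithm exploits low rank to avoid the linear dependence on the class size).

def policy_elimination(rollout_return, policies, rollouts_per_policy, delta=0.05):
    """rollout_return(pi) -> one sampled episode return in [0, 1]."""
    K = len(policies)
    means = np.zeros(K)
    for i, pi in enumerate(policies):
        means[i] = np.mean([rollout_return(pi) for _ in range(rollouts_per_policy)])
    rad = np.sqrt(np.log(2 * K / delta) / (2 * rollouts_per_policy))   # Hoeffding radius
    best_lcb = means.max() - rad
    # Keep every policy whose upper confidence bound still reaches the best lower bound.
    return [pi for pi, m in zip(policies, means) if m + rad >= best_lcb]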
We study reinforcement learning in tabular MDPs where the agent receives additional side observations per step in the form of several transition samples -- e.g., from data augmentation. We formalize this setting using a feedback graph over state-action pairs and show that model-based algorithms can leverage side observations for more sample-efficient learning. We give a regret bound that predominantly depends on the size of the maximum acyclic subgraph of the feedback graph, in contrast with a polynomial dependency on the number of states and actions in the absence of side observations. Finally, we highlight challenges when leveraging a small dominating set of the feedback graph, as compared to the well-studied bandit setting, and propose a new algorithm that can use such a dominating set to learn a near-optimal policy faster.
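The feedback-graph mechanism can be sketched as a bookkeeping rule: whenever one state-action pair is executed, every neighboring pair in the graph also emits a transition sample, and all of those samples feed the empirical model. The Python sketch below uses hypothetical interfaces (feedback_graph, augment) and only illustrates the counting, not the paper's algorithm or regret analysis.

import numpy as np

# Sketch of model-based counting with side observations (hypothetical interfaces,
# illustration only). feedback_graph[(s, a)] lists the state-action pairs that also
# emit a transition sample whenever (s, a) is executed, e.g. via data augmentation.

n_states, n_actions = 20, 4
counts = np.zeros((n_states, n_actions, n_states))
reward_sums = np.zeros((n_states, n_actions))

def update_model(s, a, r, s_next, feedback_graph, augment):
    """augment(s, a, r, s_next, s2, a2) -> a synthetic (r2, s2_next) sample for (s2, a2)."""
    counts[s, a, s_next] += 1                      # the executed pair's own sample
    reward_sums[s, a] += r
    for s2, a2 in feedback_graph.get((s, a), []):  # side observations from the graph
        r2, s2_next = augment(s, a, r, s_next, s2, a2)
        counts[s2, a2, s2_next] += 1
        reward_sums[s2, a2] += r2

def empirical_mdp():
    n = counts.sum(axis=2, keepdims=True)
    P_hat = np.where(n > 0, counts / np.maximum(n, 1), 1.0 / n_states)  # uniform where unseen
    R_hat = reward_sums / np.maximum(counts.sum(axis=2), 1)
    return P_hat, R_hat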
Policy Certificates: Towards Accountable Reinforcement Learning
Lihong Li
Wei Wei
Emma Brunskill
International Conference on Machine Learning (ICML) (2019)
The performance of a reinforcement learning algorithm can vary drastically during learning because of exploration. Existing algorithms provide little information about the quality of their current policy before executing it, and thus have limited use in high-stakes applications such as healthcare. We address this lack of accountability by proposing that algorithms output policy certificates. These certificates bound the sub-optimality and return of the policy in the next episode, allowing humans to intervene when the certified quality is not satisfactory. We further introduce two new algorithms with certificates and present a new framework for theoretical analysis that guarantees the quality of their policies and certificates. For tabular MDPs, we show that computing certificates can even improve the sample-efficiency of optimism-based exploration. As a result, one of our algorithms achieves regret and PAC bounds that are minimax-optimal up to lower-order terms.
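The certificate interface itself is simple to sketch: before each episode the learner reports an optimistic and a pessimistic value estimate at the initial state, which together certify a range for the upcoming return and a bound on the policy's sub-optimality. The snippet below is a schematic of that interface only, with made-up numbers, not the paper's algorithms or analysis.

import numpy as np

# Schematic of the certificate interface (illustrative). V_up is an optimistic value
# estimate that upper-bounds both the optimal return and the return of the policy that
# is greedy with respect to it; V_lo is a pessimistic lower bound on that policy's return.

def certify(V_up, V_lo, s0):
    return_range = (V_lo[s0], V_up[s0])            # certified bounds on the next-episode return
    suboptimality_bound = V_up[s0] - V_lo[s0]      # certified bound on policy sub-optimality
    return return_range, suboptimality_bound

# Toy usage: intervene if the certificate is too weak (numbers are made up).
V_up, V_lo, s0, threshold = np.array([0.9, 0.5]), np.array([0.6, 0.1]), 0, 0.5
(lo, hi), gap = certify(V_up, V_lo, s0)
if gap > threshold or lo < 0.3:
    print("certified quality insufficient; defer to a human or a safe fallback policy")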