Private List Learnability vs. Online List Learnability

Hilla Schefler
Steve Hanneke
Iska Tsubari
Shay Moran
2025

Abstract

This work explores the connection between differential privacy (DP) and online learning in the context of PAC list learning. In this setting, a $k$-list learner outputs a list of $k$ potential predictions for an instance $x$ and incurs a loss if the true label of $x$ is not included in the list.
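In symbols (writing $\mu(x)$ merely as notation for the output list, a set of at most $k$ labels), this loss can be expressed as
\[
\ell\bigl(\mu(x), y\bigr) \;=\; \mathbb{1}\bigl[\, y \notin \mu(x) \,\bigr].
\]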
A basic result in the multiclass PAC framework with a finite number of labels states that private learnability is equivalent to online learnability \citep*{AlonLMM19,BunLM20,JungKT20}.
Perhaps surprisingly, we show that this equivalence does not hold in the context of list learning.
Specifically, we prove that, unlike in the multiclass setting, a finite $k$-Littlestone dimension—a variant of the classical Littlestone dimension that characterizes online $k$-list learnability—is not a sufficient condition for DP $k$-list learnability.
However, as in the multiclass case, we prove that it remains a necessary condition.

To demonstrate where the equivalence breaks down, we provide an example showing that the class of monotone functions with $k+1$ labels over $\mathbb{N}$ is online $k$-list learnable, but not DP $k$-list learnable.
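Concretely, one natural way to write this class (taking monotone to mean, say, non-decreasing with respect to the usual orders on $\mathbb{N}$ and on the label set) is
\[
\mathcal{M}_{k+1} \;=\; \bigl\{\, f : \mathbb{N} \to \{0, 1, \dots, k\} \;\bigm|\; x \le x' \implies f(x) \le f(x') \,\bigr\}.
\]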
This leads us to introduce a new combinatorial dimension, the \emph{$k$-monotone dimension}, which serves as a generalization of the threshold dimension.
Unlike in the multiclass setting, where the Littlestone and threshold dimensions are either both finite or both infinite, for $k>1$ the $k$-Littlestone and $k$-monotone dimensions need not be finite together.
We prove that a finite $k$-monotone dimension is another necessary condition for DP $k$-list learnability, alongside a finite $k$-Littlestone dimension.
Whether the finiteness of both dimensions implies private $k$-list learnability remains an open question.