Learnability and Complexity of Quantum Samples
Abstract
Given a quantum circuit, a quantum computer can sample the output distribution exponentially
faster in the number of bits than classical computers. A similar exponential separation has yet
to be established in generative models through quantum sample learning: given samples from
an n-qubit computation, can we learn the underlying quantum distribution using models with
training parameters that scale polynomially in n under a fixed training time? We study four kinds of
generative models: Deep Boltzmann machine (DBM), Generative Adversarial Networks (GANs),
Long Short-Term Memory (LSTM) and Autoregressive GAN, on learning quantum data sets generated
by deep random circuits. We demonstrate the leading performance of the LSTM in learning quantum
samples, and thus the autoregressive structure present in the underlying quantum distribution produced
by random quantum circuits. Both numerical experiments and a theoretical proof in the case of the
DBM show that the number of learning-agent parameters required to achieve a fixed accuracy grows
exponentially with n. Finally, we establish a connection between learnability and the
complexity of generative models by benchmarking learnability against different sets of samples drawn
from probability distributions of varying degrees of complexity in their quantum and classical
representations.