Google Research

Hardness of Learning a Single Neuron with Adversarial Label Noise

AISTATS 2022

Abstract

We study the problem of distribution-free PAC learning of a single neuron under adversarial label noise with respect to the squared loss. For a range of activation functions, including ReLUs and sigmoids, we prove strong hardness-of-learning results in the Statistical Query model and under a well-studied assumption on the complexity of refuting XOR formulas. Specifically, we establish that no polynomial-time learning algorithm, even an improper one, can approximate the optimal loss value within any constant factor.
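To make the approximation claim concrete, the following is a sketch of the standard agnostic formulation such results are usually stated in; the notation ($L_D$, $\mathrm{OPT}$, $\sigma$, $\mathbf{w}$) is illustrative and not taken from the paper. For an activation $\sigma$ (e.g., the ReLU) and a distribution $D$ over labeled examples $(\mathbf{x}, y)$, the loss of a weight vector $\mathbf{w}$ and the optimal loss are

$$ L_D(\mathbf{w}) \;=\; \mathbb{E}_{(\mathbf{x},y)\sim D}\!\left[\big(\sigma(\langle \mathbf{w}, \mathbf{x}\rangle) - y\big)^2\right], \qquad \mathrm{OPT} \;=\; \min_{\mathbf{w}} L_D(\mathbf{w}). $$

Under this formulation, the hardness statement reads: for every constant $C \ge 1$, no polynomial-time algorithm, even an improper one that may output an arbitrary hypothesis $h$ rather than a single neuron, is guaranteed to return $h$ with $\mathbb{E}_{(\mathbf{x},y)\sim D}\big[(h(\mathbf{x}) - y)^2\big] \le C \cdot \mathrm{OPT}$.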
