- Daniel Kane
- Ilias Diakonikolas
- Lisheng Ren
- Pasin Manurangsi
AISTATS 2022
We study the problem of distribution-free PAC learning of a single neuron under adversarial label noise with respect to the squared loss. For a range of activation functions, including ReLUs and sigmoids, we prove strong computational hardness-of-learning results in the Statistical Query model and under a well-studied assumption on the complexity of refuting XOR formulas. Specifically, we establish that no polynomial-time learning algorithm, even an improper one, can approximate the optimal loss value within any constant factor.
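For concreteness, in the standard formulation of this problem (the notation below is illustrative and need not match the paper's), the "optimal loss value" is the best achievable squared loss over all weight vectors:

$$\mathrm{OPT} \;=\; \min_{\mathbf{w} \in \mathbb{R}^d} \; \mathbb{E}_{(\mathbf{x}, y) \sim D}\Big[\big(\sigma(\mathbf{w} \cdot \mathbf{x}) - y\big)^2\Big],$$

where $\sigma$ is the activation (e.g., the ReLU $\sigma(t) = \max\{0, t\}$) and $D$ is an arbitrary distribution over labeled examples. The hardness statement then says that no polynomial-time learner, proper or improper, can be guaranteed to output a hypothesis with squared loss at most $C \cdot \mathrm{OPT}$ for any constant $C$.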