
The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

Advances in Neural Information Processing Systems (NeurIPS) (2020) (to appear)

Abstract

We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on L_p perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the case of L_∞ perturbations is provably computationally harder than the case of L_p perturbations with 2 ≤ p < ∞.
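To make the setup concrete, here is a minimal sketch (not from the paper) of the quantity such a learner aims to minimize: the empirical adversarially robust 0-1 loss of a halfspace under L_p perturbations of radius eps, computed via the standard dual-norm margin characterization. The function name and interface below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def robust_01_loss(w, theta, X, y, eps, p):
    """Empirical adversarially robust 0-1 loss of the halfspace
    x -> sign(<w, x> - theta) under L_p perturbations of radius eps.

    Uses the standard dual-norm fact: an L_p-bounded adversary can
    shift <w, x> by at most eps * ||w||_q, where q is the dual
    exponent (1/p + 1/q = 1). Names and interface are illustrative.
    """
    # Dual exponent q of p
    q = np.inf if p == 1 else (1.0 if np.isinf(p) else p / (p - 1.0))
    dual_norm = np.max(np.abs(w)) if np.isinf(q) else np.sum(np.abs(w) ** q) ** (1.0 / q)
    margins = y * (X @ w - theta)  # signed margin of each example
    # An example is robustly correct only if its margin exceeds the
    # maximum shift the adversary can induce.
    return np.mean(margins <= eps * dual_norm)

# Example: robust loss of a random halfspace on random data under L_inf perturbations
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = np.sign(rng.normal(size=100))
w = rng.normal(size=10)
print(robust_01_loss(w, 0.0, X, y, eps=0.1, p=np.inf))
```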
