- Badih Ghazi
- Pasin Manurangsi
- Ravi Kumar
- Thao Nguyen
International Conference on Artificial Intelligence and Statistics (AISTATS) (2021), pp. 1603-1611
In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.
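As a rough illustration only (not the paper's algorithm), the sketch below trains a halfspace with an L2-robust hinge loss and then privatizes the learned weights with Gaussian output perturbation. It relies on the standard identity that, for a linear classifier, the worst-case hinge loss under an L2 perturbation of radius gamma is max(0, 1 - y·⟨w, x⟩ + gamma·||w||_2). The clipping bound and noise scale are illustrative placeholders, not calibrated privacy parameters.

```python
# Illustrative sketch: robust hinge-loss training of a halfspace plus
# Gaussian output perturbation. NOT the algorithm from the paper; the
# clipping and noise scale below are placeholders, not calibrated DP noise.
import numpy as np

def robust_hinge_grad(w, X, y, gamma):
    """Subgradient of the L2-robust hinge loss:
    max(0, 1 - y * <w, x> + gamma * ||w||_2), averaged over the data."""
    margins = 1.0 - y * (X @ w) + gamma * np.linalg.norm(w)
    active = margins > 0  # examples whose robust margin is violated
    grad = np.zeros_like(w)
    if active.any():
        # d/dw of (-y <w, x>) summed over active examples
        grad -= (y[active, None] * X[active]).sum(axis=0)
        # d/dw of gamma * ||w||_2, once per active example
        norm_w = np.linalg.norm(w)
        if norm_w > 0:
            grad += active.sum() * gamma * w / norm_w
    return grad / len(y)

def train_robust_private_halfspace(X, y, gamma=0.3, lr=0.5, epochs=200,
                                   clip=1.0, noise_std=0.2, seed=0):
    """Gradient descent on the robust hinge loss, then output perturbation.
    `clip` bounds ||w||_2 (a stand-in sensitivity bound); `noise_std` is an
    illustrative Gaussian-mechanism scale."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * robust_hinge_grad(w, X, y, gamma)
        norm = np.linalg.norm(w)
        if norm > clip:  # project onto an L2 ball to bound sensitivity
            w *= clip / norm
    return w + rng.normal(scale=noise_std, size=w.shape)  # privatized weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2))
    y = np.sign(X @ np.array([1.0, -1.0]))  # separable synthetic labels
    w_priv = train_robust_private_halfspace(X, y)
    acc = (np.sign(X @ w_priv) == y).mean()
    print(f"accuracy of privatized robust halfspace: {acc:.3f}")
```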