Adversarial Examples as an Input-Fault Tolerance Problem

Angus Galloway
Anna Golubeva
Graham William Taylor
NeurIPS Workshop on Security in Machine Learning (2018)

Abstract

We analyze the adversarial examples problem in terms of a model’s fault tolerance with respect to its input. Whereas previous work focuses on arbitrarily strict threat models, i.e., ε-perturbations, we consider arbitrary valid inputs and propose an information-based characteristic for evaluating tolerance to diverse input faults.
