Improving the Robustness of Deep Neural Networks via Stability Training
Abstract
In this paper we address the issue of output instability
of deep neural networks: small perturbations in the visual
input can significantly distort the feature embeddings and
output of a neural network. Such instability affects many
deep architectures with state-of-the-art performance on a
wide range of computer vision tasks. We present a general
stability training method to stabilize deep networks against
small input distortions that result from various types of common
image processing, such as compression, rescaling, and
cropping. We validate our method by stabilizing the state-of-the-art
Inception architecture [11] against these types of
distortions. In addition, we demonstrate that our stabilized
model gives robust state-of-the-art performance on large-scale
near-duplicate detection, similar-image ranking, and
classification on noisy datasets.
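
To make the idea of stability training concrete, the following is a minimal sketch (not the paper's exact formulation; the helper name stability_loss and the parameters alpha, sigma, and task_loss_fn are our own illustrative choices). It combines the original task loss on a clean input with a penalty on the divergence between the network's outputs on the clean input and on a slightly perturbed copy:

```python
import torch
import torch.nn.functional as F

def stability_loss(model, x, y, task_loss_fn, alpha=0.01, sigma=0.04):
    """Sketch of a combined task + stability objective (hypothetical helper).

    The stability term encourages the model to produce similar outputs for a
    clean input x and a small perturbation of it, here modeled as pixel-wise
    Gaussian noise of standard deviation sigma.
    """
    # Task loss on the clean input (e.g. cross-entropy for classification).
    logits_clean = model(x)
    l_task = task_loss_fn(logits_clean, y)

    # Perturbed copy of the input used only for the stability term.
    x_perturbed = x + sigma * torch.randn_like(x)
    logits_perturbed = model(x_perturbed)

    # Stability term: KL divergence between the class distributions on the
    # clean and perturbed inputs (an L2 distance could be used for feature
    # embeddings instead).
    l_stability = F.kl_div(
        F.log_softmax(logits_perturbed, dim=-1),
        F.softmax(logits_clean, dim=-1),
        reduction="batchmean",
    )
    return l_task + alpha * l_stability
```

The weight alpha trades off task accuracy against output stability; in this sketch it is a fixed hyperparameter chosen by validation.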