Edges to Shapes to Concepts: Adversarial Augmentation for Robust Vision

Aditay Tripathi
Rishubh Singh
Anirban Chakraborty
Computer Vision and Pattern Recognition (CVPR) 2023 (to appear)
Recent work has shown that deep vision models tend to be overly dependent on low-level or “texture” features, leading to poor generalization. Various data augmentation strategies have been proposed to overcome this so-called texture bias in DNNs. We propose a simple, lightweight adversarial augmentation technique that explicitly incentivizes the network to learn holistic shapes for accurate prediction in an object classification setting. Our augmentations superpose automatically detected edge maps from one image onto another image with shuffled patches, using a randomly determined mixing proportion, and assign the augmented image the label of the edge-map image. To classify these augmented images, the model must not only detect and focus on edges but also distinguish between relevant and spurious edges. We show that our augmentations significantly improve classification accuracy and robustness measures on a range of datasets and neural architectures; for example, ViT-Large accuracy on ImageNet classification increases by up to 6%, with comparable gains on related metrics. Analysis using a range of probe datasets shows substantially increased shape sensitivity in our trained models, explaining the observed improvements in both classification accuracy and downstream tasks such as segmentation.
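The augmentation pipeline described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code: the gradient-magnitude edge detector, the patch size, the edge threshold, and the mixing range are all placeholder choices standing in for details not specified in the abstract.

```python
import numpy as np

def edge_map(img, threshold=0.1):
    # Stand-in edge detector: thresholded gradient magnitude.
    # (The paper uses an automatically detected edge map; the exact
    # detector is an assumption here.)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]
    gy[1:, :] = img[1:, :] - img[:-1, :]
    return (np.sqrt(gx**2 + gy**2) > threshold).astype(img.dtype)

def shuffle_patches(img, patch=8, rng=None):
    # Split the image into patch x patch tiles and permute them,
    # destroying global shape while keeping local texture.
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    tiles = [img[i:i + patch, j:j + patch]
             for i in range(0, h, patch)
             for j in range(0, w, patch)]
    order = rng.permutation(len(tiles))
    out = np.empty_like(img)
    k = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = tiles[order[k]]
            k += 1
    return out

def augment(edge_src, bg_src, edge_label, rng=None):
    # Superpose the edge map of one image onto a patch-shuffled
    # version of another, with a randomly drawn mixing proportion.
    # The augmented sample keeps the label of the edge-map image.
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.uniform(0.3, 0.7)  # illustrative mixing range
    mixed = (lam * edge_map(edge_src)
             + (1.0 - lam) * shuffle_patches(bg_src, rng=rng))
    return mixed, edge_label
```

Because the target is the edge image's label while the shuffled background contributes only texture and spurious edges, a classifier can only solve the task by attending to the coherent shape traced by the edge map.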