DermGAN: Synthetic Generation of Clinical Skin Images with Pathology

Amirata Ghorbani
Yuan Liu


Despite the recent success in applying supervised deep learning to medical imaging tasks, the problem of obtaining large, diverse, and expert-annotated datasets required for developing high-performing models remains particularly challenging. In this work, we explore the possibility of using generative adversarial networks (GANs) to synthesize natural images with skin pathology. We propose DermGAN, an adaptation of the popular Pix2Pix architecture, to create synthetic images of a pre-specified skin condition while varying its size, location, and the underlying skin color. In a Turing test with human raters, we show that the synthetic images are not only visually similar to real images, but also embody the respective skin condition in dermatologists' eyes. Furthermore, when using synthetic images as a data augmentation technique for training a skin condition classifier, the resulting model is non-inferior to the baseline while demonstrating improved performance on rare conditions.
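To make the conditioning idea concrete, here is a minimal, hypothetical sketch of how a semantic input map for a Pix2Pix-style generator could encode the three controllable factors mentioned above: the target skin color as a background fill, and the condition's size and location as a marked rectangle. The function name, the channel-based encoding, and all parameter choices are illustrative assumptions, not the paper's exact input format.

```python
# Hypothetical conditioning map for a Pix2Pix-style generator:
# background pixels carry the desired skin color, and a rectangle's
# red channel is overwritten with a value encoding the condition.
# This encoding is an assumption for illustration only.

def make_condition_map(height, width, skin_rgb, box, condition_value):
    """Build an RGB-like nested-list map of shape (height, width, 3).

    skin_rgb: (r, g, b) background skin color
    box: (top, left, box_height, box_width) of the pathology region
    condition_value: scalar written into the red channel inside the box
    """
    top, left, bh, bw = box
    cmap = [[list(skin_rgb) for _ in range(width)] for _ in range(height)]
    for y in range(top, min(top + bh, height)):
        for x in range(left, min(left + bw, width)):
            cmap[y][x][0] = condition_value  # mark pathology region
    return cmap

# Example: 64x64 map, one skin tone, a 16x16 lesion placed at (10, 20)
cm = make_condition_map(64, 64, (224, 172, 140), (10, 20, 16, 16), 255)
```

In a Pix2Pix-style setup, such a map would be fed to the generator in place of (or alongside) the usual input image, so that sampling different boxes and skin colors yields synthetic images with the requested pathology at the requested size and location.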