Learned Perceptual Image Enhancement

ICCP (IEEE International Conference on Computational Photography), 2018


Training a typical image enhancement pipeline involves minimizing a loss function between enhanced and reference images. While L1 and L2 losses are perhaps the most widely used functions for this purpose, they do not necessarily lead to perceptually compelling results. In this paper, we show that adding a learned no-reference image quality metric to the loss can significantly improve enhancement operators. This metric is a CNN (convolutional neural network) trained on a large-scale dataset labelled with the aesthetic preferences of human raters. The resulting loss allows us to conveniently perform back-propagation in our learning framework, simultaneously optimizing for similarity to a given ground-truth reference and for perceptual quality. The perceptual loss is used only to train the parameters of the image processing operators, and imposes no extra complexity at inference time. Our experiments demonstrate that this loss is effective for tuning a variety of operators, such as local tone mapping and dehazing.
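The training objective described above can be sketched as a weighted sum of a pixel-wise fidelity term and a (negated) learned quality score. This is a minimal illustration, not the paper's implementation: `toy_quality` is a hypothetical stand-in for the frozen aesthetic-rating CNN, and the weight value is an assumption.

```python
import numpy as np

def combined_loss(enhanced, reference, quality_model, weight=0.1):
    """Sketch of the combined training loss: an L1 fidelity term toward
    the reference, minus a weighted no-reference quality score (higher
    predicted quality lowers the loss). `quality_model` stands in for
    the learned CNN quality metric; `weight` is a hypothetical setting."""
    l1 = np.abs(enhanced - reference).mean()
    quality = quality_model(enhanced)  # scalar, higher = better
    return l1 - weight * quality

# Toy stand-in for the learned quality CNN: just rewards image contrast.
def toy_quality(img):
    return float(img.std())

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(8, 8))          # toy reference image
enhanced = ref + 0.05 * rng.standard_normal((8, 8))  # toy operator output
loss = combined_loss(enhanced, ref, toy_quality)
```

In a real training loop the gradient of this loss would be back-propagated through the enhancement operator's parameters while the quality network stays fixed, matching the paper's claim that the perceptual term adds no cost at inference time.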
