HDR photo editing with machine learning

October 16, 2024

Chloe LeGendre and Francois Bleibel, Software Engineers, Google Research

Our new machine learning model unlocks high dynamic range (HDR) image editing in Google Photos, promoting standard dynamic range image pixels to HDR image pixels.

High dynamic range (HDR) photography techniques can accurately capture a scene’s full range of brightness values — from those of its darkest shadows to its brightest light sources. However, when the time comes to view the resulting HDR photos, many displays are only capable of showing a limited range of brightness levels. This discrepancy means that HDR photos are often downgraded to fit within the physical limitations of these standard dynamic range (SDR) displays and the corresponding file formats that store a limited range of values.

However, an increasing share of mobile device and computer displays are HDR-capable, showing a greater range of brightness levels with brighter whites and darker blacks. Although several common HDR file formats capable of encoding this greater range exist, HDR-only formats can appear different depending on a display's capabilities. To optimally show images on any display — no matter the dynamic range — Android 14 introduced the new Ultra HDR image format in October 2023. Google Pixel 7 and newer devices adopted the format, capturing and saving HDR photographs as Ultra HDR and affording a more true-to-life rendition as compared to SDR formats, with greater contrast and brighter highlights.

Although the format unlocked HDR image display, its new metadata also increased the complexity of image editing operations. Until now, if you applied a more complex edit like Magic Eraser to an Ultra HDR photograph using Google Photos, your newly edited image would be saved in SDR, losing its HDR rendition and the associated brightness and contrast.

Today, we introduce a new machine learning (ML) technique that enables complex image editing for HDR photographs, including those saved as Ultra HDR. On Pixel 8 and newer, our model runs behind the scenes in Google Photos to ensure that even if you apply complex edits to HDR images, your new edited images remain HDR. Core to our technique is predicting the HDR image metadata missing after editing, using an ML model trained on a large dataset of HDR images with complete metadata.

Dynamic range compression: From HDR to SDR

HDR burst or bracketed exposure capture methods, such as Pixel’s HDR+ burst photography system, preserve image detail across a wide range of brightness levels. To save such images for viewing on SDR displays, this range is compressed to fit the lower, 8-bit range of SDR displays and file formats. This dynamic range compression, also called tone mapping, reduces the number of gray levels in the image and thus reduces image contrast — the brightness difference between its darkest and brightest parts.

HDREditing1-comparison

Left: HDR photography captures dark shadows and bright highlights from real world scenes, while SDR displays and file formats compress brightness levels into a narrower range. Middle and right: a photograph before (middle) and after (right) tone mapping. The effect has been exaggerated to show the visual impact of reduced image contrast.
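
For intuition, here is a minimal Python sketch of global tone mapping, using a simple Reinhard-style operator rather than the far more sophisticated, local tone mapping of the actual HDR+ pipeline. All names and constants are illustrative: a wide range of linear brightness values gets squeezed into 8-bit code values, shrinking the gap between shadows and highlights.

```python
import numpy as np

def tone_map_reinhard(hdr_linear, exposure=1.0):
    """Compress linear HDR values into 8-bit SDR code values.

    A simple global Reinhard-style operator, used here only to illustrate
    dynamic range compression; it is not the tone mapper used by HDR+.
    """
    scaled = hdr_linear * exposure
    compressed = scaled / (1.0 + scaled)                     # map [0, inf) into [0, 1)
    encoded = np.clip(compressed, 0.0, 1.0) ** (1.0 / 2.2)   # rough gamma encoding
    return np.round(encoded * 255.0).astype(np.uint8)        # quantize to 8 bits

# A 5-stop (32x) difference in scene brightness becomes roughly a 2.6x
# difference in encoded values after tone mapping.
print(tone_map_reinhard(np.array([0.1])), tone_map_reinhard(np.array([3.2])))  # [86] [225]
```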

The Ultra HDR format: HDR and SDR rendition in one file

The new Ultra HDR image format encodes a dynamic range compressed SDR image alongside image metadata used to dynamically expand this range as needed. Since the SDR image is maintained, Ultra HDR images can be viewed as usual on legacy displays. However, the format’s metadata allows rendering the full dynamic range as authored by the camera on HDR-capable displays. It also enables a smooth transition between SDR and HDR renditions, even adapting to display capabilities dynamically as a display’s peak brightness automatically adjusts to the environment.

HDREditing2-exampleNew

Left: An HDR image with true-to-life dark shadows and bright highlights. Right: The same image saved with its dynamic range compressed for SDR display. Note: HDR effect has been simulated for SDR display. For true SDR/HDR differences, view this gallery in Google Chrome on an HDR display system.
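
As a concrete illustration of that display adaptation, the sketch below shows how a renderer might decide how much of the encoded HDR gain to apply, based on the display's current headroom over SDR white. This is an illustrative sketch only, not the Android implementation; all function and parameter names are assumptions.

```python
import math

def gain_application_weight(display_boost, min_content_boost=1.0, max_content_boost=4.0):
    """Fraction of the encoded HDR gain to apply for the current display, in [0, 1].

    0 renders the plain SDR image; 1 applies the full HDR gain.
    display_boost is the ratio of the display's current HDR peak brightness to
    its SDR white level. All names and defaults here are illustrative.
    """
    if display_boost <= min_content_boost:
        return 0.0
    if display_boost >= max_content_boost:
        return 1.0
    # Interpolate in log space so that each doubling of headroom counts equally.
    return (math.log2(display_boost) - math.log2(min_content_boost)) / (
        math.log2(max_content_boost) - math.log2(min_content_boost))

# A display currently offering 2x headroom gets half the gain for content
# authored with up to 4x boost; as headroom grows, the rendition smoothly
# approaches the full HDR look.
print(gain_application_weight(display_boost=2.0))  # 0.5
```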

The Gain Map: Encoding the HDR rendition

The Ultra HDR format includes metadata called a Gain Map¹: a log-encoded image stored alongside the SDR image that indicates how much to brighten each SDR pixel to produce the target HDR rendition. For example, a Gain Map pixel value of “0” may represent no difference between SDR and HDR renditions for a given pixel, while “1” may represent the maximum allowable brightness difference. The Gain Map allows for spatially-varying, local differences between the image renditions, so different regions can have varying amounts of added brightness or contrast when viewed in HDR. To generate an Ultra HDR image, a camera pipeline therefore must save the usual SDR image, the Gain Map, and its other metadata for HDR expansion.

HDREditing3-GainMap

Left: An HDR image. Center: The image authored for SDR display. Right: The Gain Map encoding the transition between the SDR and HDR renditions. Note: HDR effect has been simulated for SDR display. For true SDR/HDR differences, view this gallery in Google Chrome on an HDR display system.
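
To make the Gain Map mechanics concrete, here is a minimal sketch of how a log-encoded Gain Map might be applied to recover an HDR rendition from the SDR image. The gain range and names are assumptions for illustration; the real format carries its actual gain range and related parameters in the image metadata.

```python
import numpy as np

def apply_gain_map(sdr_linear, gain_map, weight=1.0,
                   min_log2_gain=0.0, max_log2_gain=2.0):
    """Expand an SDR image toward its HDR rendition using a normalized Gain Map.

    sdr_linear: linear-light SDR pixels in [0, 1].
    gain_map:   per-pixel values in [0, 1]; 0 = no boost, 1 = maximum boost.
    weight:     how much of the gain to apply (0 = SDR, 1 = full HDR), e.g.,
                the display-adaptation weight from the earlier sketch.
    The log2 gain range here is an assumed stand-in for the format's metadata.
    """
    log2_gain = min_log2_gain + gain_map * (max_log2_gain - min_log2_gain)
    return sdr_linear * np.exp2(weight * log2_gain)

sdr = np.array([0.25, 0.8])
gain = np.array([0.0, 1.0])                   # boost only the second pixel
print(apply_gain_map(sdr, gain, weight=1.0))  # [0.25 3.2]
```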

Image editing with Ultra HDR

Although the format optimizes image rendition across diverse displays, it has a drawback: the Gain Map increases the complexity of image editing operations. As a trivial example, imagine we want to crop an HDR image. Now, we must crop the SDR image and the Gain Map.

HDREditing4-GainCrop

Left: An original SDR image with its Gain Map. Right: Now the Gain Map also requires cropping.
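
A minimal sketch of this crop example, assuming for simplicity that the Gain Map has the same resolution as the SDR image (in practice it may be stored at reduced resolution, in which case the crop window would need to be scaled accordingly):

```python
def crop_ultra_hdr(sdr, gain_map, top, left, height, width):
    """Crop the SDR image and its Gain Map with the same window so the two
    renditions stay aligned after editing."""
    window = (slice(top, top + height), slice(left, left + width))
    return sdr[window], gain_map[window]
```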

Basic HDR image edits like “crop” and “rotate” have simple Gain Map editing implementations, already part of Google Photos. However, the Google Photos editor includes many more editing features that are complex and often powered by ML and computational photography, including Magic Eraser, Photo Unblur, Magic Editor, Portrait Blur, and Motion Photos. How best to edit the Gain Map when using these tools is ambiguous, because these features were developed for SDR images, with models that expect SDR images as inputs and produce SDR images as outputs.

Using Magic Eraser, for example, you can erase a distracting element of an image, applying image inpainting to generate new pixels in the erased region. For an HDR image, if the erased portion was bright, then it’s likely that the matching Gain Map area was bright as well. As with the crop example, now you’d also have to inpaint the Gain Map. Otherwise, you would see an undesirable “ghosting” effect in the HDR rendition — the original Gain Map appearing as an overlay on the edited SDR image. Thus, until now, if you edited an HDR photograph using an ML-powered tool like Magic Eraser, Google Photos would simply drop the Gain Map altogether, downgrading the image to SDR in the new edited image.

HDREditing5-GainGhost


Left to Right: An HDR image, its Gain Map, the SDR image after erasing two windows with Magic Editor, and the HDR rendition with visible “ghosting” in the window regions, using the original Gain Map with the edited SDR image. Note: HDR effect has been simulated for SDR display.

Gain Map reconstruction

Inspired by how Magic Eraser reconstructs the missing parts of SDR images, we trained a new ML model to reconstruct the missing Gain Map regions after HDR image editing. Given the original SDR image, the edited SDR image, and the original image’s Gain Map, our model predicts a new Gain Map. We then blend the prediction with the original Gain Map — using a mask that indicates where SDR image pixels have been modified during editing — to produce a seamless result with no visible boundary between edited and unedited regions.

HDREditing6-Diagram1

Left: We compute an image editing mask based on where SDR pixels have been modified during editing. Right: Our model takes the edited image, image editing mask, and the original Gain Map image as inputs, and produces a predicted Gain Map.

HDREditing7-Diagram2

We blend the predicted Gain Map with the original Gain Map to produce the edited Gain Map, which is used together with the edited SDR image to display the edited image in HDR.
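
The sketch below illustrates the mask computation and blending steps described above. The change threshold and the lack of mask feathering are simplifications for illustration, not the production implementation.

```python
import numpy as np

def editing_mask(original_sdr, edited_sdr, threshold=2.0 / 255.0):
    """Return an (H, W) mask that is 1 where SDR pixels changed during editing.

    Inputs are (H, W, 3) floats in [0, 1]. The threshold is an illustrative
    choice; a production implementation would likely also feather the mask.
    """
    diff = np.abs(edited_sdr - original_sdr).max(axis=-1)
    return (diff > threshold).astype(np.float32)

def blend_gain_maps(original_gain, predicted_gain, mask):
    """Keep the original Gain Map where the SDR image is untouched and use the
    model's prediction where it was edited, avoiding ghosting at the seams."""
    return mask * predicted_gain + (1.0 - mask) * original_gain
```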

Training the Gain Map reconstruction model

Training our model required the collection of several thousand HDR images: SDR images with Gain Maps and metadata for HDR rendering. We captured diverse photographs using the HDR+ burst photography pipeline, applying the dynamic range compression algorithms of the current Pixel camera to generate SDR and HDR image pairs. With this data and a dataset of random mask shapes, we trained a lightweight image-to-image translation model to predict the Gain Map given the edited SDR image and a masked version of the original Gain Map. It is even flexible enough to produce a Gain Map given only an SDR image as input — optimizing rendition for HDR displays.
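
As a sketch of that training setup (not the production training code; the shapes and the use of the captured SDR image as the edited-image input are assumptions), one training example might be assembled like this:

```python
import numpy as np

def make_training_example(sdr, gain_map, random_mask):
    """Assemble one (inputs, target) pair for Gain Map reconstruction.

    sdr:         (H, W, 3) SDR image, standing in for the edited SDR input.
    gain_map:    (H, W) original Gain Map, used as the reconstruction target.
    random_mask: (H, W) binary mask mimicking regions an editing tool might modify.
    """
    masked_gain = gain_map * (1.0 - random_mask)       # hide the regions to reconstruct
    inputs = np.concatenate(
        [sdr, random_mask[..., None], masked_gain[..., None]], axis=-1)  # (H, W, 5)
    target = gain_map
    return inputs, target
```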

Our Gain Map reconstruction model is under 1 MB and runs at interactive frame rates on mobile devices. It maintains HDR rendition during ML-powered image editing in Google Photos, on Pixel 8 and newer devices. Crucially, no ML-powered editing features require custom Gain Map editing workflows, as our model can work for any effect — even those not yet built. Here are some examples of our Gain Map reconstruction in action:

HDREditing8-Hero
HDREditing9-Rocks
HDREditing10-Vista
HDREditing11-Horse

Left: Original images with their Gain Maps. Right: Edited images, using a variety of machine learning powered editing tools in Google Photos, each with Gain Maps reconstructed by our new method. For HDR images, view this gallery in Google Chrome on an HDR display system.

We see our model as a first step towards enabling HDR image rendering to shine as often as possible.

Acknowledgements

This project is the result of a collaboration between Google Research, Google Photos, Pixel, and Android teams. Key collaborators include: Dhriti Wasan, Ishani Parekh, Nick Chusid, Nick Deakin, Andy Radin, Brandon Ruffin, James Adamson, Michael Milne, Steve Perry, Fares Alhassen, Sam Hasinoff, Kiran Murthy, Karl Rasche, Ryan Geiss, Krish Desai, Navin Sarma, Benyah Shaparenko, Matt Briselli, Patrick Shehane, Michael Specht, Christopher Cameron, and Alec Mouri.


  1. Gain Map technology used under license from Adobe; Gain Map ISO standardization is in progress.