- Chloe LeGendre
- Graham Fyffe
- Jay Busch
- John Flynn
- Laurent Charbonnel
- Paul Debevec
- Wan-Chun Alex Ma
Abstract
We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field-of-view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, but with most of the background unoccluded, leveraging the fact that materials with diverse reflectance functions reveal different lighting cues in a single exposure. We train a deep neural network to regress from the unoccluded part of the LDR background image to its HDR lighting by matching the LDR ground truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs in real time on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on automatically exposed and white-balanced videos, we improve the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
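The training signal described above hinges on image-based relighting being a linear, differentiable operation: a sphere rendered under an environment map is a weighted sum of precomputed basis images, one per lighting direction. The sketch below illustrates this loss under assumed shapes, names, and an L1/clipping choice that are not taken from the paper; it is a minimal illustration, not the authors' implementation.

```python
# Hypothetical sketch of a differentiable image-based relighting loss,
# assuming a per-direction HDR environment prediction and precomputed
# basis images for one reflective sphere. All names and shapes are
# illustrative assumptions.
import jax.numpy as jnp

def render_sphere_ibrl(pred_hdr_env, basis_images):
    """Render a sphere under the predicted HDR environment.

    pred_hdr_env:  (num_lights, 3) predicted HDR RGB intensity per direction
    basis_images:  (num_lights, H, W, 3) sphere lit by each basis direction
    Returns an (H, W, 3) HDR rendering as a weighted sum of basis images.
    """
    return jnp.einsum('lc,lhwc->hwc', pred_hdr_env, basis_images)

def relighting_loss(pred_hdr_env, basis_images, ldr_gt_sphere):
    """Compare a clipped LDR rendering to the LDR ground-truth sphere crop."""
    rendered = render_sphere_ibrl(pred_hdr_env, basis_images)
    # Clip to the LDR range so supervision matches the single-exposure capture.
    rendered_ldr = jnp.clip(rendered, 0.0, 1.0)
    return jnp.mean(jnp.abs(rendered_ldr - ldr_gt_sphere))
```

Because the rendering is a sum of products of the predicted lighting with fixed basis images, gradients flow from the LDR sphere loss back to the HDR lighting prediction, which is what allows training without direct HDR ground truth.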