Chloe LeGendre
Chloe LeGendre is a senior software engineer at Google Research, currently working on computational photography, with interests in machine learning applied to problems in computer graphics, photography, imaging, and computer vision. Her research focuses on scene lighting measurement and estimation, color science, and portrait photography manipulation. She received her PhD in Computer Science from the University of Southern California in 2019, where she was a member of the Vision and Graphics Lab at USC's Institute for Creative Technologies.
Authored Publications
Total Relighting: Learning to Relight Portraits for Background Replacement
Christian Haene
Sofien Bouaziz
Christoph Rhemann
Paul Debevec
SIGGRAPH and TOG (2021)
We propose a novel system for portrait relighting and background replacement, which maintains high-frequency boundary details and accurately synthesizes the subject’s appearance as lit by novel illumination, thereby producing realistic composite images for any desired scene. Our technique includes foreground estimation via alpha matting, relighting, and compositing. We demonstrate that each of these stages can be tackled in a sequential pipeline without the use of priors (e.g. known background or known illumination) and with no specialized acquisition techniques, using only a single RGB portrait image and a novel, target HDR lighting environment as inputs. We train our model using relit portraits of subjects captured in a light stage computational illumination system, which records multiple lighting conditions, high quality geometry, and accurate alpha mattes. To perform realistic relighting for compositing, we introduce a novel per-pixel lighting representation in a deep learning framework, which explicitly models the diffuse and the specular components of appearance, producing relit portraits with convincingly rendered non-Lambertian effects like specular highlights. Multiple experiments and comparisons show the effectiveness of the proposed approach when applied to in-the-wild images.
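To make the per-pixel lighting representation concrete, below is a minimal, illustrative sketch in numpy (assumed array shapes and function names, not the paper's actual implementation) of how an HDR environment map can be prefiltered into per-pixel diffuse and specular light maps given estimated surface normals, with a simple Phong-style lobe and an assumed frontal view direction standing in for the specular term.

```python
import numpy as np

def direction_grid(height, width):
    """Unit light directions and per-pixel solid angles for an equirectangular map."""
    theta = (np.arange(height) + 0.5) / height * np.pi        # polar angle from +z
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi      # azimuth
    phi, theta = np.meshgrid(phi, theta)                      # (H, W)
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)                 # (H, W, 3)
    solid_angle = np.sin(theta) * (np.pi / height) * (2.0 * np.pi / width)
    return dirs.reshape(-1, 3), solid_angle.reshape(-1)

def light_maps(normals, env_map, shininess=16.0):
    """Per-pixel diffuse and specular light maps (illustrative only).

    normals: (N, 3) unit surface normals for N image pixels.
    env_map: (H, W, 3) HDR environment map in linear radiance.
    Returns two (N, 3) arrays of prefiltered lighting.
    """
    dirs, omega = direction_grid(*env_map.shape[:2])
    radiance = env_map.reshape(-1, 3) * omega[:, None]        # weight radiance by solid angle
    # Diffuse: clamped-cosine (Lambertian) lobe around the surface normal.
    diffuse = np.clip(normals @ dirs.T, 0.0, None) @ radiance
    # Specular: Phong-style lobe around the mirror reflection of an assumed
    # frontal view direction (a simplification, not the paper's exact model).
    view = np.array([0.0, 0.0, 1.0])
    refl = 2.0 * (normals @ view)[:, None] * normals - view
    specular = (np.clip(refl @ dirs.T, 0.0, None) ** shininess) @ radiance
    return diffuse, specular
```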
Learning Illumination from Diverse Portraits
Wan-Chun Alex Ma
Christoph Rhemann
Jason Dourgarian
Paul Debevec
SIGGRAPH Asia 2020 Technical Communications (2020)
We present a learning-based technique for estimating high dynamic range (HDR), omnidirectional illumination from a single low dynamic range (LDR) portrait image captured under arbitrary indoor or outdoor lighting conditions. We train our model using portrait photos paired with their ground truth illumination. We generate a rich set of such photos by using a light stage to record the reflectance field and alpha matte of 70 diverse subjects in various expressions. We then relight the subjects using image-based relighting with a database of one million HDR lighting environments, compositing them onto paired high-resolution background imagery recorded during the lighting acquisition. We train the lighting estimation model using rendering-based loss functions and add a multi-scale adversarial loss to estimate plausible high frequency lighting detail. We show that our technique outperforms the state-of-the-art technique for portrait-based lighting estimation, and we also show that our method reliably handles the inherent ambiguity between overall lighting strength and surface albedo, recovering a similar scale of illumination for subjects with diverse skin tones. Our method allows virtual objects and digital characters to be added to a portrait photograph with consistent illumination. As our inference runs in real-time on a smartphone, we enable realistic rendering and compositing of virtual objects into live video for augmented reality.
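As a rough illustration of the data-generation step described above, the sketch below (numpy, with assumed shapes and names, not the authors' code) relights a one-light-at-a-time (OLAT) reflectance field with an HDR environment via image-based relighting and composites the result over a background using the alpha matte.

```python
import numpy as np

def relight_and_composite(olat_images, alpha, hdr_env, background):
    """Generate one relit, composited training image (illustrative only).

    olat_images: (L, H, W, 3) reflectance field, one image per light direction.
    alpha:       (H, W, 1) alpha matte of the subject.
    hdr_env:     (L, 3) HDR environment radiance sampled at the L light directions.
    background:  (H, W, 3) background imagery paired with the lighting.
    """
    # Image-based relighting: a linear combination of the OLAT basis images,
    # weighted by the environment radiance for each light direction.
    relit = np.einsum('lc,lhwc->hwc', hdr_env, olat_images)
    # Composite the relit subject over the background with the alpha matte.
    return alpha * relit + (1.0 - alpha) * background
```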
Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering
Abhi Meka
Christian Haene
Peter Barnum
Philip Davidson
Daniel Erickson
Jonathan Taylor
Sofien Bouaziz
Wan-Chun Alex Ma
Ryan Overbeck
Thabo Beeler
Paul Debevec
Shahram Izadi
Christian Theobalt
Christoph Rhemann
SIGGRAPH Asia and TOG (2020)
The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free viewpoint relightable videos of dynamic human performances closer to photorealistic quality. However, despite significant efforts, these sophisticated systems are limited by reconstruction and rendering algorithms which do not fully model complex 3D structures and higher order light transport effects such as global illumination and sub-surface scattering. In this paper, we propose a system that combines traditional geometric pipelines with a neural rendering scheme to generate photorealistic renderings of dynamic performances under desired viewpoint and lighting. Our system leverages deep neural networks that model the classical rendering process to learn implicit features that represent the view-dependent appearance of the subject independent of the geometry layout, allowing for generalization to unseen subject poses and even novel subject identity. Detailed experiments and comparisons demonstrate the efficacy and versatility of our method to generate high-quality results, significantly outperforming the existing state-of-the-art solutions.
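The following sketch (numpy, assumed names and shapes; an illustrative stand-in rather than the paper's pipeline) shows one common ingredient of such texture-space neural rendering: bilinearly sampling a learned feature texture at UV coordinates rasterized from the geometry, with the resulting per-pixel features intended as input to a neural image-synthesis network.

```python
import numpy as np

def sample_feature_texture(feature_texture, uv):
    """Bilinearly sample a learned feature texture at rasterized UVs.

    feature_texture: (H, W, C) learned feature maps (hypothetical).
    uv:              (N, 2) texture coordinates in [0, 1] for N screen pixels.
    Returns (N, C) per-pixel features to feed a neural rendering decoder.
    """
    h, w, _ = feature_texture.shape
    x = uv[:, 0] * (w - 1)
    y = uv[:, 1] * (h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    # Interpolate along x on the top and bottom texel rows, then along y.
    top = (1 - wx)[:, None] * feature_texture[y0, x0] + wx[:, None] * feature_texture[y0, x1]
    bot = (1 - wx)[:, None] * feature_texture[y1, x0] + wx[:, None] * feature_texture[y1, x1]
    return (1 - wy)[:, None] * top + wy[:, None] * bot
```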
DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality
Graham Fyffe
John Flynn
Laurent Charbonnel
Paul Debevec
Wan-Chun Alex Ma
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019), pp. 5918-5928
We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field-of-view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, but with most of the background unoccluded, leveraging the fact that materials with diverse reflectance functions will reveal different lighting cues in a single exposure. We train a deep neural network to regress from the unoccluded part of the LDR background image to its HDR lighting by matching the LDR ground truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs in real-time on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on automatically exposed and white-balanced videos, we improve the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
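To illustrate the rendering-based supervision described above, here is a hedged numpy sketch (assumed names, shapes, and tone-mapping; in practice this would live in an autodiff framework so gradients flow back to the lighting prediction) that renders each reference sphere under the predicted HDR illumination via image-based relighting, clips and gamma-encodes the result to LDR, and compares it to the captured LDR sphere crop.

```python
import numpy as np

def ldr_sphere_loss(pred_env, sphere_bases, ldr_targets, gamma=2.2):
    """Compare spheres rendered under predicted HDR lighting to captured LDR crops.

    pred_env:     (L, 3) predicted radiance for L sampled light directions.
    sphere_bases: list of (L, H, W, 3) per-direction sphere appearance bases,
                  one per reflectance (e.g. mirror, matte silver, gray diffuse).
    ldr_targets:  list of matching (H, W, 3) ground-truth LDR sphere crops in [0, 1].
    """
    loss = 0.0
    for basis, target in zip(sphere_bases, ldr_targets):
        # Image-based relighting: weight the sphere basis by the predicted radiance.
        rendered = np.einsum('lc,lhwc->hwc', pred_env, basis)
        # Clip and gamma-encode to LDR before comparing (assumed tone mapping).
        ldr = np.clip(rendered, 0.0, 1.0) ** (1.0 / gamma)
        loss += np.abs(ldr - target).mean()
    return loss / len(sphere_bases)
```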