Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering

Abhimitra Meka
Christian Haene
Peter Barnum
Philip Davidson
Daniel Erickson
Jonathan Taylor
Sofien Bouaziz
Wan-Chun Alex Ma
Ryan Overbeck
Thabo Beeler
Paul Debevec
Shahram Izadi
Christian Theobalt
Christoph Rhemann
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2020

Abstract

The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free-viewpoint, relightable videos of dynamic human performances closer to photorealistic quality. However, despite significant efforts, these sophisticated systems are limited by reconstruction and rendering algorithms that do not fully model complex 3D structures and higher-order light transport effects such as global illumination and subsurface scattering. In this paper, we propose a system that combines traditional geometric pipelines with a neural rendering scheme to generate photorealistic renderings of dynamic performances under a desired viewpoint and lighting. Our system leverages deep neural networks that model the classical rendering process to learn implicit features representing the view-dependent appearance of the subject independently of the geometry layout, allowing for generalization to unseen subject poses and even novel subject identities. Detailed experiments and comparisons demonstrate the efficacy and versatility of our method in generating high-quality results, significantly outperforming existing state-of-the-art solutions.
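
To make the high-level pipeline concrete, the sketch below shows one plausible shape of the neural rendering stage described above: a small convolutional network that consumes screen-space buffers rasterized from the reconstructed geometry (e.g., UV coordinates plus per-pixel view and light directions) and predicts the final relit image. This is a minimal illustration under assumed inputs, written in PyTorch; the class name, channel layout, and layer sizes are hypothetical and far simpler than the networks in the paper.

```python
import torch
import torch.nn as nn

class RelightableTextureNet(nn.Module):
    """Toy neural rendering head (hypothetical, not the paper's architecture).

    Maps per-pixel geometric inputs rasterized from a tracked mesh
    (here: 2 UV channels + 3 view-direction + 3 light-direction channels)
    to an RGB image, so appearance is learned independently of the
    geometry layout.
    """

    def __init__(self, in_channels: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=1),
            nn.Sigmoid(),  # predicted RGB in [0, 1]
        )

    def forward(self, g_buffer: torch.Tensor) -> torch.Tensor:
        # g_buffer: (B, in_channels, H, W) screen-space buffers rendered
        # from the reconstructed geometry under the target view/lighting.
        return self.net(g_buffer)

# Usage: rasterize the mesh into geometry buffers for the desired viewpoint
# and lighting, then let the network synthesize the view- and light-dependent
# appearance (placeholder random inputs shown here).
net = RelightableTextureNet()
g_buffer = torch.randn(1, 8, 256, 256)
rgb = net(g_buffer)  # (1, 3, 256, 256) predicted rendering
```

Training such a head against multi-view, multi-illumination captures is what would let it absorb effects a classical shader misses, such as global illumination and subsurface scattering; the design choice of feeding only screen-space geometric buffers is what permits generalization across poses and identities.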
