Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering

  • Abhi Meka
  • Rohit Kumar Pandey
  • Christian Haene
  • Sergio Orts Escolano
  • Peter Barnum
  • Philip Davidson
  • Daniel Erickson
  • Yinda Zhang
  • Jonathan Taylor
  • Sofien Bouaziz
  • Chloe LeGendre
  • Wan-Chun Alex Ma
  • Ryan Overbeck
  • Thabo Beeler
  • Paul Debevec
  • Shahram Izadi
  • Christian Theobalt
  • Christoph Rhemann
  • Sean Fanello
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2020


The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free-viewpoint, relightable videos of dynamic human performances closer to photorealistic quality. Despite significant efforts, however, these sophisticated systems are limited by reconstruction and rendering algorithms that do not fully model complex 3D structures and higher-order light transport effects such as global illumination and sub-surface scattering. In this paper, we propose a system that combines traditional geometric pipelines with a neural rendering scheme to generate photorealistic renderings of dynamic performances under a desired viewpoint and lighting. Our system leverages deep neural networks that model the classical rendering process to learn implicit features representing the view-dependent appearance of the subject independently of the geometry layout, allowing for generalization to unseen subject poses and even to novel subject identities. Detailed experiments and comparisons demonstrate the efficacy and versatility of our method in generating high-quality results, significantly outperforming existing state-of-the-art solutions.
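The abstract describes the architecture only at a high level. As a rough illustration, the minimal PyTorch sketch below shows the general deferred neural rendering pattern it alludes to: a learned per-texel feature map (a "deep texture") is sampled at rasterized UV coordinates and decoded into a view-dependent image by a small convolutional network. All module names, feature sizes, and the toy training step are assumptions for illustration, not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NeuralTextureRenderer(nn.Module):
        """Illustrative deferred neural rendering: a learnable feature texture
        is sampled at rasterized UVs and decoded to RGB by a small CNN."""

        def __init__(self, tex_res=512, feat_dim=16):
            super().__init__()
            # Learnable "deep texture": per-texel features instead of RGB albedo.
            self.texture = nn.Parameter(torch.randn(1, feat_dim, tex_res, tex_res) * 0.01)
            # Small decoder conditioned on the per-pixel view direction
            # (3 extra channels) so the output can be view dependent.
            self.decoder = nn.Sequential(
                nn.Conv2d(feat_dim + 3, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 1),
            )

        def forward(self, uv, view_dir):
            # uv: (B, H, W, 2) rasterized texture coordinates in [-1, 1]
            # view_dir: (B, 3, H, W) per-pixel unit view directions
            b = uv.shape[0]
            feats = F.grid_sample(self.texture.expand(b, -1, -1, -1), uv,
                                  mode='bilinear', align_corners=False)
            return self.decoder(torch.cat([feats, view_dir], dim=1))

    # Toy training step against a single stand-in target frame.
    model = NeuralTextureRenderer()
    uv = torch.rand(1, 256, 256, 2) * 2 - 1            # stand-in for rasterized UVs
    view = F.normalize(torch.randn(1, 3, 256, 256), dim=1)
    target = torch.rand(1, 3, 256, 256)                # stand-in for a captured frame
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    opt.zero_grad()
    F.l1_loss(model(uv, view), target).backward()
    opt.step()

In a full system, the UV and view-direction inputs would come from rasterizing the reconstructed geometry, and the decoder would typically be a deeper network trained across many frames, viewpoints, and lighting conditions; the single optimization step above only shows that such a pipeline is end-to-end differentiable.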
