Google Research

Montage4D: Interactive seamless fusion of multiview video textures

  • Ruofei Du
  • Ming Chuang
  • Wayne Chang
  • Hugues Hoppe
  • Amitabh Varshney
Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2018

Abstract

The commoditization of virtual and augmented reality devices and the availability of inexpensive consumer depth cameras have catalyzed a resurgence of interest in spatiotemporal performance capture. Recent systems like Fusion4D and Holoportation address several crucial problems in the real-time fusion of multiple depth maps into volumetric and deformable representations. Nonetheless, stitching multiview video textures on dynamic meshes remains challenging due to imprecise geometries, occlusion seams, and critical time constraints. In this paper, we present a practical solution towards real-time seamless texture montage for dynamic multiview reconstruction. We build on the ideas of dilated depth discontinuities and majority voting from the Holoportation project to reduce ghosting effects when blending textures. In contrast to their approach, we determine the appropriate blend of textures per vertex using view-dependent rendering techniques, so as to avoid the fuzziness caused by normal-based blending. By making use of discrete-differential-geometry-guided geodesics and temporal texture fields, our algorithm mitigates spatial occlusion seams while maintaining temporal consistency. Experiments demonstrate significant enhancement in rendering quality, especially for detailed regions such as faces. We envision a wide range of applications for Montage4D, including immersive telepresence for business, training, and live entertainment.
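The per-vertex view-dependent blending described above can be illustrated with a minimal sketch. This is not the paper's exact formulation (which also incorporates occlusion seams, geodesics, and temporal texture fields); the power-of-alignment weighting and the `alpha` sharpness exponent here are illustrative assumptions:

```python
import numpy as np

def blend_weights(view_dir, cam_dirs, normal, alpha=4.0):
    """Per-vertex view-dependent blend weights over N camera textures.

    view_dir: unit vector from the vertex toward the rendering viewpoint.
    cam_dirs: (N, 3) unit vectors from the vertex toward each camera.
    normal:   unit surface normal at the vertex.
    alpha:    sharpness exponent controlling view-dependence (assumed value).
    """
    cam_dirs = np.asarray(cam_dirs, dtype=float)
    # Cameras behind the surface cannot see this vertex and contribute nothing.
    visible = cam_dirs @ normal > 0.0
    # Favor cameras whose viewing direction aligns with the current view,
    # which sharpens detail compared to purely normal-based blending.
    align = np.clip(cam_dirs @ view_dir, 0.0, 1.0) ** alpha
    w = np.where(visible, align, 0.0)
    s = w.sum()
    return w / s if s > 0 else w
```

For example, with the viewpoint looking straight down the surface normal, a camera aligned with the view receives all the weight, while side-on and back-facing cameras are suppressed.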
