Kripasindhu Sarkar

Kripasindhu Sarkar is a Research Scientist in the AR perception group at Google, where he works on photorealistic rendering of humans and human-centric vision in the context of AR and VR. Prior to Google, he was a postdoctoral researcher in Prof. Christian Theobalt's Visual Computing and AI department at the Max Planck Institute for Informatics, and he obtained his PhD under Prof. Didier Stricker at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern. He received his bachelor's and master's degrees from the Indian Institute of Technology Kharagpur (IIT Kharagpur), India.

Authored Publications
    Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures
    Marcel Bühler, Simon Li, Erroll Wood, Leonhard Helminger, Xu Chen, Tanmay Shah, Daoye Wang, Stephan Garbin, Otmar Hilliges, Dmitry Lagun, Jérémy Riviere, Paulo Gotardo, Thabo Beeler, Abhi Meka
    2024
    Volumetric modeling and neural radiance field representations have revolutionized 3D face capture and photorealistic novel view synthesis. However, these methods often require hundreds of multi-view input images and are thus inapplicable to cases with fewer than a handful of inputs. We present a novel volumetric prior on human faces that allows for high-fidelity expressive face modeling from as few as three input views captured in the wild. Our key insight is that an implicit prior trained on synthetic data alone can generalize to extremely challenging real-world identities and expressions and render novel views with fine idiosyncratic details like wrinkles and eyelashes. We leverage a 3D Morphable Face Model to synthesize a large training set, rendering each identity with different expressions, hair, clothing, and other assets. We then train a conditional Neural Radiance Field prior on this synthetic dataset and, at inference time, fine-tune the model on a very sparse set of real images of a single subject. On average, the fine-tuning requires only three inputs to cross the synthetic-to-real domain gap. The resulting personalized 3D model reconstructs strong idiosyncratic facial expressions and outperforms the state of the art in high-quality novel view synthesis of faces from sparse inputs in terms of perceptual and photometric quality.
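
A minimal sketch may make the recipe above concrete: a NeRF-style MLP conditioned on a per-identity latent code, assumed to be pretrained on the synthetic 3DMM renders, is fine-tuned on rays from roughly three real views of one subject. This is written for illustration only and is not taken from the paper; the names (PriorNeRF, render_rays), layer sizes, and stand-in ray data are assumptions.

import torch
import torch.nn as nn

class PriorNeRF(nn.Module):
    """Hypothetical conditional NeRF prior: (3D point, identity latent) -> (density, RGB)."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, points, z):
        # points: (N, S, 3) samples along N rays; z: (latent_dim,) per-identity code.
        z = z.expand(*points.shape[:-1], -1)
        out = self.mlp(torch.cat([points, z], dim=-1))
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])  # density, color

def render_rays(model, origins, dirs, z, n_samples=32, near=0.1, far=2.0):
    # Standard NeRF-style volume rendering along each ray.
    t = torch.linspace(near, far, n_samples)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]
    sigma, rgb = model(pts, z)
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * (far - near) / n_samples)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)  # (N, 3) rendered pixel colors

# Few-shot personalization: jointly fine-tune the (pretrained) prior and a
# per-subject latent on rays sampled from ~3 real photos (stand-in data here).
model = PriorNeRF()
z = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam(list(model.parameters()) + [z], lr=1e-4)
for step in range(100):
    origins, dirs, target = torch.randn(256, 3), torch.randn(256, 3), torch.rand(256, 3)
    loss = ((render_rays(model, origins, dirs, z) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

The intent of the prior is that, after synthetic pretraining, this light test-time fine-tuning on about three views suffices to cross the synthetic-to-real gap while preserving idiosyncratic detail.
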
    Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos
    Ziqian Bai, Danhang "Danny" Tang, Di Qiu, Abhimitra Meka, Mingsong Dou, Ping Tan, Thabo Beeler
    2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE
    We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild. The learnt avatar is driven by a parametric face model to achieve user-controlled facial expressions and head poses. Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism. To reduce over-smoothing and improve out-of-model expression synthesis, we propose to predict local features anchored on the 3DMM geometry. These learnt features are driven by 3DMM deformation and interpolated in 3D space to yield the volumetric radiance at a designated query point. We further show that using a Convolutional Neural Network in the UV space is critical to incorporating spatial context and producing representative local features. Extensive experiments show that we are able to reconstruct high-quality avatars, with more accurate expression-dependent details, good generalization to out-of-training expressions, and quantitatively superior renderings compared to other state-of-the-art approaches.
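
The local-feature idea in this abstract can be sketched roughly as follows: a small CNN runs over a 3DMM UV map, its output is sampled back to the deformed mesh vertices, and those anchored vertex features are interpolated at arbitrary 3D query points before being decoded into density and color. This is an illustrative sketch, not the paper's implementation; the inverse-distance k-nearest-neighbour interpolation, the layer sizes, and names such as LocalFeatureField are assumptions.

import torch
import torch.nn as nn

class LocalFeatureField(nn.Module):
    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        # Stand-in for the UV-space CNN that produces spatially-aware features.
        self.uv_cnn = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, RGB)
        )

    def vertex_features(self, uv_map, uv_coords):
        # uv_map: (1, 3, H, W) expression-conditioned UV input.
        # uv_coords: (V, 2) per-vertex UV coordinates in [-1, 1].
        feat_map = self.uv_cnn(uv_map)
        grid = uv_coords.view(1, -1, 1, 2)
        feats = nn.functional.grid_sample(feat_map, grid, align_corners=True)
        return feats.reshape(feat_map.shape[1], -1).t()  # (V, feat_dim)

    def forward(self, query_pts, verts, vert_feats, k=4):
        # query_pts: (Q, 3) volume samples; verts: (V, 3) deformed 3DMM vertices.
        # Interpolate anchored vertex features with inverse-distance weights.
        d = torch.cdist(query_pts, verts)                   # (Q, V)
        knn_d, knn_i = d.topk(k, dim=-1, largest=False)
        w = 1.0 / (knn_d + 1e-6)
        w = w / w.sum(dim=-1, keepdim=True)
        f = (w[..., None] * vert_feats[knn_i]).sum(dim=1)   # (Q, feat_dim)
        out = self.decoder(torch.cat([f, query_pts], dim=-1))
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])  # density, color

# Toy usage with a stand-in UV map and a handful of deformed vertices.
model = LocalFeatureField()
vf = model.vertex_features(torch.rand(1, 3, 64, 64), torch.rand(100, 2) * 2 - 1)
sigma, rgb = model(torch.randn(256, 3), torch.randn(100, 3), vf)

Conditioning the radiance on interpolated local features rather than a single global code is what the abstract credits with reducing over-smoothing and improving synthesis of out-of-model expressions.
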
    LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces
    Marcel Bühler, Simon Li, Daoye Wang, Delio Vicini, Jérémy Riviere, Paulo Gotardo, Thabo Beeler, Abhi Meka
    2023
    High-fidelity, photorealistic 3D capture of a human face is a long-standing problem in computer graphics: the complex material of skin, the intricate geometry of hair, and fine-scale textural details make it challenging. Traditional techniques rely on very large and expensive capture rigs to reconstruct explicit mesh geometry and appearance maps, and are limited by the accuracy of hand-crafted reflectance models. More recent volumetric methods (e.g., NeRFs) have enabled view synthesis and sometimes relighting by learning an implicit representation of the density and reflectance basis, but suffer from artifacts and blurriness due to the inherent ambiguities in volumetric modeling. These problems are further exacerbated when capturing with few cameras and light sources. We present a novel technique for high-quality capture of a human face for 3D view synthesis and relighting using a sparse, compact capture rig consisting of 15 cameras and 15 lights. Our method combines a neural volumetric representation with traditional mesh reconstruction from multi-view stereo. The proxy geometry allows us to anchor the 3D density field to prevent artifacts and to guide the disentanglement of intrinsic radiance components of the face appearance, such as diffuse and specular reflectance and incident radiance (shadowing) fields. Our hybrid representation significantly improves the state-of-the-art quality of arbitrarily dense renders of a face from any desired camera viewpoint as well as under environmental, directional, and near-field lighting.
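
Two ingredients of this abstract, anchoring the density field to the multi-view-stereo proxy and composing appearance from disentangled intrinsic terms, can be illustrated in a few lines. The snippet below is one plausible formulation written for this page, not LitNeRF itself; the Gaussian falloff gate and the exact combination of diffuse, specular, and shadowed incident light are assumptions.

import torch

def anchored_density(raw_sigma, dist_to_mesh, falloff=0.02):
    # Suppress volume density far from the mesh proxy reconstructed by
    # multi-view stereo, so floaters cannot form away from the face surface.
    gate = torch.exp(-(dist_to_mesh / falloff) ** 2)
    return raw_sigma * gate

def compose_radiance(albedo, specular, incident_light, shadow):
    # One plausible intrinsic composition: shadowed incident light modulates
    # the diffuse albedo, and a specular term is added on top. Relighting then
    # amounts to swapping incident_light for a new environment or point light.
    return albedo * incident_light * shadow + specular

# Toy usage on a batch of volume samples.
sigma = anchored_density(torch.rand(1024, 1), torch.rand(1024, 1) * 0.1)
rgb = compose_radiance(torch.rand(1024, 3), 0.1 * torch.rand(1024, 3),
                       torch.ones(1024, 3), torch.rand(1024, 1))
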