Kripasindhu Sarkar

Kripasindhu Sarkar is a Research Scientist in the AR perception group at Google, where he works on photorealistic rendering of humans and human-centric vision in the context of AR and VR. Before joining Google, he was a postdoctoral researcher in the Visual Computing and AI department of Prof. Christian Theobalt at the Max Planck Institute for Informatics, and he obtained his PhD under Prof. Didier Stricker at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern. He received his bachelor's and master's degrees from the Indian Institute of Technology Kharagpur (IIT Kharagpur), India.

Authored Publications
Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos
Ziqian Bai, Danhang "Danny" Tang, Di Qiu, Abhimitra Meka, Mingsong Dou, Ping Tan, Thabo Beeler
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE
Abstract: We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild. The learnt avatar is driven by a parametric face model to achieve user-controlled facial expressions and head poses. Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism. To reduce over-smoothing and improve out-of-model expression synthesis, we propose to predict local features anchored on the 3DMM geometry. These learnt features are driven by 3DMM deformation and interpolated in 3D space to yield the volumetric radiance at a designated query point. We further show that using a convolutional neural network in the UV space is critical for incorporating spatial context and producing representative local features. Extensive experiments show that we are able to reconstruct high-quality avatars, with more accurate expression-dependent details, good generalization to out-of-training expressions, and quantitatively superior renderings compared to other state-of-the-art approaches.
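
The anchoring-and-interpolation idea described in the abstract can be sketched in a few lines of PyTorch. The snippet below is only an illustration of that design, not the paper's implementation: the class name AnchoredRadianceField, the network sizes, the UV-space conditioning input, and the inverse-distance blending over the k nearest vertices are all assumptions made for the sketch. A real NeRF pipeline would additionally positionally encode the query point and condition colour on view direction; both are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchoredRadianceField(nn.Module):
    # Sketch of the hybrid pipeline: a CNN over the 3DMM's UV texture domain
    # produces local features; each mesh vertex picks up the feature at its UV
    # coordinate; a 3D query point blends the features of its k nearest
    # (deformed) vertices; an MLP maps the result to colour and density.
    def __init__(self, feat_dim=32, hidden=128, k=8):
        super().__init__()
        self.k = k
        # Hypothetical UV-space CNN; input channels and depth are assumptions.
        self.uv_cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1),
        )
        # NeRF-style head: interpolated feature + query position -> RGB, sigma.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, query_pts, verts, vert_uvs, uv_input):
        # query_pts: (Q, 3) sample points along camera rays
        # verts:     (N, 3) 3DMM vertices deformed by the tracked expression/pose
        # vert_uvs:  (N, 2) per-vertex UV coordinates in [0, 1]
        # uv_input:  (1, 3, H, W) conditioning signal laid out in UV space
        feat_map = self.uv_cnn(uv_input)                       # (1, F, H, W)
        grid = vert_uvs.view(1, -1, 1, 2) * 2.0 - 1.0          # grid_sample expects [-1, 1]
        vert_feats = F.grid_sample(feat_map, grid, align_corners=True)
        vert_feats = vert_feats.squeeze(0).squeeze(-1).t()     # (N, F) feature per vertex
        dist, idx = torch.cdist(query_pts, verts).topk(self.k, largest=False)
        w = 1.0 / (dist + 1e-6)                                # inverse-distance weights
        w = w / w.sum(dim=-1, keepdim=True)                    # (Q, k), normalized
        feat = (w.unsqueeze(-1) * vert_feats[idx]).sum(dim=1)  # (Q, F) blended feature
        out = self.mlp(torch.cat([feat, query_pts], dim=-1))
        rgb, sigma = torch.sigmoid(out[..., :3]), F.relu(out[..., 3:])
        return rgb, sigma                                      # volume-render as usual

Because the anchor features live on the deformed 3DMM vertices, changing the tracked expression moves the anchors, and the same query point picks up different features, which is what lets the radiance field follow expression and pose changes.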