Google Research

Junfeng He


Short bio

Junfeng He is a research scientist at Google Research. He received his bachelor's and master's degrees from Tsinghua University, and his PhD from Columbia University.
His full publication list can be found on his Google Scholar page.

Research areas

His major research areas include computer vision, machine learning, search/retrieval/ranking, HCI, and health. He has more than 15 years of research experience in image retrieval and classification, image editing and its detection, ranking, large-scale (approximate) machine learning, and related topics.
His current research interest is the intersection of computer vision and human vision/perception, for instance:
  • using computer vision techniques to model and understand human gaze/attention/perception
  • applications of human perception/attention modeling, especially related to health, education, social good, and improving user experience
  • leveraging human vision/perception to inspire and improve computer vision models/systems
  • related trustworthy ML problems such as privacy, fairness, and interpretability

Recent research papers

Modeling of gaze tracking and its applications

  • Accelerating eye movement research via accurate and affordable smartphone eye tracking, N Valliappan, N Dai, E Steinberg, J He, K Rogers…, Nature Communications, 2020
  • On-Device Few-Shot Personalization for Real-Time Gaze Estimation, Junfeng He, Khoi Pham, Nachiappan Valliappan, Pingmei Xu, Chase Roberts, Dmitry Lagun, Vidhya Navalpakkam, ICCV 2019 GAZE Workshop, Best Paper
  • GazeGAN: Unpaired adversarial image generation for gaze estimation, M Sela, P Xu, J He, V Navalpakkam, D Lagun, arXiv preprint arXiv:1711.09767, 2017

Modeling of human attention/perception/vision and its applications

  • Learning from Unique Perspectives: User-aware Saliency Modeling, Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai J Kohlhoff, Junfeng He+, CVPR 2023
  • Deep Saliency Prior for Reducing Visual Distraction, Kfir Aberman*, Junfeng He*, Yossi Gandelsman, Inbar Mosseri, David E Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein, CVPR 2022

Leveraging human vision/perception to improve computer vision

  • Teacher-generated spatial-attention labels boost robustness and accuracy of contrastive models, Yushi Yao*, Chang Ye*, Junfeng He+, Gamaleldin Fathy Elsayed+, CVPR 2023
  • Teacher-generated pseudo human spatial-attention labels boost contrastive learning models, Yushi Yao, Chang Ye, Junfeng He, Gamaleldin Fathy Elsayed, SVRHM Workshop @ NeurIPS 2022

Related trustworthy ML problems

  • Differentially Private Heatmaps, Badih Ghazi, Junfeng He, Kai Kohlhoff, Ravi Kumar, Pasin Manurangsi, Vidhya Navalpakkam, Nachiappan Valliappan, AAAI 2023

Media coverage

Leveraging human attention/saliency models to improve JPEG XL compression

  • Google Open Source blog post on using saliency in JPEG XL
  • Google Open Source blog post on open sourcing the attention center model (and its application in JPEG XL)


Awards

  • Publication & Open Sourcing Excellence Award, Perira org, Google Research, 2021
  • Best Paper Award, ICCV GAZE Workshop, 2019