Junfeng He

Short bio

Junfeng He is a research scientist at Google Research. He received his bachelor's and master's degrees from Tsinghua University and his PhD from Columbia University.
His full publication list can be found on his Google Scholar page.

Research areas

His major research areas include computer vision, machine learning, search/retrieval/ranking, HCI, and health. He has about 20 years of research experience in image retrieval and classification, image synthesis/editing and their detection, ranking, large-scale (approximate) machine learning, and related topics.

His current research interests include:
  • User foundation models that model user feedback/behavior/interaction on visual content
  • Applying user foundation models to improve content generation and design
  • Generative models, especially evaluation and learning from human feedback (LHF) for generative models
  • Computer vision with humans in the loop, and the intersection of computer vision and human vision/perception

Recent research papers (*: co-first author, +: corresponding author)

User foundation models

  • Rich Human Feedback for Text-to-Image Generation, Youwei Liang*, Junfeng He*+, Gang Li*+, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, Junjie Ke, Krishnamurthy Dj Dvijotham, Katherine M Collins, Yiwen Luo, Yang Li, Kai J Kohlhoff, Deepak Ramachandran, Vidhya Navalpakkam, CVPR 2024 (Best Paper)
  • UniAR: Unifying Human Attention and Response Prediction on Visual Content, Peizhao Li*, Junfeng He*+, Gang Li*+, Rachit Bhargava, Shaolei Shen, Nachiappan Valliappan, Youwei Liang, Hongxiang Gu, Venky Ramachandran, Golnaz Farhadi, Yang Li, Kai J Kohlhoff, Vidhya Navalpakkam, arXiv

LHF for generative models

  • Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation, Seung Hyun Lee, Yinxiao Li, Junjie Ke, Innfarn Yoo, Han Zhang, Jiahui Yu, Qifei Wang, Fei Deng, Glenn Entis, Junfeng He, Gang Li, Sangpil Kim, Irfan Essa, Feng Yang, arXiv

Modeling of human behavior and its applications

  • Deep Saliency Prior for Reducing Visual Distraction, Kfir Aberman*, Junfeng He*, Yossi Gandelsman, Inbar Mosseri, David E Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein, CVPR 2022
  • Learning from Unique Perspectives: User-aware Saliency Modeling, Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai J Kohlhoff, Junfeng He+, CVPR 2023
  • Teacher-generated spatial-attention labels boost robustness and accuracy of contrastive models, Yushi Yao*, Chang Ye*, Junfeng He+, Gamaleldin Fathy Elsayed+, CVPR 2023
  • Teacher-generated pseudo human spatial-attention labels boost contrastive learning models, Yushi Yao, Chang Ye, Junfeng He, Gamaleldin Fathy Elsayed, SVRHM Workshop @ NeurIPS 2022
  • Smartphone-based gaze estimation for in-home autism research, Na Yeon Kim, Junfeng He, Qianying Wu, Na Dai, Kai Kohlhoff, Jasmin Turner, Lynn K Paul, Daniel P Kennedy, Ralph Adolphs, Vidhya Navalpakkam, Autism Research, 2024
  • Accelerating eye movement research via accurate and affordable smartphone eye tracking, N Valliappan, N Dai, E Steinberg, J He, K Rogers…, Nature Communications, 2020
  • On-Device Few-Shot Personalization for Real-Time Gaze Estimation, Junfeng He, Khoi Pham, Nachiappan Valliappan, Pingmei Xu, Chase Roberts, Dmitry Lagun, Vidhya Navalpakkam, ICCV 2019 GAZE Workshop (Best Paper)
  • GazeGAN: Unpaired adversarial image generation for gaze estimation, M Sela, P Xu, J He, V Navalpakkam, D Lagun, arXiv preprint arXiv:1711.09767, 2017
  • Differentially Private Heatmaps, Badih Ghazi, Junfeng He, Kai Kohlhoff, Ravi Kumar, Pasin Manurangsi, Vidhya Navalpakkam, Nachiappan Valliappan, AAAI 2023

Awards

  • Best Paper Award, CVPR, 2024
  • Publication & Open-Sourcing Excellence Award, Perira org, Google Research, 2021
  • Best Paper Award, ICCV GAZE Workshop, 2019

Media coverage

Leveraging human attention/saliency models to improve JPEG XL compression

  • Google Open Source blog post on using saliency in JPEG XL
  • Google Open Source blog post on open-sourcing the attention center model (and its application in JPEG XL)
Authored Publications

Rich Human Feedback for Text-to-Image Generation (CVPR 2024)
Recent Text-to-Image (T2I) generation models such as Stable Diffusion and Imagen have made significant progress in generating high-resolution images based on text descriptions. However, many generated images still suffer from issues such as artifacts/implausibility, misalignment with text descriptions, and low aesthetic quality. Inspired by the success of Reinforcement Learning with Human Feedback (RLHF) for large language models, prior work collected human-provided scores as feedback on generated images and trained a reward model to improve the T2I generation. In this paper, we enrich the feedback signal by (i) marking image regions that are implausible or misaligned with the text, and (ii) annotating which keywords in the text prompt are not represented in the image. We collect such rich human feedback on 18K generated images and train a multimodal transformer to predict these rich feedback automatically. We show that the predicted rich human feedback can be leveraged to improve image generation, for example, by selecting high-quality training data to finetune and improve the generative models, or by creating masks with predicted heatmaps to inpaint the problematic regions. Notably, the improvements generalize to models (Muse) beyond those used to generate the images on which human feedback data were collected (Stable Diffusion variants).
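
The abstract above mentions turning predicted heatmaps into masks for inpainting problematic regions. The snippet below is only a minimal sketch of that idea, not the authors' released code: the heatmap is assumed to come from a rich-feedback predictor such as the paper's multimodal transformer, the checkpoint name is an assumption, and the inpainting step simply uses the public diffusers API.

```python
# Minimal sketch (assumptions noted above): binarize a predicted
# artifact/implausibility heatmap into a mask and repaint the masked
# regions with an off-the-shelf diffusion inpainting pipeline.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline


def heatmap_to_mask(heatmap: np.ndarray, threshold: float = 0.5) -> Image.Image:
    """Turn a [0, 1] heatmap into a binary PIL mask (white = region to repaint)."""
    return Image.fromarray((heatmap >= threshold).astype(np.uint8) * 255, mode="L")


def repaint_problem_regions(image: Image.Image, prompt: str,
                            heatmap: np.ndarray) -> Image.Image:
    """Inpaint the regions flagged by an externally predicted heatmap."""
    # Checkpoint name is an assumption; any inpainting-capable model would do.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    mask = heatmap_to_mask(heatmap).resize(image.size)
    return pipe(prompt=prompt, image=image, mask_image=mask).images[0]
```
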
Accelerating eye movement research via accurate and affordable smartphone eye tracking (Nature Communications, 2020)
Eye tracking has been widely used for decades in vision research, language and usability. However, most prior research has focused on large desktop displays using specialized eye trackers that are expensive and cannot scale. Little is known about eye movement behavior on phones, despite their pervasiveness and large amount of time spent. We leverage machine learning to demonstrate accurate smartphone-based eye tracking without any additional hardware. We show that the accuracy of our method is comparable to state-of-the-art mobile eye trackers that are 100x more expensive. Using data from over 100 opted-in users, we replicate key findings from previous eye movement research on oculomotor tasks and saliency analyses during natural image viewing. In addition, we demonstrate the utility of smartphone-based gaze for detecting reading comprehension difficulty. Our results show the potential for scaling eye movement research by orders-of-magnitude to thousands of participants (with explicit consent), enabling advances in vision research, accessibility and healthcare.
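
As a rough illustration of the hardware-free, ML-based gaze estimation described above (and not the published model, whose architecture and inputs differ), the sketch below maps a front-camera eye crop to a 2D on-screen gaze point; the network shape and input size are assumptions.

```python
# Minimal sketch: a tiny ConvNet regressing (x, y) screen gaze from an eye crop.
import torch
import torch.nn as nn


class TinyGazeNet(nn.Module):
    """Eye-crop image (3x128x128) -> (x, y) gaze point in screen coordinates."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # regress the 2D on-screen gaze location

    def forward(self, eye_crop: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(eye_crop).flatten(1))


# Usage: gaze_xy = TinyGazeNet()(torch.randn(1, 3, 128, 128))  # shape (1, 2)
```
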
On-Device Few-Shot Personalization for Real-Time Gaze Estimation (ICCV 2019 GAZE Workshop)
Recent research has demonstrated the ability to estimate a user's gaze on mobile devices, by performing inference from an image captured with the phone's front-facing camera, and without requiring specialized hardware. Gaze estimation accuracy is known to improve with additional calibration data from the user. However, most existing methods require either a significant number of calibration points or computationally intensive model fine-tuning that is practically infeasible on a mobile device. In this paper, we overcome limitations of prior work by proposing a novel few-shot personalization approach for 2D gaze estimation. Compared to the best calibration-free model [11], the proposed method yields substantial improvements in gaze prediction accuracy (24%) using only 3 calibration points, in contrast to previous personalized models that offer less improvement while requiring more calibration points. The proposed model requires 20x fewer FLOPS than the state-of-the-art personalized model [11] and can be run entirely on-device and in real-time, thereby unlocking a variety of important applications like accessibility, gaming and human-computer interaction.
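
To make the few-shot personalization idea concrete, here is a minimal sketch under stated assumptions (it is not the paper's implementation): a frozen base gaze model produces per-frame features, and a tiny per-user ridge regressor is fit from only the 3 calibration points and then applied at inference time.

```python
# Minimal sketch of few-shot personalization on top of a frozen base gaze
# model; the base feature extractor is assumed to exist elsewhere.
import numpy as np
from sklearn.linear_model import Ridge


def fit_personalized_head(calib_features: np.ndarray,
                          calib_gaze_xy: np.ndarray) -> Ridge:
    """Fit a per-user correction from a handful (~3) of calibration samples.

    calib_features: (n_calib, d) features from the frozen base model.
    calib_gaze_xy:  (n_calib, 2) ground-truth on-screen gaze points.
    """
    head = Ridge(alpha=1.0)  # strong regularization suits tiny calibration sets
    head.fit(calib_features, calib_gaze_xy)
    return head


def predict_gaze(head: Ridge, features: np.ndarray) -> np.ndarray:
    """Return (n, 2) personalized on-screen gaze estimates."""
    return head.predict(features)
```
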