Junfeng He

Short bio

Junfeng He is a tech lead and research scientist at Google Research. He received his bachelor's and master's degrees from Tsinghua University and his PhD from Columbia University.
His full publication list can be found on his Google Scholar page.

Research areas

His major research areas include computer vision, machine learning, search/retrieval/ranking, HCI, and health. He has about 20 years of research experience in image retrieval & classification, image generation/editing and their detection, ranking, large-scale (approximate) machine learning, and related topics.

His current research interests include:
  • User foundation models that model user feedback, behavior, and interaction on visual content, and their application to improving content generation and design
  • Generative models, especially learning from human feedback (LHF), post-training improvement, evaluation, and behavior understanding for generative models
  • Computer vision with humans in the loop, and the intersection of computer vision and human vision/perception

    Recent research papers (*: co-first author, +: corresponding author)



    User foundation models, and their use in evaluating/optimizing generative models and content creation

  • Rich Human Feedback for Text-to-Image Generation, Youwei Liang*, Junfeng He*+, Gang Li*+, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, Junjie Ke, Krishnamurthy Dj Dvijotham, Katherine M Collins, Yiwen Luo, Yang Li, Kai J Kohlhoff, Deepak Ramachandran, Vidhya Navalpakkam, CVPR 2024 (Best Paper)
  • Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation, Seung Hyun Lee, Yinxiao Li, Junjie Ke, Innfarn Yoo, Han Zhang, Jiahui Yu, Qifei Wang, Fei Deng, Glenn Entis, Junfeng He, Gang Li, Sangpil Kim, Irfan Essa, Feng Yang, ECCV 2024
  • Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation, Katherine M. Collins, Najoung Kim, Yonatan Bitton, Verena Rieser, Shayegan Omidshafiei, Yushi Hu, Sherol Chen, Senjuti Dutta, Minsuk Chang, Kimin Lee, Youwei Liang, Georgina Evans, Sahil Singla, Gang Li, Adrian Weller, Junfeng He, Deepak Ramachandran, Krishnamurthy Dj Dvijotham, AIES 2024
  • ALOHA: from Attention to Likes – a unified mOdel for understanding HumAn responses to diverse visual content, Peizhao Li*, Junfeng He*+, Gang Li*+, Rachit Bhargava, Shaolei Shen, Nachiappan Valliappan, Youwei Liang, Hongxiang Gu, Venky Ramachandran, Golnaz Farhadi, Yang Li, Kai J Kohlhoff, Vidhya Navalpakkam, arXiv
  • Deep Saliency Prior for Reducing Visual Distraction, Kfir Aberman*, Junfeng He*, Yossi Gandelsman, Inbar Mosseri, David E Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein, CVPR 2022

    Modeling of human attention & behavior and its applications

  • Learning from Unique Perspectives: User-aware Saliency Modeling, Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai J Kohlhoff, Junfeng He+, CVPR 2023
  • Teacher-generated spatial-attention labels boost robustness and accuracy of contrastive models, Yushi Yao*, Chang Ye*, Junfeng He+, Gamaleldin Fathy Elsayed+, CVPR 2023
  • Teacher-generated pseudo human spatial-attention labels boost contrastive learning models, Yushi Yao, Chang Ye, Junfeng He, Gamaleldin Fathy Elsayed, SVRHM Workshop @ NeurIPS 2022
  • Smartphone-based gaze estimation for in-home autism research, Na Yeon Kim, Junfeng He, Qianying Wu, Na Dai, Kai Kohlhoff, Jasmin Turner, Lynn K Paul, Daniel P Kennedy, Ralph Adolphs, Vidhya Navalpakkam, Autism Research, 2024
  • Accelerating eye movement research via accurate and affordable smartphone eye tracking, N Valliappan, N Dai, E Steinberg, J He, K Rogers…, Nature Communications, 2020
  • On-Device Few-Shot Personalization for Real-Time Gaze Estimation, Junfeng He, Khoi Pham, Nachiappan Valliappan, Pingmei Xu, Chase Roberts, Dmitry Lagun, Vidhya Navalpakkam, ICCV 2019 GAZE Workshop (Best Paper)
  • GazeGAN: Unpaired adversarial image generation for gaze estimation, M Sela, P Xu, J He, V Navalpakkam, D Lagun, arXiv preprint arXiv:1711.09767, 2017
  • Differentially Private Heatmaps, Badih Ghazi, Junfeng He, Kai Kohlhoff, Ravi Kumar, Pasin Manurangsi, Vidhya Navalpakkam, Nachiappan Valliappan, AAAI 2023

    Awards

  • Best Paper Award, CVPR, 2024
  • Publication & Open-Sourcing Excellence Award, Perira org, Google Research, 2021
  • Best Paper Award, ICCV GAZE Workshop, 2019

    Google Blogposts

  • Blogpost for "Rich human feedback for text-to-image generation"
  • Blogpost for "Enabling delightful user experiences via predictive models of human attention"
  • Blogpost for using saliency in JPEG XL

    Authored Publications

  • UniAR: A Unified model for predicting human Attention and Responses on visual content, Peizhao Li, Gang Li, Rachit Bhargava, Shaolei Shen, Youwei Liang, Hongxiang Gu, Venky Ramachandran, Golnaz Farhadi, Kai Kohlhoff, 2024
  • Rich Human Feedback for Text to Image Generation, Katherine Collins, Nicholas Carolan, Youwei Liang, Peizhao Li, Dj Dvijotham, Gang Li, Sarah Young, Jiao Sun, Kai Kohlhoff, Arseniy Klimovskiy, 2024
  • Accelerating eye movement research via accurate and affordable smartphone eye tracking, Na Dai, Ethan Steinberg, Kantwon Rogers, Venky Ramachandran, Mina Shojaeizadeh, Li Guo, Kai Kohlhoff, Nature Communications, 11 (2020)