Yanqi Zhou

Yanqi Zhou is a research scientist at Google Brain, Mountain View, working with James Laudon. She received her Ph.D. from Princeton University (2011-2017), advised by David Wentzlaff; during her Ph.D. studies she also collaborated extensively with Doug Burger and Karin Strauss at Microsoft Research. She obtained her bachelor's degree from the University of Michigan (2009-2011) and Shanghai Jiao Tong University (2007-2009). Her research interests lie in computer systems and machine learning. More specifically, Yanqi applies machine learning to design more efficient computer systems and builds large-scale deep learning models for speech and language tasks.
Authored Publications
Learning Large Graph Property Prediction via Graph Segment Training
Kaidi Cao
Mangpo Phothilimthana
Charith Mendis
Jure Leskovec
Advances in Neural Information Processing Systems (2023)
Searching for Efficient Neural Architectures for On-Device ML on Edge TPUs
Anton Spiridonov
Hao Xu
Marie Charisse White
Ping Zhou
Suyog Gupta
Yun Long
Zhuo Wang
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2022)
LaMDA: Language Models for Dialog Applications
Aaron Daniel Cohen
Alena Butryna
Alicia Jin
Apoorv Kulshreshtha
Ben Zevenbergen
Chung-ching Chang
Cosmo Du
Daniel De Freitas Adiwardana
Dehao Chen
Dmitry (Dima) Lepikhin
Erin Hoffman-John
Igor Krivokon
James Qin
Jamie Hall
Joe Fenton
Johnny Soraker
Kathy Meier-Hellstern
Maarten Paul Bosma
Marc Joseph Pickett
Marcelo Amorim Menegali
Marian Croak
Maxim Krikun
Noam Shazeer
Rachel Bernstein
Ravi Rajakumar
Ray Kurzweil
Romal Thoppilan
Steven Zheng
Taylor Bos
Toju Duke
Tulsee Doshi
Vincent Y. Zhao
Will Rusch
Yuanzhong Xu
Zhifeng Chen
arXiv (2022)
Sparsely Activated Language Models are Efficient In-Context Learners
Barret Richard Zoph
Dmitry (Dima) Lepikhin
Emma Wang
Kathy Meier-Hellstern
Kun Zhang
Liam B. Fedus
Maarten Paul Bosma
Marie Pellat
Maxim Krikun
Nan Du
Simon Tong
Tao Wang
Toju Duke
Yonghui Wu
Yuanzhong Xu
Zhifeng Chen
Zongwei Zhou
(2022)
A Flexible Approach to Autotuning Multi-Pass Machine Learning Compilers
Berkin Ilbeyi
Bjarke Roune
Blake Hechtman
Emma Wang
Karthik Srinivasa Murthy
Mangpo Phothilimthana
Mike Burrows
Nikhil Sarda
Rezsa Farahani
Samuel J. Kaufman
Shen Wang
Sudip Roy
Yuanzhong Xu
PACT (2021)
A Learned Performance Model for Tensor Processing Units
Charith Mendis
Mangpo Phothilimthana
Mike Burrows
Samuel J. Kaufman
Sudip Roy
MLSys (2021)
A Learned Performance Model for the Tensor Processing Unit
Mangpo Phothilimthana
Mike Burrows
Sam Kaufman
arXiv (2020)
Graph Transformer: A Generalized Method for Computation Graph Optimizations
Amirali Abdolrashidi
Azalia Mirhoseini
Daniel Wong
Hanxiao Liu
Mangpo Phothilimthana
Qiumin Xu
Shen Wang
Sudip Roy
(2020)
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel
Michael Matena
Noam Shazeer
Peter J. Liu
Sharan Narang
Wei Li
Google (2019)