
Vinh Q. Tran
I’m a Staff Research Engineer at Google Research NY, where I work on methods for sequence modeling, NLP, and machine learning. My research interests span all things related to improving, expanding, or rethinking the functionality of Transformers and other state-of-the-art sequence models.
Authored Publications
UL2: Unifying Language Learning Paradigms
Yi Tay, Xavier Garcia, Jason Wei, Hyung Won Chung, Steven Zheng, Neil Houlsby
ICLR (2023)
Understanding Generative Retrieval at Scale
Ronak Pradeep, Jimmy Lin
EMNLP (2023)
DSI++: Updating Transformer Memory with New Documents
Yi Tay, Jinfeng Rao, Emma Strubell
EMNLP (2023)
Dense Feature Memory Augmented Transformers for COVID-19 Vaccination Search Classification
Yi Tay, Chaitanya Kamath, Shailesh Bavadekar, Evgeniy Gabrilovich
EMNLP (2022)
A New Generation of Perspective API: Efficient Multilingual Character-level Transformers
Alyssa Whitlock Lees, Yi Tay
KDD (2022)
ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning
Vamsi Aribandi, Yi Tay, Jinfeng Rao, Steven Zheng, Jianmo Ni, Sebastian Ruder
ICLR (2022)
Confident Adaptive Language Modeling
Adam Fisch, Yi Tay
NeurIPS (2022)
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization
Yi Tay, Sebastian Ruder, Hyung Won Chung, Cong Yu
ICLR (2022)
Transformer Memory as a Differentiable Search Index
Yi Tay, Jianmo Ni, Harsh Mehta, Zhe Zhao
NeurIPS (2022)
Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models
Pat Verga, Jianmo Ni
arXiv (2022)