
Xingchen Wan
I am a Research Scientist at Google Cloud AI Research. My primary research interest is in large language models (LLMs), and I have worked on topics such as improving their usability through automatic prompt optimization, agentic retrieval-augmented generation, and methods that allow LLMs to self-improve their long-context reasoning capabilities. My other research interests include automated machine learning (AutoML) and Bayesian optimization.
Previously, I completed both my undergraduate studies and my PhD at the University of Oxford.
Authored Publications
From Few to Many: Self-Improving Many-Shot Reasoners Through Iterative Optimization and Generation
Ke Jiang
International Conference on Learning Representations (ICLR) (2025), to appear
Batch Calibration: Rethinking Calibration For In-Context Learning And Prompt Engineering
Lev Proleev
Diana Mincu
International Conference on Learning Representations (ICLR) (2024)
Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization
Advances in Neural Information Processing Systems (NeurIPS) (2024)
UQE: A Query Engine for Unstructured Databases
Hanjun Dai
Bethany Wang
Sherry Yang
Phitchaya Mangpo Phothilimthana
Advances in Neural Information Processing Systems (NeurIPS) (2024)
Universal Self-adaptive Prompting
Hanjun Dai
Empirical Methods in Natural Language Processing (EMNLP) (2023)
Better Zero-Shot Reasoning with Self-Adaptive Prompting
Hanjun Dai
Findings of the Association for Computational Linguistics: ACL 2023