
Xingchen Wan
I am a Research Scientist at Google Cloud. My primary research interest is in large language models (LLMs); I have worked on topics such as improving their usability through automatic prompt optimization, agentic retrieval-augmented generation, and methods that allow LLMs to self-improve their long-context reasoning capabilities. My other research interests include automated machine learning (AutoML) and Bayesian optimization.
Previously, I completed my undergraduate studies and PhD at the University of Oxford.
Authored Publications
From Few to Many: Self-Improving Many-Shot Reasoners Through Iterative Optimization and Generation
Han Zhou, Ke Jiang
International Conference on Learning Representations (ICLR) (2025)
Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models
Fei Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) (to appear)
UQE: A Query Engine for Unstructured Databases
Hanjun Dai, Bethany Wang, Sherry Yang, Phitchaya Mangpo Phothilimthana
Advances in Neural Information Processing Systems (NeurIPS) (2024)
Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization
Advances in Neural Information Processing Systems (NeurIPS) (2024)
Batch Calibration: Rethinking Calibration For In-Context Learning And Prompt Engineering
Han Zhou, Lev Proleev, Diana Mincu
International Conference on Learning Representations (ICLR) (2024)
Better Zero-Shot Reasoning with Self-Adaptive Prompting
Hanjun Dai
Findings of the Association for Computational Linguistics: ACL 2023
Universal Self-adaptive Prompting
Hanjun Dai
Empirical Methods in Natural Language Processing (EMNLP) (2023)