PROMPTS: Performance Optimization via Multi-Agent Planning for LLM Training and Serving

Yuran Ding
Ruobing Han
Xinwei Chen
2026

Abstract

Optimizing large language model (LLM) training and serving on large-scale distributed systems with hundreds to thousands of accelerators is challenging due to rapidly evolving LLMs, the strong domain expertise required, and the varied optimization goals of different workloads. Existing methods rely either on handcrafted optimization performed by human experts, which is tedious and time-consuming, or on resource-intensive black-box searches, which lack the extensibility to keep pace with evolving models and hardware. To address this, we introduce PROMPTS, a novel multi-agent framework that complements traditional search methods with expert-informed reasoning. It automates the diagnosis of performance bottlenecks by synthesizing profiler data and leverages a knowledge base to propose optimized sharding configurations with detailed justifications.

Across eight real-world production workloads, PROMPTS demonstrated remarkable efficiency and accuracy, delivering performance improvements of up to 434%. These workloads spanned diverse model architectures, hardware platforms, computational scales, and various stages of the machine learning lifecycle (pre-training, serving, and post-training). In every case, the configuration adopted by human engineers was identified within the agent's top three proposals from a single invocation. Furthermore, the agent's top-ranked recommendation was the one ultimately adopted in 87.5% of cases, showcasing its ability to not only find optimized solutions, but also to correctly prioritize them. Our work establishes PROMPTS as a scalable, extensible, and explainable methodology for AI-assisted performance engineering in large-scale ML systems.
