Ioannis Zarkadas
I am broadly interested in systems and performance. I currently work on performance optimizations for the XLA compiler. Previously, I obtained my PhD from Columbia University and my BS from the National Technical University of Athens.
Authored Publications
Snap & Replay: A new way to analyze uarch-scale performance bottlenecks for ML accelerators
Amanda Tomlinson
Asaf Cidon
Baris Kasikci
Ofir Weisse
Proceedings of the 2025 ACM Symposium on Cloud Computing, Association for Computing Machinery, 283–298
Abstract
As models become larger, ML accelerators are a scarce resource whose performance must be continually optimized to improve efficiency. Existing performance analysis tools are coarse-grained and fail to capture model performance at the machine-code level. In addition, these tools often do not provide specific recommendations for optimizations. We present SnR, a fine-grained methodology for analyzing ML models at the machine-code level that provides actionable optimization suggestions. Our core insight is to use a hardware-level simulator, an artifact of the hardware design process that we can repurpose for performance analysis. SnR captures traces from production deployments running on accelerators and replays them in a modified microarchitecture simulator to gain low-level insights into the model's performance. We implement SnR for our in-house accelerator and use it to analyze the performance of several of our production LLMs, revealing several previously unknown microarchitecture inefficiencies. Based on these findings, we implement optimizations that have decreased token generation latency for our already highly optimized production LLMs by up to 4.1%.
(*) Ioannis Zarkadas and Amanda Tomlinson are equal-contribution co-authors of this work.