Chandu Thekkath

Chandu Thekkath has been a Research Scientist at Google since March 2020. Before that, he was a Distinguished Engineer at Microsoft, where he worked for 18 years: 15 in Microsoft Research Silicon Valley and Bangalore, and 3 in the Azure Machine Learning product group. Prior to Microsoft, he was a Consulting Engineer at the DEC Systems Research Center. Before his research career, Thekkath worked as a software development engineer at Hewlett Packard and at Monolithic Memories (now part of AMD). Thekkath received a BTech in Electronics from IIT Madras, where he was awarded the Governor’s Prize, an MSEE from UC Santa Barbara, an MS in Computer Science from Stanford, and a PhD in Computer Science from the University of Washington. He is a Fellow of the ACM and has published a number of influential papers in operating systems, distributed systems, networks, and computer architecture.
Authored Publications
Google Publications
Other Publications
We present the design of a new large-scale orchestration layer for accelerators. Our system, Pathways, is explicitly designed to enable exploration of new systems and ML research ideas, while retaining state-of-the-art performance for current models. Pathways uses a sharded dataflow graph of asynchronous operators that consume and produce futures, and efficiently gang-schedules heterogeneous parallel computations on thousands of accelerators while coordinating data transfers over their dedicated interconnects. Pathways makes use of a novel asynchronous distributed dataflow design that lets the control plane execute in parallel despite dependencies in the data plane. This design, with careful engineering, allows Pathways to adopt a single-controller model that makes it easier to express complex new parallelism patterns. We demonstrate that Pathways can achieve performance parity (~100% accelerator utilization) with state-of-the-art systems when running SPMD computations over 2048 TPUs, while also delivering throughput comparable to the SPMD case for Transformer models that are pipelined across 16 stages, or sharded across two islands of accelerators connected over a data center network.
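The core idea in the abstract — a dataflow graph of asynchronous operators that consume and produce futures, letting the control plane run ahead of the data plane — can be illustrated with a small sketch. This is a hypothetical toy in plain Python, not the Pathways implementation; the names (`async_op`, the thread pool standing in for accelerator workers) are illustrative assumptions:

```python
# Toy sketch of futures-based asynchronous dataflow (NOT Pathways itself).
# Each operator consumes input futures and produces an output future, so
# the controller can enqueue the entire graph without blocking on any
# upstream result; data dependencies resolve through the futures.
from concurrent.futures import ThreadPoolExecutor, Future

pool = ThreadPoolExecutor(max_workers=4)  # stand-in for accelerator workers

def async_op(fn, *input_futures) -> Future:
    """Schedule fn to run once its input futures resolve; return a future."""
    def run():
        return fn(*(f.result() for f in input_futures))
    return pool.submit(run)

# Build a small three-node dataflow graph: the "control plane" issues all
# three operators immediately, in parallel with the "data plane" work.
a = async_op(lambda: [1, 2, 3])                    # source shard
b = async_op(lambda xs: [x * x for x in xs], a)    # elementwise square
c = async_op(lambda xs: sum(xs), b)                # reduction
print(c.result())  # → 14
```

In the real system the operators run on remote accelerators and the single controller gang-schedules them over dedicated interconnects; the sketch only shows the programming pattern of chaining futures so dispatch never waits for data.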