Network infrastructure

We design and build the world's most innovative and efficient datacenter networks and end-host networking stacks, enabling compute and storage capabilities not available anywhere else.

About the team

Our team brings together experts in networking, distributed systems, kernel and systems programming, end-host stacks, and advanced algorithms to create the datacenter networks that power Google. Our networks are among the world’s largest and fastest, and we design them to be reliable, cheap, and easy to evolve. We often use new technologies unavailable outside Google.

We exemplify Google’s Hybrid Approach to Research: we deploy real-world systems at global scale. Many members of our team have extensive research experience; we publish papers at conferences such as SIGCOMM, NSDI, SOSP, and OSDI; and we work closely with interns and faculty from leading universities.

Every Google product relies on the technologies we develop. Our networks support complex, highly available, planetary-scale distributed systems with billions of users. We constantly evolve our networks to meet the requirements of, and create opportunities for, new and better Google products, especially the rapidly growing Google Cloud.

Our team works in many locations: Sunnyvale CA, New York City, Madison WI, Boulder CO, Reston VA, and Seattle WA.

Featured publications

Orion: Google’s Software-Defined Networking Control Plane
Amr Sabaa, Henrik Muehe, Joon Suan Ong, Karthik Swaminathan Nagaraj, KondapaNaidu Bollineni, Lorenzo Vicisano, Mike Conley, Min Zhu, Rich Alimi, Shawn Chen, Shidong Zhang, Waqar Mohsin
(2021)

1RMA: Re-Envisioning Remote Memory Access for Multi-Tenant Datacenters
Aditya Akella, Arjun Singhvi, Joel Scherpelz, Monica C Wong-Chan, Moray Mclaren, Prashant Chandra, Rob Cauble, Sean Clark, Simon Sabato, Thomas F. Wenisch
Proceedings of the Annual Conference of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), Association for Computing Machinery, New York, NY, USA (2020), 708–721

Andromeda: Performance, Isolation, and Velocity at Scale in Cloud Network Virtualization
Mike Dalton, David Schultz, Ahsan Arefin, Alex Docauer, Anshuman Gupta, Brian Matthew Fahs, Dima Rubinstein, Enrique Cauich Zermeno, Erik Rubow, Jake Adriaens, Jesse L Alpert, Jing Ai, Jon Olson, Kevin P. DeCabooter, Nan Hua, Nathan Lewis, Nikhil Kasinadhuni, Riccardo Crepaldi, Srinivas Krishnan, Subbaiah Venkata, Yossi Richter
15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2018)

Snap: a Microkernel Approach to Host Networking
Jacob Adriaens, Sean Bauer, Carlo Contavalli, Mike Dalton, William C. Evans, Nicholas Kidd, Roman Kononov, Carl Mauer, Emily Musick, Lena Olson, Mike Ryan, Erik Rubow, Kevin Springborn, Valas Valancius
In ACM SIGOPS 27th Symposium on Operating Systems Principles (SOSP), ACM, New York, NY, USA (2019)

Nines are Not Enough: Meaningful Metrics for Clouds
Proc. 17th Workshop on Hot Topics in Operating Systems (HotOS) (2019)

Minimal Rewiring: Efficient Live Expansion for Clos Data Center Networks
Shizhen Zhao, Joon Ong
Proc. 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2019), USENIX Association

BBR: Congestion-Based Congestion Control
C. Stephen Gunn, Van Jacobson
Communications of the ACM, 60 (2017), pp. 58-66

Join our team

Internships

We have a vigorous internship program, with a strong focus on PhD-level students who would like to understand how large-scale networks are designed, built, and operated. We also hire Bachelor's- and Master's-level interns. Most of our internship projects focus on building software, especially distributed systems and kernels, and do not necessarily require a prior background in networking.

Please check again in September or October 2024 to find out about internships for 2025.

Open role(s)

  • Software Engineer, Systems and Infrastructure, PhD University Graduate
    • PhD-level software engineers in Network Infrastructure apply their research training to the toughest problems in large-scale, high-performance, high-availability distributed systems: designing, managing, measuring, and controlling our datacenter, WAN, and peering-edge SDN networks (each of which has been the subject of at least one SIGCOMM paper). We're also creating innovative end-host stacks that support CPU-efficient, low-latency, congestion-aware communication with secure isolation between users. You'll work with skillful, creative people, including authors of research papers you've read, and you'll stay connected with the academic research community.
    • Note that this job opening covers teams besides Network Infrastructure; several of our teams are looking for candidates with a mix of "Systems" skills.