Global networking

The Global Networking (GN) team is responsible for the design, development, build, and operation of the networks that connect our data centers to our customers.

About the team

The Global Networking (GN) team is responsible for the design, development, build, and operation of the global Google network that every Google service, including Google Cloud Platform, runs on. We develop cutting-edge networking technologies that allow Google's global WAN to be zero-touch, build out some of the largest-scale Software-Defined Networking (SDN) infrastructures ever deployed (B4, Espresso), scale the global Content Delivery Networks (CDNs) that support Google services, and develop sophisticated software systems for network capacity forecasting, planning, and optimization.

We continuously expand the reach of Google's network across the world, laying new optical fiber and building hundreds of points of presence worldwide. This global footprint allows us to optimize the end-to-end speed and reliability of the traffic we carry for our users and for Google Cloud customers.

In doing all this, we develop and rely on the most advanced techniques in network hardware and software, traffic engineering, and network management to deliver unprecedented scale, availability, and performance at industry-leading cost points. We are also advancing the state of the art in data analytics and machine learning to drive network efficiency and optimization at scale.

Google has a long history of fundamental research in networking, and we recently engaged in a collaborative research effort with the NSF and other industry partners to launch RINGS, a $40 million academic-research program for Resilient and Intelligent Next-Generation (NextG) Systems. In addition to funding, Google offers expertise, research collaborations, infrastructure, and in-kind support to researchers and students as they advance knowledge and progress in the field. See our blog post for additional information.

Team focus summaries

Congestion control and traffic management

All networks are subject to congestion; we want to operate ours at high utilization while meeting strict performance objectives. We're inventing new congestion-avoidance protocols and improving our global-scale, near-real-time, automated traffic-engineering system. We're also building better ways to measure our networks, accurately and at scale, both to drive our evaluation of congestion-control techniques and to serve as real-time input to automated traffic management.
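To make the congestion-avoidance problem concrete, here is a minimal additive-increase/multiplicative-decrease (AIMD) control loop, the textbook baseline that modern protocols refine; the link model and constants are illustrative, not any production Google algorithm.

```python
# Minimal AIMD congestion-control simulation (illustrative only).
# A sender grows its window additively each round trip until the
# bottleneck queue overflows, then halves it on loss.

BOTTLENECK_PKTS = 100   # bandwidth-delay product of the path, in packets
QUEUE_LIMIT = 20        # bottleneck buffer, in packets

def simulate_aimd(rounds=50):
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        queued = max(0.0, cwnd - BOTTLENECK_PKTS)
        if queued > QUEUE_LIMIT:        # loss: the queue overflowed
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
        else:
            cwnd += 1.0                 # additive increase per RTT
        history.append(cwnd)
    return history

if __name__ == "__main__":
    print(simulate_aimd()[-10:])  # window oscillates around link capacity
```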

Data mining and telemetry

We collect traffic statistics from across our network infrastructure to track performance, quickly detect unusual events, and compute SLA compliance. We rely on advanced data-science techniques, machine learning in particular, to reduce the time it takes to detect and root-cause events. We also use predictive analytics to anticipate some classes of problems, adjusting our traffic engineering (e.g., ahead of a traffic surge) or planning capacity increases.
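As a flavor of this kind of telemetry analysis, the sketch below flags anomalous points in a counter time series with a simple rolling z-score; the window and threshold are illustrative stand-ins for the much richer models used in production.

```python
# Sketch: flag unusual events in an interface-utilization time series
# using a rolling mean/stddev ("z-score") detector.
import statistics
from collections import deque

def detect_anomalies(samples, window=12, threshold=3.0):
    """Yield (index, value) for points more than `threshold` standard
    deviations from the rolling mean of the previous `window` samples."""
    recent = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(recent) == window:
            mu = statistics.fmean(recent)
            sigma = statistics.pstdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
        recent.append(x)

if __name__ == "__main__":
    series = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 102,
              100, 250, 101]  # one injected traffic surge
    print(list(detect_anomalies(series)))  # -> [(13, 250)]
```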

Network management

We're building automated network management systems that enable us to rapidly repair and improve our networks with little or no downtime. We use techniques such as formal modeling of network topologies and highly available distributed systems, while working closely with Google's network engineers and operators to implement automated workflows.
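A minimal sketch of the intent-driven idea, assuming a hypothetical link-level schema: diff the desired topology model against the observed one and emit an ordered repair workflow.

```python
# Sketch of intent-driven repair: diff a desired topology model against
# the observed one and emit an ordered workflow of operations. Entity
# and operation names are hypothetical, not Google's internal schema.
def plan_repairs(desired: dict, actual: dict) -> list:
    """Both models map link name -> attribute dict (e.g., {"speed": "100G"})."""
    ops = []
    for link, attrs in desired.items():
        if link not in actual:
            ops.append(("ADD_LINK", link, attrs))
        elif actual[link] != attrs:
            ops.append(("UPDATE_LINK", link, attrs))
    for link in actual:
        if link not in desired:
            ops.append(("DRAIN_LINK", link))   # drain traffic before removal
            ops.append(("DELETE_LINK", link))
    return ops

if __name__ == "__main__":
    desired = {"sw1:sw2": {"speed": "400G"}, "sw1:sw3": {"speed": "100G"}}
    actual = {"sw1:sw2": {"speed": "100G"}, "sw2:sw3": {"speed": "100G"}}
    for op in plan_repairs(desired, actual):
        print(op)
```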

Optical networking - terrestrial and submarine

We work on developing and deploying cutting-edge optical solutions to scale cost-effectively and to increase network availability. These include new coherent transmission technologies, disaggregated line systems, high-capacity submarine wet plant, subsea switching technologies, transport SDN configurations, and sophisticated physical and logical layer design and optimization tools.
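For a feel of the physical-layer design tools, here is a back-of-the-envelope OSNR budget for a multi-span amplified line, using the textbook approximation OSNR ≈ 58 + P_ch − L_span − NF − 10·log10(N) (dB, in a 0.1 nm reference bandwidth); all parameter values are illustrative.

```python
# Back-of-the-envelope link budget for a multi-span amplified line
# system. Values are illustrative, not from any deployed system.
import math

def osnr_db(p_ch_dbm: float, span_loss_db: float, nf_db: float, n_spans: int) -> float:
    """Textbook OSNR approximation in dB (0.1 nm reference bandwidth)."""
    return 58.0 + p_ch_dbm - span_loss_db - nf_db - 10 * math.log10(n_spans)

if __name__ == "__main__":
    # 0 dBm/channel launch power, 20 dB spans, 5 dB amplifier noise figure
    for spans in (10, 50, 100):
        print(f"{spans:3d} spans -> OSNR ~ {osnr_db(0, 20, 5, spans):.1f} dB")
```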

Programmable packet processing

We’re developing new mechanisms for low-latency, CPU-efficient communication. We want our network switches and endpoints to implement novel packet-processing functions without compromising on cost or performance. We’re exploring hardware and software techniques for fast, flexible, safe packet processing, including onload, offload, RDMA, P4, and more.
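To illustrate the match-action model that P4 and similar systems program, here is a toy exact-match table in Python; the field names and actions are invented for the example.

```python
# A toy match-action table in the spirit of P4/OpenFlow pipelines:
# exact-match keys select an action that rewrites or drops a packet.
def make_table():
    def forward(port):
        def action(pkt):
            pkt["egress_port"] = port
            return pkt
        return action

    def drop(pkt):
        return None

    return {
        ("10.0.0.1", 443): forward(1),
        ("10.0.0.2", 80): forward(2),
        ("10.0.0.3", 22): drop,
    }

def process(pkt, table, default_port=0):
    key = (pkt["dst_ip"], pkt["dst_port"])
    action = table.get(key)
    if action is None:                 # table miss: use the default route
        pkt["egress_port"] = default_port
        return pkt
    return action(pkt)                 # table hit: apply the bound action

if __name__ == "__main__":
    table = make_table()
    print(process({"dst_ip": "10.0.0.1", "dst_port": 443}, table))
    print(process({"dst_ip": "10.0.0.3", "dst_port": 22}, table))  # None = dropped
```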

Rapid and reactive development and testing

To introduce network innovations into production as rapidly as possible without compromising availability, we test our designs and implementations early, often, and extensively. We're developing advanced software-validation techniques, embracing automation in all aspects of testing and qualification, and building powerful infrastructure for testing, debugging, and root-causing in both physical and emulated testbeds.
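A small sketch of the kind of automated invariant checking this enables, on an emulated topology: compute forwarding state, then assert reachability and loop freedom for every source/destination pair. The topology and checks are illustrative.

```python
# Sketch of automated qualification on an emulated topology: compute
# routes, then assert invariants before any rollout.

TOPOLOGY = {"a": {"b", "c"}, "b": {"a", "c", "d"},
            "c": {"a", "b", "d"}, "d": {"b", "c"}}

def shortest_next_hops(graph, dst):
    """BFS from dst; next_hop[n] is n's neighbor one step closer to dst."""
    dist, next_hop, frontier = {dst: 0}, {}, [dst]
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    next_hop[v] = u
                    nxt.append(v)
        frontier = nxt
    return next_hop

def test_all_pairs_reachable_without_loops():
    for dst in TOPOLOGY:
        hops = shortest_next_hops(TOPOLOGY, dst)
        for src in TOPOLOGY:
            node, seen = src, set()
            while node != dst:             # walk the forwarding path
                assert node in hops and node not in seen, f"{src}->{dst} broken"
                seen.add(node)
                node = hops[node]

if __name__ == "__main__":
    test_all_pairs_reachable_without_loops()
    print("all forwarding invariants hold")
```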

Software-defined networking (SDN)

We employ SDN extensively. We were early users of, and contributors to, OpenFlow, and continue, with the P4 network processor programming language, to raise the level of abstraction for silicon-agnostic switching. We are developing SDN controller platforms that can handle Google’s needs for scale and reliability, and SDN applications for routing, traffic management, and other functions.
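As a minimal illustration of raising the abstraction level, the sketch below compiles a path-level intent into per-switch match-action rules; real controllers add consistency, failure handling, and scale far beyond this.

```python
# A minimal, illustrative SDN-controller step: translate a path-level
# intent into per-switch match-action rules. Switch names and the rule
# format are invented for the example.
def compile_intent(flow, path):
    """flow: (src, dst); path: ordered list of (switch, egress_port)."""
    rules = {}
    for switch, egress_port in path:
        rules[switch] = {"match": {"src": flow[0], "dst": flow[1]},
                         "action": {"output": egress_port}}
    return rules

if __name__ == "__main__":
    rules = compile_intent(("10.0.0.1", "10.0.1.1"),
                           [("sw-edge-1", 4), ("sw-spine-2", 7), ("sw-edge-9", 1)])
    for switch, rule in rules.items():
        print(switch, rule)
```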

WAN design

We’ve developed one of the world’s largest, most cost-effective wide area networks, and we continue to increase its scale and reliability, while extracting the best possible performance from WAN hardware and fiber links. We’re employing Google-designed and vendor hardware, SDN controllers, and global-scale automated traffic engineering to address these challenges.
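To give a flavor of centralized traffic engineering, here is a greedy allocator that spreads a demand across parallel WAN paths, preferring lower latency and respecting capacity; production TE solves a far richer optimization.

```python
# Sketch of the kind of allocation a centralized TE system computes:
# spread a demand over parallel paths without exceeding capacity,
# preferring shorter paths. Greedy and illustrative only.
def allocate(demand_gbps, paths):
    """paths: list of (latency_ms, free_capacity_gbps). Returns per-path Gbps."""
    shares = [0.0] * len(paths)
    remaining = demand_gbps
    for i, (_, cap) in sorted(enumerate(paths), key=lambda p: p[1][0]):
        take = min(remaining, cap)       # fill the lowest-latency path first
        shares[i] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError(f"{remaining} Gbps unplaceable; plan a capacity augment")
    return shares

if __name__ == "__main__":
    # 120 Gbps of demand over three paths, ordered here by latency
    print(allocate(120, [(30, 80), (42, 50), (55, 100)]))  # -> [80, 40, 0]
```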

Featured publications

Orion: Google’s Software-Defined Networking Control Plane
Amr Sabaa
Henrik Muehe
Joon Suan Ong
Karthik Swaminathan Nagaraj
Kondapa Naidu Bollineni
Lorenzo Vicisano
Mike Conley
Min Zhu
Rich Alimi
Shawn Chen
Shidong Zhang
Waqar Mohsin
(2021)
Abstract: We present Orion, a distributed Software-Defined Networking platform deployed globally in Google's datacenter (Jupiter) and Wide Area (B4) networks. Orion was designed around a modular, micro-service architecture with a central publish-subscribe database to enable a distributed, yet tightly coupled, software-defined network control system. Orion enables intent-based management and control, is highly scalable, and is amenable to global control hierarchies. Over the years, Orion has matured with continuously improving performance in convergence (up to 40x faster), throughput (handling up to 1.16 million network updates per second), system scalability (supporting 16x larger networks), and data-plane availability (50x and 100x reductions in unavailable time in Jupiter and B4, respectively), while maintaining high development velocity with a bi-weekly release cadence. Today, Orion robustly enables all of Google's Software-Defined Networks, defending against failure modes that are both generic to large-scale production networks and unique to SDN systems.
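The heart of the architecture the abstract describes is a central publish-subscribe store through which micro-services coordinate. A minimal sketch of that pattern follows; the API is invented for illustration and is not Orion's actual interface.

```python
# Minimal sketch of micro-services coordinating through a central
# publish-subscribe store of network state. Invented API, for illustration.
from collections import defaultdict

class PubSubStore:
    def __init__(self):
        self._data = {}
        self._subs = defaultdict(list)

    def subscribe(self, key_prefix, callback):
        self._subs[key_prefix].append(callback)

    def write(self, key, value):
        self._data[key] = value
        for prefix, callbacks in self._subs.items():
            if key.startswith(prefix):       # notify interested services
                for cb in callbacks:
                    cb(key, value)

if __name__ == "__main__":
    store = PubSubStore()
    # A "routing app" reacts to link state published by a "topology app".
    store.subscribe("links/", lambda k, v: print(f"recompute routes: {k}={v}"))
    store.write("links/sw1:sw2", {"status": "down"})
```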
Experiences with Modeling Network Topologies at Multiple Levels of Abstraction
17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020
Abstract: Network management is becoming increasingly automated, and automation depends on detailed, explicit representations of data about both the state of a network and an operator's intent for its networks. In particular, we must explicitly represent the desired and actual topology of a network; almost all other network-management data either derives from the topology, constrains how to use it, or associates resources (e.g., addresses) with specific places in it. We describe MALT, a Multi-Abstraction-Layer Topology representation, which supports virtually all of our network-management phases: design, deployment, configuration, operation, measurement, and analysis. MALT provides interoperability across software systems, and its support for abstraction allows us to explicitly tie low-level network elements to high-level design intent. MALT supports a declarative style that simplifies what-if analysis and testbed support. We also describe the software base that supports efficient use of MALT, as well as numerous, sometimes painful lessons we have learned about curating the taxonomy for a comprehensive, and evolving, representation for topology.
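One way to picture a multi-abstraction-layer topology is as entities that point at the lower-layer entities realizing them, so high-level intent stays tied to physical elements. The sketch below uses a hypothetical schema, not MALT's actual one.

```python
# Illustrative multi-abstraction-layer topology: each entity lives at
# one layer and references the lower-layer entities that realize it.
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str                 # e.g., "PACKET_SWITCH", "CHASSIS", "LINECARD"
    name: str
    realized_by: list = field(default_factory=list)   # lower-layer entities

def physical_elements(entity):
    """Walk abstraction layers down to entities with no further refinement."""
    if not entity.realized_by:
        return [entity]
    leaves = []
    for child in entity.realized_by:
        leaves.extend(physical_elements(child))
    return leaves

if __name__ == "__main__":
    linecard = Entity("LINECARD", "lc-1")
    chassis = Entity("CHASSIS", "ch-1", [linecard])
    switch = Entity("PACKET_SWITCH", "sw1.cluster3", [chassis])
    print([e.name for e in physical_elements(switch)])  # -> ['lc-1']
```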
Classification of load balancing in the Internet
Darryl Veitch
Italo Cunha
Rafael Almeida
Renata Cruz Teixeira
Proceedings of IEEE INFOCOM, IEEE, Beijing, China (2020)
Abstract: Recent advances in programmable data planes, software-defined networking, and the adoption of IPv6 support novel, more complex load-balancing strategies. We introduce the Multipath Classification Algorithm (MCA), a probing algorithm that extends traceroute to identify and classify load balancing in Internet routes. MCA extends existing formalism and techniques to consider that load balancers may use arbitrary combinations of bits in the packet header for load balancing. We propose optimizations to reduce probing cost that are applicable to MCA and existing load-balancing measurement techniques. Through large-scale measurement campaigns, we characterize and study the evolution of load balancing on the IPv4 and IPv6 Internet with multiple transport protocols. Our results show that load balancing is more prevalent and that load-balancing strategies are more mature than previous characterizations have found.
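The core classification idea can be simulated in a few lines: hold every header field fixed except one, probe repeatedly, and conclude the field is part of the load-balancing key if the next hop changes. The hash-based "router" below is a stand-in; real MCA sends crafted traceroute probes.

```python
# Simulated sketch of the classification idea behind MCA.
import hashlib

def next_hop(header: dict, hops=("hop-A", "hop-B")) -> str:
    """Stand-in for a router that load-balances on (src_port, dst_port)."""
    key = f"{header['src_port']}|{header['dst_port']}".encode()
    return hops[hashlib.sha256(key).digest()[0] % len(hops)]

def balances_on(field: str, base: dict) -> bool:
    """Probe with everything fixed except `field`; any change in the
    observed next hop means the field is part of the balancing key."""
    seen = set()
    for value in range(16):
        probe = dict(base, **{field: value})
        seen.add(next_hop(probe))
    return len(seen) > 1

if __name__ == "__main__":
    base = {"src_port": 0, "dst_port": 443, "flow_label": 0}
    for f in ("src_port", "flow_label"):
        print(f, "->", "used for balancing" if balances_on(f, base) else "ignored")
```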
Open Optical Communication Systems at a Hyperscale Operator
Matt Newland
Rene Marcel Schmogrow
Vijay Vusirikala
Journal of Optical Communications and Networking (2020)
Abstract: Open optical networks offer a variety of benefits, such as independence from any single vendor and the opportunity to select best-in-class devices for each role. In this paper we review two degrees of openness in optical networks: transponder-to-line-system and line-system-to-line-system interoperability. In this context we discuss Google's experience with optical link design, software and controls, deployment, and operation.
Network Error Logging: Client-side measurement of end-to-end web service reliability
Ben Jones
Brian Rogan
Charles Stahl
Douglas Creager
Harsha V. Madhyastha
Ilya Grigorik
Julia Elizabeth Tuttle
Lily Chen
Misha Efimov
17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020
Abstract: We present NEL (Network Error Logging), Google's planet-scale, client-side network-reliability measurement system. NEL is implemented in Chrome and has been proposed as a new W3C standard, letting any web site operator collect reports of clients' successful and failed requests to their sites. These reports are similar to web server logs, but include information about failed requests that never reach the serving infrastructure. Reports are uploaded via redundant failover paths, reducing the likelihood of shared-fate failures of report uploads. We have used NEL to monitor all of Google's domains since 2014, allowing us to detect and investigate instances of DNS hijacking, BGP route leaks, protocol deployment bugs, and other problems where packets might never reach our servers. This paper presents the design of NEL, case studies of real outages, and deployment lessons for other operators who choose to use NEL to monitor their traffic.
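Server-side, reports like these are aggregated to surface failures that never reached the serving infrastructure. The sketch below buckets reports by failure phase and type; the report shape loosely follows the W3C draft but is trimmed and partly invented for illustration.

```python
# Sketch of server-side aggregation of NEL-style reports: bucket
# failures by phase and error type. Report fields are illustrative.
from collections import Counter

def summarize(reports):
    failures = Counter()
    total = 0
    for r in reports:
        total += 1
        body = r["body"]
        if body["type"] != "ok":
            failures[(body["phase"], body["type"])] += 1
    return total, failures

if __name__ == "__main__":
    reports = [
        {"body": {"phase": "application", "type": "ok"}},
        {"body": {"phase": "dns", "type": "dns.name_not_resolved"}},
        {"body": {"phase": "connection", "type": "tcp.timed_out"}},
        {"body": {"phase": "dns", "type": "dns.name_not_resolved"}},
    ]
    total, failures = summarize(reports)
    for (phase, err), n in failures.most_common():
        print(f"{phase:11s} {err:25s} {n}/{total}")
```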
The subsea fiber as a Shannon channel
Alexei Pilipetskii
Dmitry Kovsh
Eduardo Mateo
Elizabeth Rivera Hartling
Georg Mohs
Massimiliano Salsi
Maxim Bolshtyansky
Olivier Courtois
Olivier Gautheron
Omar Ait Sab
Pascal Pecci
Priyanth Mehta
Stephen Grubb
Takanori Inoue
Valey Kamalov
Vijay Vusirikala
Vincent Letellier
Yoshihisa Inada
SubOptic 2019
Abstract: For many years, the Q-budget table (normalized by ITU-T G.977) has been widely used to characterize the transmission performance of subsea cables: the table details the margin-allowance breakdown for any modulated wavelength, and the achievable fiber capacity was then deduced from the wavelength spacing and the system operating bandwidth. However, the emergence of coherent detection and Digital Signal Processing (DSP) has enabled the deployment of a wide range of modulation schemes featuring various bit rates, FEC encodings, constellations, spectral shaping, and non-linear-effect mitigation, leading to a transponder-dependent fiber transmission capacity. Combined with the industry's recent trend toward deploying "open" cables, it is now time to define a new method to characterize subsea fiber performance independently of the transponder type. This is emphasized by the introduction of Space Division Multiplexing (SDM) systems equipped with a high fiber-pair count, bringing the granularity to the fiber level: easy to swap, to sell, and to manage. Cable capacity will be evaluated as the sum of fiber capacities deduced from any SLTE (Submarine Line Terminal Equipment) at any time with any margin. The proposed method for non-dispersion-managed undersea systems relies on the Generalized Signal-to-Noise Ratio (GSNR) to remove the effect of baud rate, which changes rapidly with each generation of SLTE. These metrics have already been widely debated at conferences and in publications; topics such as accuracy, the Gaussian Noise (GN) model, assumptions, and measurability are discussed to clarify definitions and a methodology. Finally, the paper reviews and discusses fiber capacity based on a given GSNR-based performance budget and various transponder types.
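A worked example of the paper's framing, treating the fiber as a Shannon channel bounded by the GSNR; the bandwidth and GSNR values are illustrative, and the dual-polarization factor of two is the usual idealization.

```python
# Shannon capacity bound for an optical fiber, parameterized by GSNR.
# Numbers are illustrative, not from any deployed cable.
import math

def fiber_capacity_tbps(bandwidth_ghz: float, gsnr_db: float, dual_pol=True) -> float:
    gsnr = 10 ** (gsnr_db / 10)                      # dB -> linear
    bits_per_hz = math.log2(1 + gsnr) * (2 if dual_pol else 1)
    return bandwidth_ghz * 1e9 * bits_per_hz / 1e12  # Tb/s

if __name__ == "__main__":
    # 4 THz of usable C-band at GSNRs typical of long-haul spans
    for gsnr_db in (12, 15, 18):
        print(f"GSNR {gsnr_db} dB -> <= {fiber_capacity_tbps(4000, gsnr_db):.1f} Tb/s")
```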
B4 and After: Managing Hierarchy, Partitioning, and Asymmetry for Availability and Scale in Google's Software-Defined WAN
Min Zhu
Rich Alimi
Kondapa Naidu Bollineni
Chandan Bhagat
Sourabh Jain
Jay Kaimal
Jeffrey Liang
Kirill Mendelev
Faro Thomas Rabe
Saikat Ray
Malveeka Tewari
Monika Zahn
Joon Ong
SIGCOMM'18 (2018)
Abstract: Private WANs are increasingly important to the operation of enterprises, telecoms, and cloud providers. For example, B4, Google's private software-defined WAN, is larger and growing faster than our connectivity to the public Internet. In this paper, we present the five-year evolution of B4. We describe the techniques we employed to incrementally move from offering best-effort content-copy services to carrier-grade availability, while concurrently scaling B4 to accommodate 100x more traffic. Our key challenge is balancing the tension introduced by the hierarchy required for scalability, the partitioning required for availability, and the capacity asymmetry inherent to the construction and operation of any large-scale network. We discuss our approach to managing this tension: i) we design a custom hierarchical network topology for both horizontal and vertical software scaling, ii) we manage inherent capacity asymmetry in hierarchical topologies using a novel traffic engineering algorithm without packet encapsulation, and iii) we re-architect switch forwarding rules via two-stage matching/hashing to deal with asymmetric network failures at scale.
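One way to manage capacity asymmetry without packet encapsulation is weighted (rather than equal-cost) multipath hashing: replicate each next hop in a hash table in proportion to its usable capacity. The sketch below illustrates that general WCMP idea, not B4's actual two-stage scheme.

```python
# Sketch of weighted multipath hashing over asymmetric next hops.
import hashlib

def build_wcmp_table(next_hops, table_size=64):
    """next_hops: {name: weight}. Returns a hash-indexed next-hop list."""
    total = sum(next_hops.values())
    table = []
    for hop, weight in sorted(next_hops.items()):
        table += [hop] * max(1, round(table_size * weight / total))
    return table[:table_size]

def pick(flow_id: str, table) -> str:
    """Hash the flow identifier to keep each flow on one path."""
    digest = hashlib.sha256(flow_id.encode()).digest()
    return table[int.from_bytes(digest[:4], "big") % len(table)]

if __name__ == "__main__":
    # One downstream trunk lost half its capacity: weight it down, not out.
    table = build_wcmp_table({"trunk-a": 100, "trunk-b": 50, "trunk-c": 100})
    counts = {}
    for i in range(10000):
        hop = pick(f"flow-{i}", table)
        counts[hop] = counts.get(hop, 0) + 1
    print(counts)  # roughly 40% / 20% / 40%
```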
Abstract: Modern networks have significantly outpaced the monitoring capabilities of SNMP and command-line scraping. Over the last three years, we at Google have been working with members of the networking industry via the OpenConfig.net effort to redefine network monitoring. We have now deployed Streaming Telemetry in production to monitor devices from multiple vendors. We will talk about the experience and highlight the open-source components we are providing to the community to accelerate industry-wide adoption.
Taking the Edge off with Espresso: Scale, Reliability and Programmability for Global Internet Peering
Matthew Holliman
Gary Baldus
Marcus Hines
TaeEun Kim
Ashok Narayanan
Victor Lin
Colin Rice
Brian Rogan
Bert Tanaka
Manish Verma
Puneet Sood
Mukarram Tariq
Dzevad Trumic
Vytautas Valancius
Calvin Ying
Mahesh Kallahalla
SIGCOMM (2017)
Abstract: We present the design of Espresso, Google's SDN-based Internet peering edge routing infrastructure. This architecture grew out of a need to exponentially scale the Internet edge cost-effectively and to enable application-aware routing at Internet-peering scale. Espresso utilizes commodity switches and host-based routing/packet processing to implement a novel fine-grained traffic engineering capability. Overall, Espresso provides Google a scalable peering edge that is programmable, reliable, and integrated with global traffic systems. Espresso also greatly accelerated deployment of new networking features at our peering edge. Espresso has been in production for two years and serves over 22% of Google's total traffic to the Internet.
An Internet-Wide Analysis of Traffic Policing
Tobias Flach
Luis Pedrosa
Tayeb Karim
Ethan Katz-Bassett
Ramesh Govindan
SIGCOMM (2016)
Abstract: Large flows like videos consume significant bandwidth. Some ISPs actively manage these high-volume flows with techniques like policing, which enforces a flow rate by dropping excess traffic. While the existence of policing is well known, our contribution is an Internet-wide study quantifying its prevalence and impact on video quality metrics. We developed a heuristic to identify policing from server-side traces and built a pipeline to deploy it at scale on hundreds of servers worldwide within one of the largest online content providers. Using a dataset of 270 billion packets served to 28,400 client ASes, we find that, depending on region, up to 7% of lossy transfers are policed. Loss rates are on average 6× higher when a trace is policed, and policing impacts video playback quality. We show that alternatives to policing, like pacing and shaping, can achieve traffic-management goals while avoiding the deleterious effects of policing.
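A toy token-bucket policer makes the studied behavior concrete: traffic above the configured rate is dropped immediately, where a shaper would instead queue and pace it. All parameters are illustrative.

```python
# Toy token-bucket policer like those the study detects.
def police(packets, rate_bps: float, burst_bits: float):
    """packets: list of (arrival_time_s, size_bits). Returns (passed, dropped)."""
    tokens, last = burst_bits, 0.0
    passed, dropped = [], []
    for t, size in packets:
        tokens = min(burst_bits, tokens + (t - last) * rate_bps)
        last = t
        if size <= tokens:
            tokens -= size
            passed.append((t, size))
        else:
            dropped.append((t, size))   # a policer drops; a shaper would queue
    return passed, dropped

if __name__ == "__main__":
    # A 1.5 Mbps burst into a 1 Mbps policer with a small bucket
    pkts = [(i * 0.008, 12000) for i in range(100)]  # 12 kb every 8 ms
    passed, dropped = police(pkts, rate_bps=1_000_000, burst_bits=24_000)
    print(f"passed {len(passed)}, dropped {len(dropped)}")
```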