Ankur Jain
Ankur Jain is a Distinguished Engineer and works in the Office of the CEO on cross-Google programs. His current areas of focus include 5G and privacy.
Ankur previously worked on Google’s connectivity and communication products, where he led the infrastructure teams running Fi, Loon, Station, RCS, Google Voice, and CBRS-based shared networks, and partnered with some of the largest wireless operators globally to modernize their networks. Before that, he was instrumental in bringing software-defined networking and disaggregation to Google’s edge network. He was one of the first engineers on Google’s content delivery network and later led it as it grew into the largest in the world, deployed by several hundred operators globally. He currently serves on the Technical Leadership Team of the Open Networking Foundation, bringing his experience in building and operating large-scale software-defined, automated, cloud-based networks to the open-source world.
Ankur holds a master’s degree in computer science and engineering from the University of Washington, Seattle, and a bachelor’s degree in the same field from the Indian Institute of Technology Delhi. He has a few dozen patents and conference papers filed, granted, or published. His closest shot at stardom, though, came in 2013, when he traveled to Los Angeles as part of the team that collected the 65th Primetime Emmy Engineering Award for YouTube; a couple of years later, he is still happily at Google.
Authored Publications
Taking the Edge off with Espresso: Scale, Reliability and Programmability for Global Internet Peering
Matthew Holliman
Gary Baldus
Marcus Hines
TaeEun Kim
Ashok Narayanan
Victor Lin
Colin Rice
Brian Rogan
Bert Tanaka
Manish Verma
Puneet Sood
Mukarram Tariq
Dzevad Trumic
Vytautas Valancius
Calvin Ying
Mahesh Kallahalla
SIGCOMM (2017)
We present the design of Espresso, Google’s SDN-based Internet peering edge routing infrastructure. This architecture grew out of a need to exponentially scale the Internet edge cost-effectively and to enable application-aware routing at Internet-peering scale. Espresso utilizes commodity switches and host-based routing/packet processing to implement a novel fine-grained traffic engineering capability. Overall, Espresso provides Google a scalable peering edge that is programmable, reliable, and integrated with global traffic systems. Espresso also greatly accelerated deployment of new networking features at our peering edge. Espresso has been in production for two years and serves over 22% of Google’s total traffic to the Internet.
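To make the architecture more concrete, the following is a minimal Python sketch of controller-programmed, application-aware egress selection on a serving host. The map structure, peer/port names, and traffic classes are illustrative assumptions, not Espresso's actual design.

    # Illustrative sketch of application-aware egress selection at hosts: a
    # controller-programmed map from (destination prefix, traffic class) to an
    # egress peer/port, consulted on the serving host. Names and structure are
    # assumptions for illustration only.
    import ipaddress

    # Hypothetical programming state pushed by a global traffic-engineering controller.
    EGRESS_MAP = {
        (ipaddress.ip_network("203.0.113.0/24"), "video"):       "peer-A.port-3",
        (ipaddress.ip_network("203.0.113.0/24"), "interactive"): "peer-B.port-1",
        (ipaddress.ip_network("198.51.100.0/24"), "video"):      "peer-B.port-1",
    }

    def select_egress(dst_ip, traffic_class, default="transit.port-0"):
        """Pick the egress for a packet using a prefix + traffic-class match."""
        addr = ipaddress.ip_address(dst_ip)
        candidates = [(net, egress) for (net, cls), egress in EGRESS_MAP.items()
                      if cls == traffic_class and addr in net]
        if not candidates:
            return default
        # Prefer the most specific matching prefix.
        return max(candidates, key=lambda item: item[0].prefixlen)[1]

    print(select_egress("203.0.113.7", "video"))      # peer-A.port-3
    print(select_egress("192.0.2.9", "interactive"))  # transit.port-0 (no match)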
CQIC: Revisiting Cross-Layer Congestion Control for Cellular Networks
Feng Lu
Hao Du
Geoffrey M. Voelker
Alex C. Snoeren
Proceedings of The 16th International Workshop on Mobile Computing Systems and Applications (HotMobile), ACM (2015), pp. 45-50
With the advent of high-speed cellular access and the overwhelming popularity of smartphones, a large share of today’s Internet content is delivered via cellular links. Due to the nature of long-range wireless signal propagation, the capacity of the last-hop cellular link can vary by orders of magnitude within a short period of time (e.g., a few seconds). Unfortunately, TCP does not perform well in such fast-changing environments, potentially leading to poor spectrum utilization and high end-to-end packet delay.
In this paper we revisit seminal work in cross-layer optimization in the context of 4G cellular networks. Specifically, we leverage the rich physical layer information exchanged between base stations (NodeB) and mobile phones (UE) to predict the capacity of the underlying cellular link, and propose CQIC, a cross-layer congestion control design. Experiments on real cellular networks confirm that our capacity estimation method is both accurate and precise. A CQIC sender uses these capacity estimates to adjust its packet sending behavior. Our preliminary evaluation reveals that CQIC improves throughput over TCP by 1.08–2.89× for small and medium flows. For large flows, CQIC attains throughput comparable to TCP while reducing the average RTT by 2.38–2.65×.
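As a sketch of the cross-layer idea, the Python snippet below maps a reported channel quality indicator (CQI) to an estimated link capacity and derives a pacing interval from it. The CQI-to-efficiency table and the helper names are simplified assumptions for illustration, not the paper's actual estimation model.

    # Illustrative cross-layer capacity estimation: map the CQI reported by the
    # UE to a spectral efficiency, multiply by the allocated bandwidth, and pace
    # the sender at the resulting estimate. Table values are placeholders,
    # loosely patterned on LTE CQI tables, not taken from the paper.
    CQI_TO_EFFICIENCY = {1: 0.15, 4: 0.6, 7: 1.5, 10: 2.7, 13: 3.9, 15: 5.5}  # bits/s/Hz

    def estimate_capacity_bps(cqi, allocated_hz):
        """Estimate downlink capacity from a CQI report and allocated bandwidth."""
        # Fall back to the nearest tabulated CQI level at or below the report.
        known = max(k for k in CQI_TO_EFFICIENCY if k <= cqi)
        return CQI_TO_EFFICIENCY[known] * allocated_hz

    def pacing_interval_s(capacity_bps, packet_bytes=1460):
        """Send one packet every this many seconds to match the estimated capacity."""
        return (packet_bytes * 8) / capacity_bps

    # Example: CQI 10 over 5 MHz of allocated spectrum -> ~13.5 Mbps, ~0.87 ms spacing.
    cap = estimate_capacity_bps(10, 5e6)
    print(cap, pacing_interval_s(cap))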
Reducing Web Latency: the Virtue of Gentle Aggression
Tobias Flach
Barath Raghavan
Shuai Hao
Ethan Katz-Bassett
Ramesh Govindan
Proceedings of the ACM Conference of the Special Interest Group on Data Communication (SIGCOMM '13), ACM (2013)
To serve users quickly, Web service providers build infrastructure closer to clients and use multi-stage transport connections. Although these changes reduce client-perceived round-trip times, TCP's current mechanisms fundamentally limit latency improvements. We performed a measurement study of a large Web service provider and found that, while connections with no loss complete close to the ideal latency of one round-trip time, TCP's timeout-driven recovery causes transfers with loss to take five times longer on average.
In this paper, we present the design of novel loss recovery mechanisms for TCP that judiciously use redundant transmissions to minimize timeout-driven recovery. Proactive, Reactive, and Corrective are three qualitatively different, easily-deployable mechanisms that (1) proactively recover from losses, (2) recover from them as quickly as possible, and (3) reconstruct packets to mask loss. Crucially, the mechanisms are compatible both with middleboxes and with TCP's existing congestion control and loss recovery. Our large-scale experiments on Google's production network that serves billions of flows demonstrate a 23% decrease in the mean and 47% in 99th percentile latency over today's TCP.
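As a sketch of the "reconstruct packets to mask loss" idea, the snippet below shows a generic single-loss parity scheme in Python: an XOR parity packet sent over a small group of segments lets the receiver rebuild any one missing segment without waiting for retransmission. This illustrates the class of mechanism; it is not necessarily the exact encoding the paper's Corrective uses.

    # Generic XOR-parity sketch: one parity packet per group masks a single loss.
    def xor_parity(packets):
        """Compute a parity packet as the byte-wise XOR of equal-length packets."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover(received, parity):
        """Rebuild the single missing packet from the survivors and the parity."""
        return xor_parity(list(received) + [parity])

    # Example: three 8-byte segments, the middle one is "lost" and reconstructed.
    group = [b"seg1" + b"\x00" * 4, b"seg2" + b"\x00" * 4, b"seg3" + b"\x00" * 4]
    p = xor_parity(group)
    rebuilt = recover([group[0], group[2]], p)
    assert rebuilt == group[1]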
Trickle: Rate Limiting YouTube Video Streaming
Monia Ghobadi
Matt Mathis
Proceedings of the USENIX Annual Technical Conference (2012), pp. 6
YouTube traffic is bursty. These bursts trigger packet losses and stress router queues, causing TCP’s congestion-control algorithm to kick in. In this paper, we introduce Trickle, a server-side mechanism that uses TCP to rate limit YouTube video streaming. Trickle paces the video stream by placing an upper bound on TCP’s congestion window as a function of the streaming rate and the round-trip time. We evaluated Trickle on YouTube production data centers in Europe and India and analyzed its impact on losses, bandwidth, RTT, and video buffer under-run events. The results show that Trickle reduces the average TCP loss rate by up to 43% and the average RTT by up to 28% while maintaining the streaming rate requested by the application.
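The abstract states the mechanism directly: an upper bound on the congestion window derived from the streaming rate and the round-trip time. Below is a minimal Python sketch of that relation; the headroom factor and the minimum of two segments are illustrative assumptions, not constants from the paper.

    # Minimal sketch of the cwnd clamp described in the abstract: cap TCP's
    # congestion window so the sender paces at roughly the requested streaming
    # rate instead of bursting. Headroom and floor values are assumptions.
    import math

    def cwnd_clamp(streaming_rate_bps, rtt_s, mss_bytes=1460, headroom=1.2):
        """Return an upper bound on cwnd, in segments, for the given stream."""
        bytes_per_rtt = streaming_rate_bps / 8.0 * rtt_s  # data needed per RTT
        segments = bytes_per_rtt / mss_bytes              # convert to MSS units
        return max(2, math.ceil(segments * headroom))     # never clamp below 2 segments

    # Example: a 1.5 Mbps stream over a 100 ms path -> cap of about 16 segments.
    print(cwnd_clamp(1_500_000, 0.100))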