
Ranjita Bhagwan
Ranjita Bhagwan is currently a Principal Engineer at Google, working on observability in the Google Global Networking team. Prior to this, she was a Senior Principal Researcher at Microsoft Research India. Her interests are in using data-driven approaches to improve system and network reliability. She is an ACM Distinguished Member, an INAE Fellow, and the recipient of the 2020 ACM India Outstanding Contributions to Computing by a Woman Award. She received her PhD and MS in Computer Engineering from the University of California, San Diego, and a BTech in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur.
Authored Publications
OPPerTune: Post-Deployment Configuration Tuning of Services Made Easy
Mayukh Das
Nagarajan Natarajan
Karan Tandon
Anshul Gandhi
Anush Kini
Gagan Somasekhar
Petr Husak
2024
Abstract
Real-world application deployments have hundreds of inter-dependent configuration parameters, many of which significantly influence performance and efficiency. With today's complex and dynamic services, operators need to continuously monitor and set the right configuration values (configuration tuning) well after a service is widely deployed. This is challenging since experimenting with different configurations post-deployment may reduce application performance or cause disruptions. While state-of-the-art ML approaches do help to automate configuration tuning, they do not fully address the multiple challenges in end-to-end configuration tuning of deployed applications.
This paper presents OPPerTune, a service that enables configuration tuning of applications in deployment at Microsoft. OPPerTune reduces application interruptions while maximizing the performance of deployed applications as and when the workload or the underlying infrastructure changes. It automates three essential processes that facilitate post-deployment configuration tuning: (a) determining which configurations to tune, (b) automatically managing the scope at which to tune the configurations, and (c) using a novel reinforcement learning algorithm to simultaneously and quickly tune numerical and categorical configurations, thereby keeping the overhead of configuration tuning low. We deploy OPPerTune on two enterprise applications in Microsoft Azure's clusters. Our experiments show that OPPerTune reduces the end-to-end P95 latency of microservice applications by more than 50% over expert configuration choices made ahead of deployment. The code and datasets used are made available at https://aka.ms/OPPerTune.
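To make the core idea concrete, the sketch below shows a toy online tuner that jointly handles one categorical knob (via an epsilon-greedy bandit) and one numerical knob (via bounded local perturbation), accepting numerical moves only when the measured reward does not regress. This is an illustrative assumption-laden simplification, not OPPerTune's actual algorithm; all class and parameter names here are hypothetical.

```python
import random


class HybridTuner:
    """Toy post-deployment tuner: an epsilon-greedy bandit over a
    categorical knob plus local perturbation of a numerical knob.
    Illustrative only -- this is NOT the OPPerTune algorithm."""

    def __init__(self, categories, num_init, num_step, num_lo, num_hi,
                 epsilon=0.2, seed=0):
        self.rng = random.Random(seed)
        self.categories = list(categories)
        self.epsilon = epsilon
        self.stats = {c: [0.0, 0] for c in self.categories}  # [reward sum, count]
        self.num_value = num_init
        self.num_step = num_step
        self.num_lo, self.num_hi = num_lo, num_hi
        self.last = None

    def _mean(self, c):
        s, n = self.stats[c]
        return s / n if n else float("-inf")

    def suggest(self):
        # Try each category at least once, then explore with probability
        # epsilon, otherwise exploit the best observed average reward.
        unseen = [c for c in self.categories if self.stats[c][1] == 0]
        if unseen:
            cat = unseen[0]
        elif self.rng.random() < self.epsilon:
            cat = self.rng.choice(self.categories)
        else:
            cat = max(self.categories, key=self._mean)
        # Perturb the numerical knob locally, staying inside its bounds.
        num = self.num_value + self.rng.choice([-1, 0, 1]) * self.num_step
        num = max(self.num_lo, min(self.num_hi, num))
        self.last = (cat, num)
        return cat, num

    def observe(self, reward):
        # Feed back the measured reward (e.g. negative P95 latency).
        cat, num = self.last
        s, n = self.stats[cat]
        # Keep the numerical move only if it matched or beat the
        # running mean reward for this category.
        if n == 0 or reward >= s / n:
            self.num_value = num
        self.stats[cat] = [s + reward, n + 1]
```

In a real deployment loop, `suggest()` would emit the next configuration to apply and `observe()` would be called with a performance measurement; keeping perturbations small and bounded mirrors the paper's concern that post-deployment experimentation must not disrupt the live service.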