Sanjeev Dhanda

Sanjeev Dhanda is a Software Engineer at Google specializing in hyperscale Build & Test systems. A contributor to Google’s Test Automation Platform, he currently leads the infrastructure shift toward non-deterministic and agentic testing for Gemini. Sanjeev co-authored the chapter on Continuous Integration in the O'Reilly book Software Engineering at Google and the paper Taming Google-Scale Continuous Testing (ICSE 2017), which has been cited over 450 times. His work bridges the gap between traditional infrastructure and AI reliability, reflected in a portfolio of nearly 10 patents, including a recent application for Monitoring Generative Model Quality.

Previously, as a Technical Lead Manager at Tesla, he designed the "sim-in-the-cloud" platform to virtualize hardware testing for the autonomous vehicle fleet and architected the platform for persistent vehicle applications and in-car voice systems.

He holds an Honors Bachelor of Software Engineering—jointly granted by the Faculties of Mathematics and Engineering—with an Option in Cognitive Science from the University of Waterloo.

Authored Publications
    Taming Google-Scale Continuous Testing
    Atif Memon
    Eric Nickell
    John Micco
    Rob Siemborski
    Zebao Gao
    ICSE '17: Proceedings of the 39th International Conference on Software Engineering (2017)
    Abstract: Growth in Google’s code size and feature churn rate has seen increased reliance on continuous integration (CI) and testing to maintain quality. Even with enormous resources dedicated to testing, we are unable to regression test each code change individually, resulting in increased lag time between code check-ins and test result feedback to developers. We report results of a project that aims to reduce this time by: (1) controlling test workload without compromising quality, and (2) distilling test results data to inform developers, while they write code, of the impact of their latest changes on quality. We model, empirically understand, and leverage the correlations that exist between our code, test cases, developers, programming languages, and code-change and test-execution frequencies, to improve our CI and development processes. Our findings show: very few of our tests ever fail, but those that do are generally “closer” to the code they test; certain frequently modified code and certain users/tools cause more breakages; and code recently modified by multiple developers (more than 3) breaks more often. NOTE: You can find the anonymized dataset for our paper on Google Drive: https://drive.google.com/open?id=0B5_QHWCtac81VGNKYnhrQkJBZGM