Manushree Vijayvergiya

Authored Publications
    Modern code review is a process in which incremental code contributions made by one software developer are reviewed by one or more peers before they are committed to the version control system. An important element of modern code review is verifying that the code under review adheres to the style guidelines and best practices of the corresponding programming language. Some of these rules are universal and can be checked automatically or enforced via code formatters. Other rules, however, are context-dependent, and the corresponding checks are commonly left to developers who are experts in the given programming language and whose time is expensive. Many automated systems have been developed that attempt to detect various rule violations without any human intervention. Historically, such systems implement targeted analyses and were themselves expensive to develop. This paper presents AutoCommenter, a system that uses a state-of-the-art large language model to automatically learn and enforce programming language best practices. We implemented AutoCommenter for four programming languages: C++, Java, Python, and Go, and evaluated its performance and adoption in a large industrial setting. Our evaluation shows that a model that automatically learns language best practices is feasible and has a measurable positive impact on the developer workflow. Additionally, we present the challenges we faced when deploying such a model to tens of thousands of developers, and we provide lessons learned for practitioners who would like to replicate the work or build on top of it.
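    The paper itself does not publish an implementation, but the flow the abstract describes can be sketched as: prompt a large language model with a changed file's diff, parse the model's best-practice findings, and surface them as review comments. The sketch below is a minimal, hypothetical illustration of that flow; the function names, prompt format, and `llm` callable are assumptions, not AutoCommenter's actual API.

```python
"""Minimal sketch of an AutoCommenter-style pipeline (hypothetical names).

Illustrates the loop from the abstract: send a code change to an LLM,
parse its best-practice findings, and attach them as review comments.
"""

from dataclasses import dataclass


@dataclass
class Finding:
    line: int      # 1-based line number in the new version of the file
    comment: str   # suggested best-practice comment for the author


def build_prompt(language: str, file_path: str, diff: str) -> str:
    # Hypothetical prompt format; the paper does not publish its prompts.
    return (
        f"You are a {language} readability reviewer.\n"
        f"For the change to {file_path} below, list violations of "
        f"{language} best practices as 'LINE: comment' pairs.\n\n{diff}"
    )


def parse_findings(model_output: str) -> list[Finding]:
    # Keep only well-formed 'LINE: comment' pairs; LLM output is noisy.
    findings = []
    for raw in model_output.splitlines():
        line_no, _, comment = raw.partition(":")
        if line_no.strip().isdigit() and comment.strip():
            findings.append(Finding(int(line_no), comment.strip()))
    return findings


def review_code_change(llm, language: str, file_path: str, diff: str) -> list[Finding]:
    """Return best-practice comments for one changed file in a review.

    `llm` is any callable str -> str wrapping the model endpoint.
    """
    return parse_findings(llm(build_prompt(language, file_path, diff)))
```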
    MuRS: Suppressing and Ranking Mutants with Identifier Templates
    Malgorzata (Gosia) Salawa
    René Just
    Zimin Chen
    ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2023), 1798–1808
    Diff-based mutation testing is a mutation testing approach that only generates mutants in changed lines. At Google we execute more than 150,000,000 tests and submit more than 40,000 commits per day, and we have successfully integrated mutation testing into our code review process. Over the years, we have continuously gathered developer feedback on the surfaced mutants and measured the negative feedback rate. To enhance the developer experience, we manually implemented a large number of static rules, which are used to suppress certain mutants. In this paper, we propose MuRS, an automatic tool that finds patterns in the source code under test and uses these patterns to rank and suppress future mutants based on the historical performance of similar mutants. Because MuRS learns mutant suppression rules fully automatically, it significantly reduces the build and maintenance cost of the mutation testing system. To evaluate the effectiveness of MuRS, we conducted an A/B experiment in which mutants in the experiment group were ranked and suppressed by MuRS, while mutants in the control group were randomly shuffled. The experiment showed a statistically significant reduction in the negative feedback rate: 11.45% in the experiment group versus 12.41% in the control group. Furthermore, we found that statement-removal mutants received both the most positive and the most negative developer feedback, suggesting a need for further investigation to identify valuable statement-removal mutants.
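    The core mechanism the abstract describes, abstracting each mutant into a pattern and scoring new mutants against the historical feedback of similar past mutants, might look roughly like the sketch below. The `template_of` masking scheme, the Laplace-smoothed score, and the suppression threshold are illustrative assumptions for this sketch, not the design published in the paper.

```python
"""Minimal sketch of MuRS-style mutant ranking and suppression (hypothetical).

Idea from the abstract: normalize a mutant into a template, track developer
feedback per template, then rank new mutants by expected positive feedback
and suppress templates that historically drew mostly negative feedback.
"""

import re
from collections import defaultdict


def template_of(mutated_line: str, mutation_operator: str) -> str:
    """Mask identifiers and literals so similar mutants share one template."""
    masked = re.sub(r"\b[A-Za-z_][A-Za-z_0-9]*\b", "ID", mutated_line)
    masked = re.sub(r"\b\d+\b", "NUM", masked)
    return f"{mutation_operator}|{masked.strip()}"


class MutantRanker:
    def __init__(self, suppression_threshold: float = 0.5):
        # template -> [positive_count, negative_count]; the 0.5 threshold is
        # an illustrative choice, not a value reported in the paper.
        self.feedback = defaultdict(lambda: [0, 0])
        self.suppression_threshold = suppression_threshold

    def record_feedback(self, template: str, positive: bool) -> None:
        self.feedback[template][0 if positive else 1] += 1

    def score(self, template: str) -> float:
        """Estimated probability of positive feedback (0.5 prior if unseen)."""
        pos, neg = self.feedback[template]
        return (pos + 1) / (pos + neg + 2)  # Laplace smoothing

    def rank_and_suppress(self, mutants):
        """mutants: iterable of (mutated_line, operator); best-scoring first."""
        scored = [
            (self.score(template_of(line, op)), line, op) for line, op in mutants
        ]
        kept = [m for m in scored if m[0] >= self.suppression_threshold]
        return sorted(kept, key=lambda m: m[0], reverse=True)
```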