An Industrial Application of Mutation Testing: Lessons, Challenges, and Research Directions

Robert Kurtz
Paul Ammann
René Just
Proceedings of the 13th International Workshop on Mutation Analysis (Mutation 2018)
Abstract

Mutation analysis evaluates a testing or debugging technique by measuring how well it detects mutants, which are systematically seeded, artificial faults. Mutation analysis is inherently expensive because it generates a large number of mutants and because many of these mutants are not effective: they are redundant, equivalent, or simply uninteresting and waste computational resources. A large body of research has focused on improving the scalability of mutation analysis and proposed numerous optimizations to, e.g., select effective mutants or efficiently execute a large number of tests against a large number of mutants. However, comparatively little research has focused on the costs and benefits of mutation testing, in which mutants are presented as testing goals to a developer, in the context of an industrial-scale software development process. This paper aims to fill that gap. Specifically, it first reports on a case study from an open source context, which quantifies the costs of achieving a mutation-adequate test set. The results suggest that achieving mutation adequacy is neither practical nor desirable. This paper then draws on an industrial application of mutation testing, involving more than 30,000 developers and 1,890,442 change sets, written in 4 programming languages. It shows that mutation testing does not add significant overhead to the software development process and reports on mutation testing benefits perceived by developers. Finally, this paper describes lessons learned from these studies, highlights the current challenges of efficiently and effectively applying mutation testing in an industrial-scale software development process, and outlines research directions.
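
For readers unfamiliar with mutants as testing goals, the following minimal sketch (not taken from the paper; Java is assumed here purely for illustration) shows an original method, a relational-operator-replacement mutant, and a boundary-value test that detects, i.e., kills, the mutant.

    // Illustrative sketch only: one mutant and a test that kills it.
    class Discounts {
        // Original: customers with at least 10 purchases are eligible.
        static boolean eligible(int purchases) {
            return purchases >= 10;
            // Mutant (relational operator replacement, ">=" -> ">"):
            // return purchases > 10;
        }
    }

    class DiscountsTest {
        public static void main(String[] args) {
            // This boundary test kills the mutant: the original returns true
            // for exactly 10 purchases, while the mutant returns false.
            assert Discounts.eligible(10);
        }
    }

A test set that kills this and every other non-equivalent mutant would be called mutation adequate; presenting such mutants to developers as concrete testing goals is what the paper refers to as mutation testing.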