Explaining Failures of Program Analyses
Abstract
With programs getting larger and often more complex with each new release, programmers need all the help they can get in understanding and transforming programs. Fortunately, modern development environments, such as Eclipse, incorporate tools for understanding, navigating, and transforming programs. These tools often use program analyses to extract relevant properties of programs.
These tools are often invaluable to developers; for example, many programmers use refactoring tools regularly. However, poor results from the underlying analyses can compromise a tool's usefulness. For example, a bug-finding tool built on an overly conservative analysis may report so many false positives that it overwhelms the user with spurious errors. In such cases it would be invaluable for the tool to explain to the
user \textit{why} it believes that each bug exists. Armed with this knowledge, the user can decide which bugs are worth pursuing and which are false positives.
The contributions of this paper are as follows: (i) We describe requirements
on the structure of an analysis so that we can produce reasons when the
analysis fails; the user of the analysis determines whether or not an analysis's
results constitute failure. We also describe a simple language that enforces
these requirements; (ii) We describe how to produce necessary and sufficient
reasons for analysis failure; (iii) We evaluate our system with respect to a
number of analyses and programs and find that most reasons are small (and thus
usable) and that our system is fast enough for interactive use.