Amer Diwan

Authored Publications
    Meaningful availability
    Dan Ardelean
    Philipp Emanuel Hoffmann
    Tamás Hauer
    17th USENIX Symposium on Networked Systems Design and Implementation (NSDI '20) (2020)
    Abstract: Accurate measurement of service availability is the cornerstone of good service management: it quantifies the gap between user expectation and system performance, and provides actionable data for prioritizing development and operational tasks. We propose a novel metric, user-uptime, which is event-based yet time-sensitive, and which approximates aggregated user-perceived reliability better than current metrics. For a holistic view of availability across timescales from minutes to months or quarters, we augment user-uptime with a novel aggregation and visualization paradigm: windowed uptime. Using an example from G Suite, we demonstrate its effectiveness in differentiating between unreliability caused by flakiness and unreliability caused by an extended outage.
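    The two metrics can be illustrated with a minimal sketch (a hypothetical simplification for intuition only; the function names, the event-labeling rule, and the handling of the tail after the last event are assumptions, not the paper's implementation):

    ```python
    def user_uptime(events_by_user):
        """Aggregate user-perceived availability from per-user event logs.
        events_by_user: {user: [(timestamp_seconds, success_bool), ...]},
        each list sorted by time.  The interval after each event is labeled
        with that event's status; the tail after the last event is ignored.
        (An assumed simplification of the paper's labeling rules.)
        """
        up = down = 0.0
        for events in events_by_user.values():
            for (t0, ok), (t1, _) in zip(events, events[1:]):
                if ok:
                    up += t1 - t0
                else:
                    down += t1 - t0
        return up / (up + down) if up + down else 1.0

    def windowed_uptime(per_minute_avail, window):
        """Worst average availability over any contiguous window of the given
        length (in minutes) -- one point of a windowed-uptime curve.  A brief
        flaky period hurts small windows; an extended outage also drags down
        large ones, which is what lets the curve distinguish the two."""
        n = len(per_minute_avail)
        return min(sum(per_minute_avail[i:i + window]) / window
                   for i in range(n - window + 1))
    ```

    For example, a user whose events span 10 good seconds followed by 5 bad ones contributes a 10/15 availability, independent of how many requests they issued in each period.
    
    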
    Why you should care about quantile regression
    Augusto Born De Oliveira
    Sebastian Fischmeister
    Matthias Hauswirth
    Peter Sweeney
    Architectural Support for Programming Languages and Operating Systems (ASPLOS '13), ACM (2013), pp. 207-218
    Measuring Enforcement Windows with Symbolic Trace Interpretation: What Well-Behaved Programs Say
    Devin Coughlin
    Bor-Yuh Evan Chang
    Jeremy Siek
    Proceedings of the 2012 International Symposium on Software Testing and Analysis, ACM, pp. 276-286
    Abstract: Modern object-oriented languages have complex features that cause programmers to overspecify their programs. This overspecification hinders automatic optimizers, since they must preserve the overspecified semantics. If an optimizer knew which semantics the programmer intended, it could do a better job. Making programmers clarify their intentions by placing assumptions into the program is rarely practical, because they do not know which parts of the program's overspecified semantics hinder the optimizer. A programmer therefore has to guess which assumption to add, and since many different assumptions could be added to a large program, many such guesses may be needed before one actually helps the optimizer. We present IOpt, a practical optimizer that uses a specification of the programmer's intended semantics to enable additional optimizations, which can significantly improve a program's performance. We present case studies in which we use IOpt to speed up two programs by over 50%. To make specifying the intended semantics practical, IOpt communicates with the programmer: it identifies which assumptions the programmer should place, and where to place them. IOpt ranks each assumption by (i) the likelihood that the assumption conforms to the programmer's intended semantics and (ii) how much the assumption will help IOpt improve the program's performance. IOpt proposes the ranked assumptions to the programmer, who simply picks those that conform to the intended semantics. With this approach, IOpt keeps the programmer's specification burden low. Our case studies show that the programmer needs to add only a few assumptions to realize the 50% speedup.
    Explaining Failures of Program Analyses
    Proceedings of PLDI (2008), pp. 10
    Abstract: With programs getting larger and often more complex with each new release, programmers need all the help they can get in understanding and transforming programs. Fortunately, modern development environments, such as Eclipse, incorporate tools for understanding, navigating, and transforming programs. These tools often use program analyses to extract relevant properties of programs, and they can be invaluable to developers; for example, many programmers use refactoring tools regularly. However, poor results from the underlying analyses can compromise a tool's usefulness. For example, a bug-finding tool may produce too many false positives if the underlying analysis is overly conservative, and thus overwhelm the user with too many possible errors in the program. In such cases it would be invaluable for the tool to explain to the user why it believes that each bug exists. Armed with this knowledge, the user can decide which bugs are worth pursuing and which are false positives. The contributions of this paper are as follows: (i) we describe requirements on the structure of an analysis so that we can produce reasons when the analysis fails (the user of the analysis determines whether or not an analysis's results constitute failure), and we describe a simple language that enforces these requirements; (ii) we describe how to produce necessary and sufficient reasons for analysis failure; (iii) we evaluate our system on a number of analyses and programs and find that most reasons are small (and thus usable) and that our system is fast enough for interactive use.
    Simplifying Web Traversals by Recognizing Behavior Patterns
    Christian Doerr
    Proceedings of Hypertext 2007, pp. 105-114
    Abstract: Web sites must often service a wide variety of clients. Thus, it is inevitable that a web site will allow some visitors to find their information quickly while other visitors have to follow many links to get the information they need. Worse, as web sites evolve, they may degrade over time so that all visitors have to follow many links to find the information they need. This paper describes an extensible system that analyzes web logs to find and exploit opportunities for improving the navigation of a web site. The system is extensible in that the inefficiencies it finds and eliminates are not predetermined; to search for a new kind of inefficiency, web site administrators can provide a pattern (in a language designed specifically for this purpose) that finds and eliminates the new inefficiency.
    Fast Online Pointer Analysis
    Martin Hirzel
    Michael Hind
    ACM Transactions on Programming Languages and Systems (TOPLAS), 29 (2007), pp. 11
    Abstract: Pointer analysis benefits many useful clients, such as compiler optimizations and bug-finding tools. Unfortunately, common programming language features, such as dynamic loading, reflection, and foreign language interfaces, make pointer analysis difficult. This article describes how to deal with these features by performing pointer analysis online, during program execution. For example, dynamic loading may load code that is not available for analysis before the program starts; only an online analysis can analyze such code, and thus support clients that optimize or find bugs in it. This article identifies all problems in performing Andersen's pointer analysis for the full Java language, presents solutions to these problems, and uses a full implementation of the solutions in a Java virtual machine for validation and performance evaluation. Our analysis is fast: on average over our benchmark suite, if the analysis recomputes points-to results upon each program change, most analysis pauses take under 0.1 seconds and add up to 64.5 seconds.
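    As background, the offline, inclusion-based core of Andersen's analysis can be sketched as a fixpoint over subset constraints. This toy sketch is an illustration under assumed constraint encodings; it captures none of the online machinery (dynamic loading, reflection, JVM integration) that is the article's actual contribution:

    ```python
    from collections import defaultdict

    def andersen(base, simple, load, store):
        """Offline Andersen-style (inclusion-based) points-to analysis.
        Constraint kinds (encodings are illustrative assumptions):
          base:   (p, a)  for  p = &a   ->  a in pts(p)
          simple: (p, q)  for  p = q    ->  pts(q) subset-of pts(p)
          load:   (p, q)  for  p = *q   ->  for v in pts(q): pts(v) subset-of pts(p)
          store:  (p, q)  for  *p = q   ->  for v in pts(p): pts(q) subset-of pts(v)
        """
        pts = defaultdict(set)    # variable -> points-to set
        edges = defaultdict(set)  # copy edges src -> {dst}: pts(src) subset-of pts(dst)
        for p, a in base:
            pts[p].add(a)
        for p, q in simple:
            edges[q].add(p)
        changed = True
        while changed:            # iterate to a fixpoint
            changed = False
            for p, q in load:     # p = *q induces edges v -> p for v in pts(q)
                for v in list(pts[q]):
                    if p not in edges[v]:
                        edges[v].add(p); changed = True
            for p, q in store:    # *p = q induces edges q -> v for v in pts(p)
                for v in list(pts[p]):
                    if v not in edges[q]:
                        edges[q].add(v); changed = True
            for src in list(edges):   # propagate sets along copy edges
                for dst in edges[src]:
                    if not pts[src] <= pts[dst]:
                        pts[dst] |= pts[src]; changed = True
        return dict(pts)
    ```

    An online analysis must instead update these sets incrementally as new code is loaded, which is what makes the pause times quoted in the abstract the interesting measurement.
    
    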
    Understanding the Behavior of Compiler Optimizations
    Han Lee
    J. Elliot B. Moss
    Software: Practice and Experience, 36 (2006), pp. 835-844
    Abstract: Compiler writers usually follow known rules of thumb about the effectiveness of optimizations when implementing compilers. While many of these rules may be correct, it is a far better methodology to base implementation decisions on a scientific evaluation of optimizations. To this end, we present an exploration of the costs and benefits of the optimizations implemented in Jikes RVM, a research virtual machine that includes an aggressive optimizing compiler. We measure and report the performance impact of optimizations, both when they are used by themselves and when they are used together with other optimizations. To understand why optimizations do or do not improve performance, we wrote kernel programs to test and explore the behavior of each optimization. To increase the generality of our results, we report measurements on two architectures (IA32 and PowerPC). Based on our findings, we present a set of recommendations for compiler writers.
    The DaCapo Benchmarks: Java Benchmarking Development and Analysis
    Robin Garner
    Chris Hoffmann
    Asjad M. Khan
    Kathryn S. McKinley
    Rotem Bentzur
    Daniel Feinberg
    Daniel Frampton
    Samuel Z. Guyer
    Martin Hirzel
    Antony Hosking
    Maria Jump
    Han Lee
    J. Elliot B. Moss
    Aashish Phansalkar
    Darko Stefanovic
    Thomas VanDrunen
    Ben Wiedermann
    Proceedings of OOPSLA, ACM (2006)
    Abstract: Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to the dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for its Java benchmark suite. This paper recommends benchmark selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open-source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran, which stress (4) much less and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.
    SUIF Explorer: An Interactive and Interprocedural Parallelizer
    Shih-Wei Liao
    Robert P. Bosch Jr.
    Anwar M. Ghuloum
    Monica S. Lam
    PPoPP (1999), pp. 37-48