![Daniel von Dincklage](https://storage.googleapis.com/gweb-research2023-media/pubtools/504.png)
Daniel von Dincklage
Authored Publications
Uncanny Valleys in Declarative Language Design
Mark S. Miller
Vuk Ercegovac
Brian Chin
SNAPL 2017, Summit on Advances in Programming Languages (to appear)
When people write programs in conventional programming languages, they over-specify how to solve the problem they have in mind. Over-specification prevents the language's implementation from making many optimization decisions, leaving programmers with this burden. In more declarative languages, programmers over-specify less, enabling the implementation to make more choices for them. As these decisions improve, programmers shift more attention from implementation to their real problems. This process easily overshoots. When under-specified programs almost always work well enough, programmers rarely need to think about implementation details. As their understanding of implementation choices atrophies, the controls provided so they can override these decisions become obscure.
Our declarative language project, Yedalog, is in the midst of this dilemma. The improvements in question make our users more productive, so we cannot simply retreat back towards over-specification. To proceed forward instead, we must meet some of the expectations we prematurely provoked, and our implementation's behavior must help users learn expectations more aligned with our intended semantics.
These are general issues. Discussing their concrete manifestation in Yedalog should help other declarative systems that come to face these issues.
Yedalog: Exploring Knowledge at Scale
Brian Chin
Vuk Ercegovac
Peter Hawkins
Mark S. Miller
Franz Och
Chris Olston
1st Summit on Advances in Programming Languages (SNAPL 2015), Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl, Germany, pp. 63-78
With huge progress on data processing frameworks, human programmers are frequently the bottleneck when analyzing large repositories of data. We introduce Yedalog, a declarative programming language that allows programmers to mix data-parallel pipelines and computation seamlessly in a single language. By contrast, most existing tools for data-parallel computation embed a sublanguage of data-parallel pipelines in a general-purpose language, or vice versa. Yedalog extends Datalog, incorporating not only computational features from logic programming, but also features for working with data structured as nested records. Yedalog programs can run both on a single machine, and distributed across a cluster in batch and interactive modes, allowing programmers to mix different modes of execution easily.
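The abstract notes that Yedalog extends Datalog; as a hedged illustration (Yedalog's own syntax is not shown here), the sketch below implements the classic Datalog core it builds on, naive bottom-up evaluation of a transitive-closure rule, in Python. All names are illustrative, not Yedalog code.

```python
# Datalog rules being evaluated:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).

def transitive_closure(edges):
    """Naive bottom-up evaluation: apply rules until a fixed point."""
    path = set(edges)            # base rule: every edge is a path
    while True:
        # recursive rule: join edge(X, Y) with path(Y, Z)
        new = {(x, z) for (x, y) in edges for (y2, z) in path if y == y2}
        if new <= path:          # nothing derived -> fixed point reached
            return path
        path |= new

facts = transitive_closure({("a", "b"), ("b", "c")})
# derives ("a", "c") in addition to the two base edges
```

Bottom-up evaluation like this is what lets a Datalog-style engine choose execution strategy (single machine, batch, or interactive) independently of the program text.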
Modern object-oriented languages have complex features that cause programmers to overspecify their programs. This overspecification hinders automatic optimizers, since they must preserve the overspecified semantics. If an optimizer knew which semantics the programmer intended, it could do a better job.

Making a programmer clarify his intentions by placing assumptions into the program is rarely practical. This is because the programmer does not know which parts of the program's overspecified semantics hinder the optimizer. Therefore, the programmer has to guess which assumption to add. Since the programmer can add many different assumptions to a large program, he will need to place many such assumptions before he guesses right and helps the optimizer.

We present IOpt, a practical optimizer that uses a specification of the programmer's intended semantics to enable additional optimizations. That way, our optimizer can significantly improve the performance of a program. We present case studies in which we use IOpt to speed up two programs by over 50%.

To make specifying the intended semantics practical, IOpt communicates with the programmer. IOpt identifies which assumptions the programmer *should* place, and where he should place them. IOpt ranks each assumption by (i) the likelihood that the assumption conforms to the programmer's intended semantics and (ii) how much the assumption will help IOpt improve the program's performance. IOpt proposes ranked assumptions to the programmer, who just picks those that conform to his intended semantics. With this approach, IOpt keeps the programmer's specification burden low. Our case studies show that the programmer needs to add only a few assumptions to realize the 50% speedup.
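The abstract describes ranking each candidate assumption by (i) its likelihood of matching the programmer's intent and (ii) its expected performance benefit. A minimal sketch of that ranking idea, with entirely hypothetical assumption names and scores (IOpt's actual scoring model is not given in the abstract):

```python
def rank_assumptions(candidates):
    """candidates: list of (description, likelihood in [0, 1],
    estimated speedup in percent). Returns them best-first,
    scored by likelihood-weighted expected benefit."""
    return sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

proposals = rank_assumptions([
    ("field f is never null",    0.9, 10.0),   # score  9.0
    ("loop bound is constant",   0.4, 30.0),   # score 12.0
    ("list l is never aliased",  0.8, 25.0),   # score 20.0
])
# best proposal: "list l is never aliased"
```

The point of the ranking is exactly what the abstract claims: the programmer only reviews a short, ordered list and accepts the assumptions that match his intent.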
Explaining Failures of Program Analyses
With programs getting larger and often more complex with each new release, programmers need all the help they can get in understanding and transforming programs. Fortunately, modern development environments, such as Eclipse, incorporate tools for understanding, navigating, and transforming programs. These tools often use program analyses to extract relevant properties of programs.
These tools are often invaluable to developers; for example, many programmers use refactoring tools regularly. However, poor results from the underlying analyses can compromise a tool's usefulness. For example, a bug-finding tool may produce too many false positives if the underlying analysis is overly conservative, and thus overwhelm the user with too many possible errors in the program. In such cases it would be invaluable for the tool to explain to the user *why* it believes that each bug exists. Armed with this knowledge, the user can decide which bugs are worth pursuing and which are false positives.

The contributions of this paper are as follows: (i) we describe requirements on the structure of an analysis so that we can produce reasons when the analysis fails; the user of the analysis determines whether or not an analysis's results constitute failure. We also describe a simple language that enforces these requirements; (ii) we describe how to produce necessary and sufficient reasons for analysis failure; (iii) we evaluate our system with respect to a number of analyses and programs and find that most reasons are small (and thus usable) and that our system is fast enough for interactive use.
Simplifying Web Traversals by Recognizing Behavior Patterns
Web sites must often serve a wide variety of clients. Thus, it is inevitable that a web site will allow some visitors to find their information quickly while other visitors have to follow many links to get to the information that they need. Worse, as a web site evolves, its navigation may degrade until all visitors have to follow many links to find the information that they need.

This paper describes an extensible system that analyzes web logs to find and exploit opportunities for improving the navigation of a web site. The system is extensible in that the inefficiencies that it finds and eliminates are not predetermined; to search for a new kind of inefficiency, web site administrators can provide a pattern (in a language designed specifically for this purpose) that finds and eliminates the new inefficiency.
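One concrete inefficiency of the kind the abstract describes is a frequent multi-click detour: many visitors reach the same destination page only after several intermediate links, which suggests adding a direct shortcut. The sketch below is a hedged illustration of detecting that one pattern from session logs; it is not the paper's pattern language, and all names and thresholds are invented for the example.

```python
from collections import Counter

def frequent_detours(sessions, min_hops=3, min_count=2):
    """sessions: list of per-visitor page sequences from the web log.
    Returns (start, destination) pairs reached via at least min_hops
    links by at least min_count visitors -- shortcut candidates."""
    detours = Counter()
    for path in sessions:
        for i in range(len(path)):
            for j in range(i + min_hops, len(path)):
                detours[(path[i], path[j])] += 1
    return [pair for pair, n in detours.items() if n >= min_count]

sessions = [
    ["home", "products", "list", "widget"],
    ["home", "search", "list", "widget"],
]
# both visitors needed 3 clicks from "home" to "widget",
# so ("home", "widget") is reported as a shortcut candidate
```

A real system would parameterize the pattern itself, as the paper proposes, rather than hard-coding one detour detector like this.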
Fast Online Pointer Analysis
Martin Hirzel
Michael Hind
ACM Transactions on Programming Languages and Systems (TOPLAS), 29(2), 2007, Article 11
Pointer analysis benefits many useful clients, such as compiler optimizations and bug finding tools. Unfortunately, common programming language features such as dynamic loading, reflection, and foreign language interfaces, make pointer analysis difficult. This article describes how to deal with these features by performing pointer analysis online during program execution. For example, dynamic loading may load code that is not available for analysis before the program starts. Only an online analysis can analyze such code, and thus support clients that optimize or find bugs in it. This article identifies all problems in performing Andersen's pointer analysis for the full Java language, presents solutions to these problems, and uses a full implementation of the solutions in a Java virtual machine for validation and performance evaluation. Our analysis is fast: On average over our benchmark suite, if the analysis recomputes points-to results upon each program change, most analysis pauses take under 0.1 seconds, and add up to 64.5 seconds.
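The article performs Andersen's analysis online in a JVM; as a much-simplified, hedged illustration of the inclusion-based core (offline, and covering only address-of and copy constraints, not loads, stores, or the dynamic-loading machinery the article is actually about), consider:

```python
def andersen(address_of, copies):
    """Inclusion-based (Andersen-style) points-to analysis sketch.
    address_of: (p, x) pairs for statements  p = &x
    copies:     (p, q) pairs for statements  p = q
    Returns a points-to set for each variable."""
    pts = {}
    for p, x in address_of:                 # p = &x  =>  x in pts(p)
        pts.setdefault(p, set()).add(x)
    changed = True
    while changed:                          # propagate to a fixed point
        changed = False
        for p, q in copies:                 # p = q  =>  pts(q) <= pts(p)
            src = pts.get(q, set())
            dst = pts.setdefault(p, set())
            if not src <= dst:
                dst |= src
                changed = True
    return pts

pts = andersen([("p", "x"), ("q", "y")], [("r", "p"), ("r", "q")])
# r may point to both x and y
```

The "online" contribution of the article amounts to rerunning a propagation like this incrementally as dynamic loading and reflection reveal new code, which the sketch above does not attempt.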
The DaCapo Benchmarks: Java Benchmarking Development and Analysis
Robin Garner
Chris Hoffmann
Asjad M. Khan
Kathryn S. McKinley
Rotem Bentzur
Daniel Feinberg
Daniel Frampton
Samuel Z. Guyer
Martin Hirzel
Antony Hosking
Maria Jump
Han Lee
J. Eliot B. Moss
Aashish Phansalkar
Darko Stefanovic
Thomas VanDrunen
Ben Wiedermann
Proceedings of OOPSLA, ACM (2006)
Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran, which stress (4) much less and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.
Understanding the Behavior of Compiler Optimizations
Han Lee
J. Eliot B. Moss
Software: Practice and Experience, 36(2006), pp. 835-844
Compiler writers usually follow some known rules of thumb on the effectiveness of optimizations when implementing compilers. While many of these rules may be correct, it is a far better methodology to base implementation decisions on a scientific evaluation of optimizations. To this end, we present an exploration of the costs and benefits of optimizations implemented in Jikes RVM, a research virtual machine that includes an aggressive optimizing compiler. We measure and report the performance impact due to optimizations, both when the optimizations are used by themselves and when they are used with other optimizations. To understand why optimizations do or do not improve performance, we wrote kernel programs to test and explore the behavior of each optimization. To increase the generality of our results, we report measurements on two architectures (IA32 and PowerPC). Based on our findings, we present a set of recommendations for compiler writers.
Converting Java Classes to Use Generics
Proceedings of OOPSLA, ACM Press (2004), pp. 1-14
Generics offer significant software engineering benefits since they provide code reuse without compromising type safety. Thus generics will be added to the Java language in the next release. While this extension to Java will help programmers when they are writing new code, it will not help legacy code unless it is rewritten to use generics. In our experience, manually modifying existing programs to use generics is complex and can be error prone and labor intensive.

We describe a system, Ilwith, that (i) converts non-generic classes to generic classes and (ii) rewrites their clients to use the newly generified classes. Our experiments with a number of Java container classes show that our system is effective in modifying legacy code to use generics.
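The paper's Ilwith tool targets Java, but the before/after shape of generification can be sketched in Python's typing terms (a hedged cross-language illustration, not the tool's output; `RawBox`/`Box` are invented names):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class RawBox:
    """Before generification: callers get back an unconstrained object,
    so element-type errors surface only at runtime."""
    def __init__(self, item):
        self.item = item
    def get(self):
        return self.item

class Box(Generic[T]):
    """After generification: the element type is tracked, so a static
    type checker can reject misuse of the returned value."""
    def __init__(self, item: T) -> None:
        self.item = item
    def get(self) -> T:
        return self.item

b: Box[str] = Box("hello")
# a static checker now knows b.get() is a str
```

The hard part the paper addresses is doing this rewrite automatically for existing classes *and* all their client code, which a manual before/after example like this glosses over.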
Making Patterns Explicit with Metaprogramming
Lecture Notes in Computer Science / Proceedings of GPCE, Springer(2003), pp. 287-306
Design patterns have been a useful tool for a better understanding of the collaboration between several classes and objects in a program.

One drawback of this approach is the lack of an explicit representation of the patterns used in a program, as the collaboration between classes is normally expressed in the code of the class itself.

In this paper, we present a method for explicitly representing patterns in a program with the help of metaprogramming techniques. The method presented has benefits compared to traditional approaches with respect to documentation and reusability of the program, as well as providing a better separation of the protocol contained in the pattern.