Albert Cohen
Albert is a research scientist at Google. An alumnus of École Normale Supérieure de Lyon and the University of Versailles, he has been a research scientist at Inria, a visiting scholar at the University of Illinois, an invited professor at Philips Research, and a visiting scientist at Facebook Artificial Intelligence Research. Albert works on parallelizing and optimizing compilers, machine learning compilers, and parallel and synchronous programming languages, with applications to high-performance computing, artificial intelligence, and reactive control.
Authored Publications
Code Generation for Data-Dependent Stencils
Mohammed Essadki
Bertrand Michel
Bruno Maugars
Oleksandr Zinenko
Nicolas Vasilache
CGO, IEEE (2023)
Abstract
Numerical simulation often resorts to iterative in-place stencils such as the Gauss-Seidel or Successive Overrelaxation (SOR) methods. Writing high-performance implementations of such stencils requires significant effort and time; it also involves non-local transformations beyond the stencil kernel itself. While automated code generation is a mature technology for image processing stencils, convolutions, and out-of-place iterative stencils (such as the Jacobi method), the optimization of in-place stencils requires manual craftsmanship. Building on recent advances in tensor compiler construction, we propose the first domain-specific code generator for iterative in-place stencils. Starting from a generic tensor compiler implemented in the MLIR framework, tensor abstractions are incrementally refined and lowered to parallel, tiled, fused, and vectorized code. We used our generator to implement a realistic, implicit solver for structured meshes, and demonstrate results competitive with an industrial computational fluid dynamics framework. We also compare with stand-alone stencil kernels for dense tensors.
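For readers outside the HPC stencil world, here is a minimal NumPy sketch contrasting the out-of-place Jacobi update with the in-place Gauss-Seidel/SOR pattern the paper targets; the grid, boundary handling, and relaxation factor are illustrative assumptions, and this is not the code produced by the generator.

```python
import numpy as np

def jacobi_step(u):
    """Out-of-place: every read comes from the previous iterate,
    so all interior points can be updated in parallel."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def sor_step(u, omega=1.5):
    """In-place: reads mix already-updated and old values, creating the
    loop-carried dependences that make naive tiling and vectorization hard."""
    n, m = u.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                         u[i, j - 1] + u[i, j + 1])
            u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u
```

The dependence pattern of `sor_step` is exactly what forces the non-local transformations mentioned in the abstract.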
RL4ReAl: Reinforcement Learning for Register Allocation
S. VenkataKeerthy
Siddharth Jain
Anilava Kundu
Rohit Aggarwal
Ramakrishna Upadrasta
CC 2023, ACM
Abstract
We aim to automate decades of research and experience in register allocation by leveraging machine learning. We tackle this problem by embedding a multi-agent reinforcement learning algorithm within LLVM and training it with state-of-the-art techniques. We formalize the constraints that precisely define the problem for a given instruction-set architecture, while ensuring that the generated code preserves semantic correctness. We also develop a gRPC-based framework providing a modular and efficient compiler interface for training and inference. Our approach is architecture independent: we show experimental results targeting Intel x86 and ARM AArch64. Our results match or outperform the heavily tuned, production-grade register allocators of LLVM.
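As background for the constraint formalization, the toy sketch below states the core register-allocation constraint, namely that interfering live ranges must not share a physical register, as a greedy coloring over an interference graph; the graph, register file, and policy are hypothetical, and this is not the RL4ReAl agent or its LLVM integration.

```python
# Toy interference graph: nodes are virtual registers (live ranges),
# edges connect ranges that are simultaneously live.
interference = {
    "v1": {"v2", "v3"},
    "v2": {"v1", "v3"},
    "v3": {"v1", "v2"},
    "v4": {"v1"},
}
phys_regs = ["r0", "r1", "r2"]  # hypothetical 3-register ISA

def greedy_allocate(graph, regs):
    """Assign each live range a register no interfering neighbor uses;
    a range that cannot be colored would be spilled to memory."""
    assignment = {}
    for v in sorted(graph, key=lambda n: len(graph[n]), reverse=True):
        taken = {assignment[n] for n in graph[v] if n in assignment}
        free = [r for r in regs if r not in taken]
        assignment[v] = free[0] if free else "spill"
    return assignment

print(greedy_allocate(interference, phys_regs))
# {'v1': 'r0', 'v2': 'r1', 'v3': 'r2', 'v4': 'r1'}
```

A learned allocator explores this same constrained decision space, but replaces the fixed greedy policy with trained agents.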
Structured Operations: Modular Design of Code Generators for Tensor Compilers
Nicolas Vasilache
Oleksandr Zinenko
Aart Bik
Mahesh Ravishankar
Thomas Raoux
Alexander Belyaev
Matthias Springer
Tobias Gysi
Diego Caballero
Stephan Herhut
Stella Laurenzo
LCPC 2022, Springer (2023)
Abstract
The performance of machine learning systems relies heavily on code generators tailored to tensor computations. We propose an approach to the design and implementation of such code generators that leverages the natural structure of tensor algebra, and we illustrate the progressive lowering of domain-specific abstractions in the MLIR infrastructure.
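To make "progressive lowering" concrete, the sketch below refines a toy structured matmul operation into an explicit loop nest in two small steps; the miniature IR and pass structure are invented for illustration and do not reflect MLIR's actual APIs.

```python
from dataclasses import dataclass

# Step 1: a high-level "structured" op records what is computed,
# not how, leaving tiling and vectorization decisions to later passes.
@dataclass
class MatmulOp:
    m: int
    n: int
    k: int

def lower_to_loops(op: MatmulOp):
    """Step 2: refine the structured op into an explicit loop nest.
    A real compiler would tile, fuse, and vectorize along the way."""
    def kernel(A, B, C):
        for i in range(op.m):
            for j in range(op.n):
                acc = C[i][j]
                for p in range(op.k):
                    acc += A[i][p] * B[p][j]
                C[i][j] = acc
    return kernel

matmul = lower_to_loops(MatmulOp(m=2, n=2, k=3))
A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 0], [0, 1], [1, 1]]
C = [[0, 0], [0, 0]]
matmul(A, B, C)
print(C)  # [[4, 5], [10, 11]]
```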
Autotuning Convolutions is Easier Than You Think
Nicolas Tollenaere
Guillaume Iooss
Stéphane Pouget
Hugo Brunie
Christophe Guillon
P. Sadayappan
Fabrice Rastello
ACM TACO (2022)
Abstract
A wide range of scientific and machine learning applications depend on highly optimized implementations of tensor computations. Exploiting the full capacity of a given processor architecture remains a challenging task, due to the complexity of the microarchitectural features that come into play when seeking near-peak performance. Among state-of-the-art loop-transformation techniques for performance optimization, TVM's AutoScheduler tends to outperform other systems. It often yields higher performance than vendor libraries, but takes a large number of runs to converge and involves a complex training environment.
In this paper, we define a structured configuration space that enables much faster convergence to high-performance code versions, using only random sampling of candidates. We focus on two-dimensional convolutions on CPUs. Compared to state-of-the-art libraries, our structured search space enables higher performance for typical tensor shapes encountered in convolution stages of deep learning pipelines. Compared to autotuning code generators like AutoScheduler, it prunes the search space while increasing the density of efficient implementations. We analyze the impact on convergence speed and performance distribution on two Intel x86 processors and one ARM AArch64 processor. We match or outperform the performance of the state-of-the-art oneDNN library and TVM's AutoScheduler, while reducing the autotuning effort by at least an order of magnitude.
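The sketch below illustrates the general idea of a structured configuration space: instead of sampling arbitrary tile sizes, candidates are drawn only from tilings that exactly divide the problem dimensions, so every sample is a legal, load-balanced implementation. The dimensions, the divisibility constraint as the sole structure, and the cost stub are simplifying assumptions, not the paper's actual space.

```python
import random

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def structured_candidates(H, W, C):
    """Only tilings that evenly divide each dimension are legal,
    which prunes the space and raises the density of good points."""
    return [(th, tw, tc)
            for th in divisors(H)
            for tw in divisors(W)
            for tc in divisors(C)]

def measure(cfg):
    # Stand-in for compiling and timing the candidate kernel.
    th, tw, tc = cfg
    return abs(th * tw * tc - 512) + random.random()

space = structured_candidates(H=56, W=56, C=64)
samples = random.sample(space, k=min(100, len(space)))
best = min(samples, key=measure)
print(f"sampled {len(samples)} of {len(space)} candidates; best tile {best}")
```

Because every point in the space is already a plausible implementation, plain random sampling converges quickly, which is the paper's central observation.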
Abstract
This paper considers the correctness of domain-specific compilers for tensor programming languages through the study of Halide, a popular representative. It describes a translation validation algorithm for affine Halide specifications, independently of the scheduling language. The algorithm relies on “prophetic” annotations added by the compiler to the generated array assignments. The annotations provide a refinement mapping [Abadi and Lamport 1988] from assignments in the generated code to the tensor definitions from the specification. Our implementation leverages an affine solver and a general SMT solver, and scales to complete Halide benchmarks.
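As a flavor of what a refinement mapping lets a validator check, the sketch below uses the Z3 SMT solver to prove that a generated assignment with a rewritten index still computes the tensor definition from a toy specification; the spec, the index rewrite, and the bound are invented for illustration, and the paper's actual algorithm additionally relies on an affine solver.

```python
from z3 import Function, IntSort, Int, ForAll, Implies, And, Solver, Not, unsat

inp = Function("inp", IntSort(), IntSort())  # symbolic input tensor
j = Int("j")
N = 1024  # hypothetical tensor extent

def spec_value(i):
    # Specification: out[i] = inp(i) + 1 for 0 <= i < N.
    return inp(i) + 1

def generated_value(j):
    # Generated code runs a shifted loop: out[j-1] = (inp(j-1) + 2) - 1.
    return (inp(j - 1) + 2) - 1

# Refinement mapping: generated index j corresponds to spec index j - 1,
# over the shifted loop bounds 1 <= j <= N.
claim = ForAll([j], Implies(And(1 <= j, j <= N),
                            generated_value(j) == spec_value(j - 1)))

s = Solver()
s.add(Not(claim))  # the claim is valid iff its negation is unsatisfiable
print("validated" if s.check() == unsat else "mismatch")
```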
Abstract
We propose a novel solution for the Register Allocation problem, leveraging multi-agent hierarchical Reinforcement Learning. We formalize the constraints that precisely define the problem for a given instruction-set architecture, while ensuring that the generated code preserves semantic correctness. We also develop a gRPC-based framework providing a modular and efficient compiler interface for training and inference. Experimental results match or outperform the LLVM register allocators, targeting Intel x86 and ARM AArch64.
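The value of an RPC boundary between compiler and learner is easiest to see in miniature. The in-process stand-in below shows the shape of such an exchange, with the compiler serializing allocation state and the learner replying with an action; the message schema and placeholder policy are hypothetical, and a real deployment would put gRPC on this boundary rather than a direct call.

```python
import json

def compiler_request(live_range, candidates):
    """Compiler side: describe the decision point to the learner."""
    return json.dumps({"live_range": live_range, "candidates": candidates})

def learner_serve(request_json):
    """Learner side: a trained model would score each candidate;
    here a placeholder policy just picks the first one."""
    req = json.loads(request_json)
    return json.dumps({"assign": req["candidates"][0]})

reply = learner_serve(compiler_request("v7", ["r0", "r3", "spill"]))
print(json.loads(reply))  # {'assign': 'r0'}
```

Keeping the interface to serialized messages is what lets training and inference evolve independently of the compiler.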
Abstract
We investigate the programming of reactive systems combining closed-loop control with performance-intensive components such as Machine Learning (ML). Reactive control systems are often safety-critical and associated with real-time execution requirements, a domain of predilection for synchronous programming languages. Extending the high levels of assurance found in reactive control systems to computationally intensive code remains an open issue. We tackle it by unifying concepts and algorithms from synchronous languages with abstractions commonly found in general-purpose and ML compilers. This unification across embedded and high-performance computing enables a high degree of reuse of compiler abstractions and code. We first recall commonalities between dataflow synchronous languages and the static single assignment (SSA) form of general-purpose/ML compilers. We highlight the key mechanisms of synchronous languages that SSA does not cover: denotational concepts such as synchronizing computations with an external time base, cyclic and reactive I/O, as well as the operational notions of relaxing control-flow dominance and the modeling of absent values. We discover that initialization-related static analyses and code generation aspects can be fully decoupled from other aspects of synchronous semantics such as memory management and causality analysis, the latter being covered by existing dominance-based algorithms of SSA-form compilers. We show how the SSA form can be seamlessly extended to enable all SSA-based transformations and optimizations on reactive programs with synchronous concurrency. We derive a compilation flow suitable for both the high-performance and reactive aspects of a control application by embedding the Lustre dataflow synchronous language into the SSA-based MLIR/LLVM compiler infrastructure. This allows the modeling of signal processing and deep neural network inference in the (closed) loop of feedback-directed control systems. With only minor effort leveraging the MLIR infrastructure, the generated code matches or outperforms state-of-the-art synchronous language compilers on computationally intensive ML applications.
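For readers unfamiliar with dataflow synchronous languages, the sketch below mimics a Lustre-style stream equation with an initialized delay (Lustre's `fby`/`pre` operators) as a Python generator; the node and its rendering as a loop with explicit state are illustrative, not the paper's MLIR embedding.

```python
# A Lustre-style node:  cnt = 0 fby (cnt + step)
# i.e. cnt starts at 0, then at each tick adds the current `step` input.
def counter(steps):
    state = 0                 # the `fby` delay becomes explicit state,
    for step in steps:        # much like a loop-carried SSA phi value
        yield state
        state = state + step

print(list(counter([1, 1, 2, 3])))  # [0, 1, 2, 4]
```

The loop-carried `state` is exactly the kind of value an SSA phi node already models, which hints at why the embedding described above is natural.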
Progressive Raising in Multi-level IR
Lorenzo Chelini
Andi Drebes
Alex Zinenko
Nicolas Vasilache
Tobias Grosser
Henk Corporaal
International Conference on Code Generation and Optimization (CGO), ACM (2021)
Abstract
Multi-level intermediate representation (IR) rewriting promises to lower the cost of designing domain-specific compilers by providing a non-opinionated IR, making it possible to model the right abstraction level for the problem at hand. High-level abstractions are then lowered to low-level IR using progressive lowering, i.e., from higher-level representations down to the lowest in small steps across the abstraction levels. But progressive lowering works in a single direction: high-level operations can be transformed into operations at a lower level of abstraction, but low-level operations are never raised to high-level ones. Thus, the entry point into the lowering pipeline defines the highest level of abstraction for all subsequent transformations, potentially limiting the set of applicable optimizations. This is especially true for general-purpose languages, which are not semantically rich enough to enter the higher parts of the lowering pipeline, precluding aggressive domain-specific optimizations. To enable effective domain-specific compilation via progressive lowering in a multi-level IR compiler, we propose Multi-Level Tactics. Multi-Level Tactics allows us to describe computational patterns and raise them to high-level abstractions declaratively. It enables a complementary path to progressive lowering, which we call progressive raising, extending the set of optimizations that can be performed on general-purpose languages in a multi-level IR compiler.
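The essence of raising is pattern matching followed by rewriting to a higher abstraction level. The toy sketch below recognizes a matmul-shaped loop nest in a miniature IR and rewrites it into a single high-level op that later passes could optimize aggressively; the IR encoding and matcher are invented for illustration, whereas the paper expresses such patterns declaratively as multi-level tactics over MLIR.

```python
def is_matmul_nest(op):
    """Match: for i, for j, for k: C[i][j] += A[i][k] * B[k][j]."""
    return (op["kind"] == "loop_nest"
            and op["ivs"] == ["i", "j", "k"]
            and op["body"] == {"update": "C[i][j]",
                               "expr": "A[i][k] * B[k][j]"})

def progressive_raise(module):
    # Raise recognized patterns; leave everything else at its level.
    return [{"kind": "matmul", "ins": ["A", "B"], "out": "C"}
            if is_matmul_nest(op) else op
            for op in module]

module = [{"kind": "loop_nest", "ivs": ["i", "j", "k"],
           "body": {"update": "C[i][j]", "expr": "A[i][k] * B[k][j]"}},
          {"kind": "loop_nest", "ivs": ["i"],
           "body": {"update": "x[i]", "expr": "x[i] + 1"}}]
print(progressive_raise(module))
```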
Reconciling Optimization With Secure Compilation
Son Tuan Vu
Arnaud De Grandmaison
Christophe Guillon
Karine Heydemann
Proceedings of the ACM on Programming Languages (PACMPL) (2021)
Abstract
Software protections against side-channel and physical attacks are essential to the development of secure applications. Such protections are meaningful at the machine-code or micro-architectural level, but they typically do not carry observable semantics at the source level. This renders them susceptible to miscompilation, so security engineers embed input/output side-effects to prevent optimizing compilers from altering them. Yet these side-effects are error-prone and compiler-dependent, and the current practice involves analyzing the generated machine code to make sure security or privacy properties are still enforced. They may also be too expensive for fine-grained protections such as control-flow integrity. We introduce observations of the program state that are intrinsic to the correct execution of security protections, along with means to specify and preserve observations across the compilation flow. Such observations complement the input/output semantics-preservation contract of compilers. We introduce an opacification mechanism to preserve and enforce a partial ordering of observations. This approach is compatible with a production compiler and does not incur any modification to its optimization passes. We validate the effectiveness and performance of our approach on a range of benchmarks, expressing the secure compilation of these applications in terms of observations to be made at specific program points.
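To give an intuition of what preserving a partial order of observations means, the sketch below checks that every pair of observations that must stay ordered in a source trace still appears in that order in the trace of the optimized binary, and that none was dropped; the trace format and observation names are illustrative assumptions, not the paper's mechanism.

```python
def order_preserved(source_obs, optimized_trace, ordered_pairs):
    """Check that each (a, b) pair that must stay ordered appears with
    a before b in the optimized trace, and that no observation of the
    protection was optimized away."""
    pos = {}
    for idx, o in enumerate(optimized_trace):
        pos.setdefault(o, idx)
    if not all(o in pos for o in source_obs):
        return False  # an observation was dropped by the optimizer
    return all(pos[a] < pos[b] for a, b in ordered_pairs)

# Hypothetical traces around a masked cryptographic operation:
source = ["mask_loaded", "secret_used", "mask_cleared"]
optimized = ["mask_loaded", "secret_used", "mask_cleared"]
must_order = [("mask_loaded", "secret_used"), ("secret_used", "mask_cleared")]
print(order_preserved(source, optimized, must_order))  # True
```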
Abstract
Floating-point (FP) units in processors are generally limited to supporting a subset of the formats defined by the IEEE 754 standard. As a result, high-efficiency languages and optimizing compilers for high-performance computing only support IEEE standard types, and applications needing higher precision involve cumbersome memory management and calls to external libraries. Furthermore, numerical computations often involve iterative solvers where the residual error is a function of the input data, or where dynamically adaptive precision can accelerate convergence; numerical analysts have to resort to explicit conversions and multi-versioning, resulting in code bloat and making the intent of the program even less clear. We present an extension of the C type system that can represent generic FP operations and formats, supporting both static and dynamically variable precision. We design and implement a compilation flow bridging the abstraction gap between this type system and low-level FP instructions or software libraries. This flow enables classical optimizations as well as multi-precision-specific ones associated with memory management and target-specific implementation. The effectiveness of our solution is demonstrated through an LLVM-based implementation, leveraging aggressive optimizations in LLVM including the Polly loop nest optimizer, and leveraging two alternative backend code generators: one targeting the ISA of a variable-precision FP arithmetic co-processor, and one targeting the MPFR multi-precision floating-point library. Both targets support the statically and dynamically adaptable precision and size of our language extension. On the PolyBench suite, our optimizing compilation flow targeting MPFR outperforms the Boost programming interface for the MPFR library by a factor of 1.84.
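As a down-to-earth example of dynamically adaptive precision in an iterative solver, the sketch below raises the working precision of a Newton iteration as the iterate converges, using the mpmath library as a stand-in for the compiled multi-precision code paths described above; the solved equation and the precision schedule are illustrative.

```python
from mpmath import mp, mpf

def newton_sqrt(a, target_digits):
    """Compute sqrt(a) by Newton iteration, doubling the working
    precision as the iterate converges instead of paying for full
    precision from the first step."""
    digits = 10
    x = mpf(a) ** 0.5                # cheap low-precision seed
    while digits < target_digits:
        digits = min(2 * digits, target_digits)
        mp.dps = digits + 5          # working precision, with guard digits
        x = mpf(x)                   # re-read the iterate at the new precision
        for _ in range(2):           # Newton step: x <- (x + a/x) / 2
            x = (x + mpf(a) / x) / 2
    mp.dps = target_digits
    return +x                        # round to the requested precision

print(newton_sqrt(2, 50))
```

Because Newton's method roughly doubles the number of correct digits per step, most of the work happens at low precision, which is the effect the adaptive-precision type system aims to make expressible without manual multi-versioning.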