Compiler Support for Sparse Tensor Computations in MLIR

LLVM Developers' Meeting, https://llvm.swoogo.com/2021devmtg/ (2021)

Abstract

Sparse vectors, matrices, and their multidimensional generalization, tensors, arise in many problems in science, engineering, machine learning, and data analytics. Software that operates on such tensors can exploit the sparsity to reduce both storage requirements and computational time by storing and computing on only the nonzero elements. This exploitation comes at a cost, though, since developing and maintaining sparse software by hand is tedious and error-prone. It therefore makes sense to treat sparsity merely as a property, not a tedious implementation detail, and let the compiler generate sparse code automatically from a sparsity-agnostic definition of the computation. This idea was pioneered for linear algebra in the MT1 project and formalized for tensor algebra in the TACO (Tensor Algebra Compiler) project. In this technical talk, we discuss how compiler support for sparse tensor computations was added to MLIR (LLVM’s extensible infrastructure for building domain-specific compilers). We discuss the concept of sparse tensor types as first-class citizens and show how this simplifies the introduction of new front-ends and back-ends for systems that want to add sparse tensor support. We also show how MLIR can be used for rapid sparse library development, driven either by exhaustively searching for suitable sparse storage formats or by using ML to find such formats more quickly, or even for end-to-end solutions that map sparsity-agnostic specifications of kernels to efficient sparse code at runtime. Finally, we discuss how you can contribute to this new sparse tensor support in MLIR.

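The sketch below makes the "sparsity as a type property" idea concrete: a sparsity-agnostic matrix-vector kernel in which the matrix operand is simply annotated with a sparse tensor encoding. It is illustrative only; the kernel name @spmv and the #CSR alias are chosen for this example, and the attribute and operation spellings approximate the sparse_tensor and linalg dialects as of late 2021, so they may differ in current MLIR.

#CSR = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ]  // rows dense, columns compressed (CSR)
}>

#matvec_trait = {
  indexing_maps = [
    affine_map<(i, j) -> (i, j)>,  // A
    affine_map<(i, j) -> (j)>,     // b
    affine_map<(i, j) -> (i)>      // x (output)
  ],
  iterator_types = ["parallel", "reduction"]
}

// x(i) += A(i, j) * b(j), written without any reference to how A is stored.
func @spmv(%A: tensor<?x?xf64, #CSR>,
           %b: tensor<?xf64>,
           %x: tensor<?xf64>) -> tensor<?xf64> {
  %0 = linalg.generic #matvec_trait
      ins(%A, %b : tensor<?x?xf64, #CSR>, tensor<?xf64>)
      outs(%x : tensor<?xf64>) {
    ^bb0(%a: f64, %bv: f64, %xv: f64):
      %p = arith.mulf %a, %bv : f64
      %s = arith.addf %xv, %p : f64
      linalg.yield %s : f64
  } -> tensor<?xf64>
  return %0 : tensor<?xf64>
}

Running the sparsification pass in mlir-opt over such input rewrites the linalg.generic into loops that visit only the stored nonzero elements of A, while the same kernel with a plain tensor type lowers to ordinary dense loops, which is exactly the separation of concerns the talk advocates.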