
SPIR-V to LLVM IR dialect conversion in MLIR

A final summary of my Google Summer of Code 2020 experience working on MLIR, part of the LLVM Compiler Infrastructure.

Background

Instead of a single intermediate representation (IR) with a closed set of operations and types, MLIR uses dialects: different flavours of IR that group operations and types under some common functionality. These dialects can be progressively converted into one another, as well as translated to IRs outside MLIR, such as LLVM IR or SPIR-V.
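For example, the same integer addition can be written in two different dialects. This is a simplified sketch using the dialect syntax from around 2020; names are illustrative:

```mlir
// The same addition expressed in two MLIR dialects (sketch):
%sum0 = addi %a, %b : i32            // Standard dialect
%sum1 = llvm.add %a, %b : !llvm.i32  // LLVM dialect
```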

My project focuses on two of these dialects, the SPIR-V and LLVM dialects, and aims to implement the missing conversion path from the SPIR-V dialect to the LLVM dialect.

Motivation

The motivation behind this project is that a SPIR-V to LLVM conversion brings SPIR-V into LLVM's ecosystem via MLIR's dialect conversion interface. As a result, we can convert the SPIR-V dialect to the LLVM dialect, lower it to CPU machine code, and JIT-compile it. The SPIR-V to LLVM conversion path also helps with performance checks when designing new conversions, or when benchmarking execution on different hardware.

More practical benefits include supporting SwiftShader, a CPU-based Vulkan implementation, as well as LLVM-based GPU hardware driver compilers such as AMDVLK.

Aims

In my proposal I originally outlined the following aims for the project:

  • Support commonly used types: scalars, vectors, arrays, pointers and structs.

  • Support SPIR-V scalar operations conversion (e.g. arithmetic or bitwise operations).

  • Support operations from GLSL extended instruction set.

  • Support important operations such as spv.func and spv.module, as well as SPIR-V's control flow.

  • Model SPIR-V specific operations such as entry points or specialization constants in LLVM dialect.

I also added a stretch goal - mlir-spirv-cpu-runner - a tool that would allow executing the host code and a GPU kernel entirely on the CPU via the conversion path implemented in this project.

Results

By the end of my Google Summer of Code project, I achieved the following results.

  • Conversion coverage

    In terms of coverage, the SPIR-V to LLVM conversion supports nearly all scalar and GLSL operations, all control flow operations (excluding loop and selection control attributes), and SPIR-V functions and modules. It also supports all of the common types outlined in the Aims section.

    To document precisely how the conversion works for the various types and operations, I have created a conversion manual. This document also describes the limitations of the current conversion, as well as the types and operations that have not been implemented yet.
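    As a simplified illustration of what such a conversion looks like, consider integer addition (a sketch using roughly the mid-2020 dialect syntax; the actual patterns may differ in detail):

    ```mlir
    // SPIR-V dialect: 32-bit integer addition
    %0 = spv.IAdd %arg0, %arg1 : i32

    // After conversion to the LLVM dialect:
    %0 = llvm.add %arg0, %arg1 : !llvm.i32
    ```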

  • mlir-spirv-cpu-runner prototype

    During my project, I realised that the conversion of some SPIR-V operations may not be relevant for LLVM. For example, specialization constants (spv.specConstant in the SPIR-V dialect) are used to inject constant values into half-compiled shader code prior to the final compilation stage, and are therefore not fully relevant to this conversion flow. Similarly, in the CPU world a program's entry point is a "main" function, so the spv.EntryPoint operation conversion mostly matters for preserving the metadata associated with the given entry-point function (e.g. the workgroup size) of the kernel.
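    For instance, since the LLVM dialect has no specialization stage, about the only thing a specialization constant can become is a global value holding its default. A hedged sketch (not necessarily the exact output of the conversion):

    ```mlir
    // SPIR-V dialect: specialization constant with default value 42
    spv.specConstant @sc = 42 : i32

    // A possible LLVM dialect counterpart: a global initialized to the default
    llvm.mlir.global internal constant @sc(42 : i32) : !llvm.i32
    ```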

    As a result, my mentors and I decided that it would be more reasonable to implement mlir-spirv-cpu-runner (originally my stretch goal).

    I have created a prototype of this runner tool, along with all the passes it needs. Please note that the mlir-spirv-cpu-runner patches have not yet been committed and pushed to the main repository.

    There is no multi-threading or parallelism involved in the conversion, so the case considered is a single-threaded GPU kernel with scalar code. Currently, the pipeline for mlir-spirv-cpu-runner can be described as follows:

    • Convert the GPU kernel into SPIR-V dialect and apply all necessary transformations in order to get a valid SPIR-V module.

    • Emulate the kernel call by converting the launch operation into a normal function call. Data is passed from the host to the device by copying it into global variables, which are created in both the host and the kernel code and linked later, when the nested modules are folded.

    • Convert SPIR-V to LLVM via the new conversion path.

    (Figure: the runner's pipeline)

    After these passes, the IR becomes a nested LLVM dialect module: a main module that represents the host code, with a kernel module nested inside. These modules are then linked and executed.
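    The shape of the IR at this point can be sketched as follows (all names are illustrative, and the function bodies are elided):

    ```mlir
    // Nested LLVM dialect module produced by the pipeline (sketch):
    module {
      llvm.func @main() {        // host code
        ...
      }
      module @kernel_module {    // nested kernel module, linked into the host
        llvm.func @kernel() {
          ...
        }
      }
    }
    ```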

Challenges

There were a number of challenges that I encountered while working on the project.

  • LLVM and MLIR have a wide range of APIs that help with data structures and other common tasks. Sometimes it was not easy to find the one that best suited a particular situation. However, thanks to advice from the community, this problem was usually resolved promptly.

  • MLIR is a young and rapidly evolving project, and I found that sometimes the current infrastructure is not enough.

    This was particularly the case for mlir-spirv-cpu-runner. After all transformations had been applied, the IR became a nested LLVM dialect module that had to be translated to proper LLVM IR, with the modules nested inside it linked in. Unfortunately, the current infrastructure of MLIR's JitRunner and ExecutionEngine did not support translating and linking multiple MLIR modules.

    The intermediate solution I implemented was to pass an optional function callback to a custom LLVM IR module builder. This made it possible to fold the nested MLIR module into a single LLVM IR module. It is not a final solution, as mlir::JitRunner and mlir::ExecutionEngine may need to be refined to get closer to, or reuse, the functionality of their LLVM counterparts.

Patches and the work done

I worked on the project incrementally, submitting patches separately for each group of operations or each feature. Below is the list of patches that I submitted during the project, as well as some noteworthy discussions within the community that I participated in. I have grouped the patches logically, based on common functionality or features. All patches have been committed and pushed to the main LLVM repository unless stated otherwise.

  1. Setting up the core infrastructure required for the SPIR-V to LLVM dialect conversion

    Patches:

  2. Main conversion patterns for scalar operations

    These patches include implementations and tests for arithmetic, bitwise, cast, comparison, and logical operations.

    Patches:

  3. Extra type conversions

    Since SPIR-V reuses Standard dialect types, there was initially no need to add scalar or vector type conversions to the LLVMTypeConverter. Later, additional patterns were added to support structs, arrays, runtime arrays and pointers.
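    The conversion manual lists the exact rules; roughly, the mappings look like this (LLVM dialect type syntax as of 2020, with storage classes and decorations glossed over):

    ```mlir
    // Illustrative type mappings (approximate):
    //   !spv.array<4 x f32>      ->  !llvm.array<4 x float>
    //   !spv.ptr<i32, Function>  ->  !llvm.ptr<i32>
    //   vector<4xf32>            ->  !llvm.vec<4 x float>
    ```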

    Patches:

  4. SPIR-V function and module conversions

    These patches allow converting spv.func with its control attributes and return operations, and include a basic conversion of the spv.module operation.
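    A small sketch of what this conversion does (illustrative names, approximate 2020 syntax; "None" is the SPIR-V function control attribute):

    ```mlir
    // SPIR-V dialect function
    spv.func @square(%arg0: i32) -> i32 "None" {
      %0 = spv.IMul %arg0, %arg0 : i32
      spv.ReturnValue %0 : i32
    }

    // Converted LLVM dialect function (sketch)
    llvm.func @square(%arg0: !llvm.i32) -> !llvm.i32 {
      %0 = llvm.mul %arg0, %arg0 : !llvm.i32
      llvm.return %0 : !llvm.i32
    }
    ```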

    Patches:

  5. Control flow operations conversion

    These patches implement conversions for branch, function call and structured control flow operations.
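    Unstructured branches map quite directly; for example (a sketch, with block names chosen for illustration):

    ```mlir
    // SPIR-V dialect conditional branch
    spv.BranchConditional %cond, ^then, ^else

    // LLVM dialect equivalent
    llvm.cond_br %cond, ^then, ^else
    ```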

    Patches:

  6. Memory related operations conversion

    A number of patches that implement conversion patterns for spv.Load, spv.Variable and other operations that involve memory handling.
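    A hedged sketch of the memory operations involved (approximate syntax; the variable itself lowers to a stack allocation, which is elided here):

    ```mlir
    // SPIR-V dialect: a function-local variable, a load and a store
    %ptr = spv.Variable : !spv.ptr<f32, Function>
    %val = spv.Load "Function" %ptr : f32
    spv.Store "Function" %ptr, %val : f32

    // Approximate LLVM dialect counterparts of the load and store:
    %val = llvm.load %ptr : !llvm.ptr<float>
    llvm.store %val, %ptr : !llvm.ptr<float>
    ```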

    Patches:

  7. GLSL operations conversion

    These patches introduce conversion patterns for operations from GLSL extended instruction set.
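    Many of these map onto LLVM intrinsic operations; for example, a square root might be converted roughly as follows (sketch, approximate 2020 syntax):

    ```mlir
    // SPIR-V dialect: square root from the GLSL extended instruction set
    %0 = spv.GLSL.Sqrt %arg0 : f32

    // Mapped to the corresponding LLVM intrinsic operation
    %0 = "llvm.intr.sqrt"(%arg0) : (!llvm.float) -> !llvm.float
    ```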

    Patches:

  8. Other operations conversions

    These patches include conversions for operations that do not fall into the other categories (such as spv.constant or spv.Undef).

    Patches:

  9. mlir-spirv-cpu-runner patches

    In order to implement mlir-spirv-cpu-runner, I had to submit extra patches to handle spv.EntryPoint and spv.AccessChain conversion, to support array strides and struct offsets, and to create a pass that emulates the GPU kernel call in the LLVM dialect.

    Patches:

    Related discussions:

  10. Documentation and style fixes

    These patches contain updates to the conversion manual, as well as some style and bug fixes.

    Patches:

  11. Patches outside SPIR-V to LLVM conversion

    I have also submitted a couple of patches outside the scope of the project. These include, for example, improving the SPIR-V documentation, fixing bugs, and adding support for particular operations.

Important links

The original proposal can be found in this repository under the proposal directory. In addition, an online public version can be found here.

During my project, I kept a document where I interacted with my mentors and tracked what had been done, what challenges I had encountered, and my plans. The public copy is available here (the comments from the original document have not been preserved in the copy).

I have also created a manual to document what is supported in the SPIR-V to LLVM dialect conversion at the moment. This can be found on the official MLIR website under SPIR-V Dialect to LLVM Dialect conversion manual.

All conversion code that I wrote can be found in SPIRVToLLVM directory in LLVM repository, particularly:

  • Conversion headers and implementation can be found here and here.

  • Tests are located here.

Since the mlir-spirv-cpu-runner code has not landed upstream yet, I have not included its location. However, it can be found in the related patches: https://reviews.llvm.org/D86112 and https://reviews.llvm.org/D86108.

Future work

Personally, I plan to continue working on the SPIR-V to LLVM dialect conversion. However, any contributions are welcome. The work on the conversion may be continued in the following ways:

  1. Land the mlir-spirv-cpu-runner

    The current version of mlir-spirv-cpu-runner uses a custom function callback to propagate information about how to construct an LLVM IR module. This is a workaround rather than a proper design, and needs a better solution. A possible approach would be to improve mlir::ExecutionEngine. More can be found in the related revision and discussion.

  2. Add more type/operations conversions or scale existing conversion patterns

    A great way to continue the current work would be adding new conversion patterns, e.g. for atomic operations. Also, more types like spv.matrix can be supported.

    Another possible contribution is to scale up some of the existing patterns, including but not limited to extending spv.constant to support arrays and structs, or mapping spv.loop's control to LLVM IR metadata.

    Note that what has not been done yet can easily be deduced from the conversion manual described above.

  3. Model SPIR-V decorations in LLVM dialect

    This project did not intend to add support for SPIR-V decoration attributes. However, they can be mapped to LLVM IR metadata/flags/etc. A good starting point would be a post on modelling some of these decorations.

  4. Map GPU-level multi-threading/parallelism to LLVM

    A very interesting next step is to find a way to represent a GPU's workgroups, blocks and threads at the CPU level. This requires a major discussion within the community, so it can be considered a long-term goal.

Acknowledgement

I would like to thank my mentors, Lei Zhang and Mahesh Ravishankar, for their guidance and support throughout the project. I have learnt a lot about MLIR's ecosystem, SPIR-V, Vulkan and GPU programming. I would also like to thank Alex Zinenko for his help on the LLVM side, and River Riddle for his help with code reviews and C++/LLVM/MLIR APIs.

Special thanks to all LLVM/MLIR community members for their advice and comments.
