Incorporate tosa/ tests into our test infrastructure (#1501)
In #1466, we ran into some issues when adding the newly introduced tosa/
suite to relevant test targets, so this PR revamps this system to
properly integrate tosa/ as well as simplify similar work in the future.
Instead of the previous setup, with a single `check-stablehlo` custom target
and a bunch of suites as its dependencies, we now have three
custom targets:
1) `check-stablehlo-ci`, whose name makes it clear that this is what
runs in CI.
2) `check-stablehlo-slow` for slow-running tests, like the testdata/
suite, that we'd like to separate from the other suites so that
humans don't have to run them every time.
3) `check-stablehlo-quick` for everything else.
Each suite becomes a dependency of either -slow or -quick, making its
nature explicit and clearly documented.
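In CMake terms, the split described above could look roughly like this. This is only a sketch: the suite target names and the assumption that `check-stablehlo-ci` runs both buckets are illustrative, not taken from the repo's actual build files.

```cmake
# Sketch only: illustrative wiring for the quick/slow/ci split.
add_custom_target(check-stablehlo-quick)
add_custom_target(check-stablehlo-slow)

# Each suite is attached to exactly one bucket (suite names hypothetical).
add_dependencies(check-stablehlo-quick check-stablehlo-tests)
add_dependencies(check-stablehlo-slow check-stablehlo-testdata)

# Assumption: CI runs everything, i.e. ci = quick + slow.
add_custom_target(check-stablehlo-ci)
add_dependencies(check-stablehlo-ci check-stablehlo-quick check-stablehlo-slow)
```

With this shape, adding a new suite means choosing a bucket explicitly, which is what makes its quick/slow nature self-documenting.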
Also, it looks like the tosa/ suite got broken by one of the LLVM bumps
that happened between when its PR got created and when it got merged. I
fixed those breakages too.

StableHLO is an operation set for high-level operations (HLO) in machine learning (ML) models. Essentially, it's a portability layer between different ML frameworks and ML compilers: ML frameworks that produce StableHLO programs are compatible with ML compilers that consume StableHLO programs.
Our goal is to simplify and accelerate ML development by creating more interoperability between various ML frameworks (such as TensorFlow, JAX and PyTorch) and ML compilers (such as XLA and IREE).
StableHLO is based on the MHLO dialect and enhances it with additional functionality, including serialization and versioning. We use MLIR bytecode as serialization format and provide backward and forward compatibility guarantees. This ensures compatibility between frameworks and compilers, even as StableHLO continues to evolve.
This repository includes the StableHLO specification along with an MLIR-based implementation in C++ and Python, which you can use to define StableHLO programs for consumption by compilers such as XLA and IREE.
Here's how to build the StableHLO repo on Linux or macOS:
CMake is our primary build tool, so before you begin make sure that you have CMake and Ninja installed.
If you're using Linux, we recommend installing lld as well - we have observed it to be noticeably faster than alternatives on our typical software and hardware configurations.
```shell
# On Linux
sudo apt install cmake ninja-build lld

# On macOS
brew install cmake ninja
```
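Before configuring, you can sanity-check that these tools are on your PATH. This is just a sketch; `require` is a hypothetical helper, not part of the repo:

```shell
# Hypothetical helper: complain and fail if a required tool is missing.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# Usage (sketch):
# require cmake && require ninja
```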
Set the `LLVM_ENABLE_LLD` shell variable depending on your preferences. We recommend setting it to ON on Linux and to OFF on macOS.
[[ "$(uname)" != "Darwin" ]] && LLVM_ENABLE_LLD="ON" || LLVM_ENABLE_LLD="OFF"
Clone the StableHLO repo and the LLVM repository:
git clone https://github.com/openxla/stablehlo
cd stablehlo && git clone https://github.com/llvm/llvm-project.git
Cloning the LLVM repository may take a few minutes.
Make sure you check out the correct commit in the LLVM repository:
(cd llvm-project && git fetch && git checkout $(cat ../build_tools/llvm_version.txt))
You need to do this every time llvm_version.txt changes.
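If you'd rather not remember that rule, one possible convenience is a tiny sync function that re-runs the checkout only when HEAD no longer matches the pin. This is a hypothetical sketch, not part of the repo's tooling, and it assumes the directory layout above:

```shell
# Hypothetical helper: re-pin llvm-project to build_tools/llvm_version.txt,
# doing nothing when the checkout already matches the pinned commit.
llvm_pin_sync() {
  pin=$(cat build_tools/llvm_version.txt)
  head=$(git -C llvm-project rev-parse HEAD)
  if [ "$head" != "$pin" ]; then
    git -C llvm-project fetch && git -C llvm-project checkout "$pin"
  fi
}
```

Running it after every pull is then a cheap no-op unless llvm_version.txt actually changed.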
Configure and build MLIR:
build_tools/build_mlir.sh ${PWD}/llvm-project/ ${PWD}/llvm-build
This will take a considerable amount of time. For example, on a MacBook Pro with an M1 Pro chip, building MLIR took around 10 minutes at the time of writing.
Again, you need to do this every time llvm_version.txt changes.
Build StableHLO as a standalone library:
```shell
mkdir -p build && cd build
cmake .. -GNinja \
  -DLLVM_ENABLE_LLD="$LLVM_ENABLE_LLD" \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=On \
  -DMLIR_DIR=${PWD}/../llvm-build/lib/cmake/mlir
```
Now you can make sure it works by running some tests:
ninja check-stablehlo-tests
You should see results like this:
Testing Time: 5.99s
Passed: 47
This runs all the tests in stablehlo/tests/.
Building an amazing portability layer between ML frameworks and ML compilers requires collaboration across the whole ML industry, so we're happy to have your help on the StableHLO project.
We're using GitHub issues / pull requests to organize development and openxla-discuss to have longer discussions. We also have a #stablehlo channel on the OpenXLA Discord server.