MLGO is a framework for integrating ML techniques systematically in LLVM. It replaces human-crafted optimization heuristics in LLVM with machine learned models. The MLGO framework currently supports two optimizations:

- inlining-for-size
- regalloc-for-performance
The compiler components are both available in the main LLVM repository. This repository contains the training infrastructure and related tools for MLGO.
We use two different ML algorithms to train policies: Policy Gradient and Evolution Strategies. Currently, this repository only supports Policy Gradient training; the release of Evolution Strategies training is on our roadmap.
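As a rough illustration of the Policy Gradient idea, the sketch below implements a generic REINFORCE-style update in plain numpy. It is not this repository's actual trainer: the feature size, action space, and toy environment are all hypothetical placeholders, and the real training loop here is considerably more involved.

```python
# A minimal, generic REINFORCE-style policy-gradient sketch.
# Illustrative only -- not MLGO's actual training code.
import numpy as np

rng = np.random.default_rng(0)
NUM_FEATURES, NUM_ACTIONS = 4, 2               # hypothetical sizes
theta = np.zeros((NUM_FEATURES, NUM_ACTIONS))  # linear policy weights

def policy(features):
    """Softmax action distribution for one observation."""
    logits = features @ theta
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def run_episode():
    """Toy stand-in for a compiler environment: random features,
    with a reward that simply favors action 1."""
    trajectory = []
    for _ in range(8):
        features = rng.normal(size=NUM_FEATURES)
        action = rng.choice(NUM_ACTIONS, p=policy(features))
        reward = 1.0 if action == 1 else 0.0
        trajectory.append((features, action, reward))
    return trajectory

LEARNING_RATE = 0.05
for _ in range(200):
    trajectory = run_episode()
    # Returns-to-go: total reward collected from each step onward.
    returns = np.cumsum([r for _, _, r in trajectory][::-1])[::-1]
    for (features, action, _), ret in zip(trajectory, returns):
        probs = policy(features)
        grad_log = -np.outer(features, probs)  # d log pi(a|s) / d theta ...
        grad_log[:, action] += features        # ... = features * (1[a] - probs)
        theta += LEARNING_RATE * ret * grad_log  # REINFORCE ascent step
```

The essential step is the last line: the parameters move in the direction of the log-probability gradient of the action taken, scaled by the return that followed it.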
Check out this demo for an end-to-end demonstration of how to train your own inlining-for-size policy from scratch with Policy Gradient, or check out this demo for a demonstration of how to train your own regalloc-for-performance policy.
For more details about MLGO, please refer to our paper MLGO: a Machine Learning Guided Compiler Optimizations Framework.
For more details about how to contribute to the project, please refer to contributions.
We occasionally release pretrained models that may be used as-is with LLVM. Models are released as GitHub releases, and are named [task]-[major-version].[minor-version]. The versions are semantic: the major version corresponds to breaking changes on the LLVM/compiler side, and the minor version corresponds to model updates that are independent of the compiler.
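As a concrete (and purely illustrative) reading of that naming scheme, the snippet below splits a release name into its task and version parts. The tag inlining-Oz-1.1 is a made-up example, not a reference to a specific release.

```python
import re

def parse_model_release(name: str) -> tuple[str, int, int]:
    """Split a [task]-[major-version].[minor-version] release name."""
    match = re.fullmatch(r"(?P<task>.+)-(?P<major>\d+)\.(?P<minor>\d+)", name)
    if match is None:
        raise ValueError(f"not a [task]-[major].[minor] release name: {name!r}")
    return match["task"], int(match["major"]), int(match["minor"])

# "inlining-Oz-1.1" is a hypothetical tag used only to exercise the parser.
task, major, minor = parse_model_release("inlining-Oz-1.1")
assert (task, major, minor) == ("inlining-Oz", 1, 1)
```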
When building LLVM, there is a flag -DLLVM_INLINER_MODEL_PATH which you may set to the path to your inlining model. If the path is set to download, then cmake will download the most recent (compatible) model from GitHub to use. Other values for the flag could be:
```
# Model is in /tmp/model, i.e. there is a file /tmp/model/saved_model.pb along
# with the rest of the tensorflow saved_model files produced from training.
-DLLVM_INLINER_MODEL_PATH=/tmp/model

# Download the most recent compatible model
-DLLVM_INLINER_MODEL_PATH=download
```
Currently, the assumptions for the system are:
Training assumes a clang build with ML ‘development-mode’. Please refer to:
The model training-specific prerequisites are:
Pipenv:

```shell
pip3 install pipenv
```
The actual dependencies:

```shell
pipenv sync --system
```
Note that the above command will only work from the root of the repository, since it needs to have Pipfile.lock in the working directory at the time of execution.
If you plan on doing development work, make sure you grab the development and CI categories of packages as well:

```shell
pipenv sync --system --categories "dev-packages ci"
```
Optionally, to run tests (run_tests.sh), you also need:

```shell
sudo apt-get install virtualenv
```
Note that the same tensorflow package is also needed when building LLVM in 'release' mode.
An end-to-end demo using Fuchsia as a codebase from which we extract a corpus and train a model.