MLGO is a framework for integrating ML techniques systematically in LLVM. It replaces human-crafted optimization heuristics in LLVM with machine-learned models. The MLGO framework currently supports two optimizations: inlining-for-size and regalloc-for-performance.
The compiler components are both available in the main LLVM repository. This repository contains the training infrastructure and related tools for MLGO.
We currently use two different ML algorithms to train policies: Policy Gradient and Evolution Strategies. At present, this repository only supports Policy Gradient training; the release of Evolution Strategies training is on our roadmap.
Check out this demo for an end-to-end demonstration of how to train your own inlining-for-size policy from scratch with Policy Gradient, or check out this demo for a demonstration of how to train your own regalloc-for-performance policy.
For more details about MLGO, please refer to our paper MLGO: a Machine Learning Guided Compiler Optimizations Framework.
For more details about how to contribute to the project, please refer to contributions.
We occasionally release pretrained models that may be used as-is with LLVM. Models are released as GitHub releases and are named [task]-[major-version].[minor-version]. The versions are semantic: the major version corresponds to breaking changes on the LLVM/compiler side, and the minor version corresponds to model updates that are independent of the compiler.
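The compatibility rule implied by this scheme can be illustrated with a small sketch (the helper function and the release names below are hypothetical, not part of the repository):

```python
# Hypothetical helper illustrating the [task]-[major-version].[minor-version]
# naming scheme: two model releases target the same compiler when their task
# and major version match, since only major-version bumps are breaking.
def compatible(release_a: str, release_b: str) -> bool:
    def parse(name: str):
        task, _, version = name.rpartition("-")
        major, minor = version.split(".")
        return task, int(major), int(minor)

    task_a, major_a, _ = parse(release_a)
    task_b, major_b, _ = parse(release_b)
    return task_a == task_b and major_a == major_b

print(compatible("inlining-1.0", "inlining-1.2"))  # minor update only: True
print(compatible("inlining-1.2", "inlining-2.0"))  # breaking change: False
```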
When building LLVM, there is a flag -DLLVM_INLINER_MODEL_PATH which you may set to the path to your inlining model. If the path is set to download, then cmake will download the most recent (compatible) model from GitHub to use. Other values for the flag could be:
```sh
# Model is in /tmp/model, i.e. there is a file /tmp/model/saved_model.pb along
# with the rest of the tensorflow saved_model files produced from training.
-DLLVM_INLINER_MODEL_PATH=/tmp/model

# Download the most recent compatible model
-DLLVM_INLINER_MODEL_PATH=download
```
Currently, the assumptions for the system are:
Training assumes a clang build with ML ‘development-mode’. Please refer to:
The model training - specific prerequisites are:
Pipenv:
```sh
pip3 install pipenv
```
The actual dependencies:
```sh
pipenv sync --system
```
Note that the above command will only work from the root of the repository, since it needs `Pipfile.lock` to be in the working directory at the time of execution.
If you plan on doing development work, make sure you grab the development and CI categories of packages as well:
```sh
pipenv sync --system --categories "dev-packages ci"
```
Optionally, to run tests (run_tests.sh), you also need:
```sh
sudo apt-get install virtualenv
```
Note that the same tensorflow package is also needed for building the 'release' mode of LLVM.
An end-to-end demo using Fuchsia as a codebase from which we extract a corpus and train a model.