Fuchsia uses Clang as its official compiler.
You need CMake version 3.8.0 or newer to execute these commands; this was the first version to support Fuchsia. While CMake supports different build systems, we recommend using Ninja, which is expected to be installed on your system.
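To confirm both tools are present before configuring, a quick sanity check (any CMake at or above 3.8.0 will do):

cmake --version
ninja --version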
The example commands below use ${LLVM_SRCDIR} to refer to the root of your LLVM source tree checkout and assume the monorepo layout. When using this layout, each sub-project has its own top-level directory. The https://fuchsia.googlesource.com/third_party/llvm-project repository emulates this layout via Git submodules and is updated automatically by Gerrit. After setting the ${LLVM_SRCDIR} variable, you can use the following command to download this repository including all the submodules:
LLVM_SRCDIR=${HOME}/llvm-project git clone --recurse-submodules https://fuchsia.googlesource.com/third_party/llvm-project ${LLVM_SRCDIR}
To update the repository including all the submodules, you can use:
git pull --recurse-submodules
Alternatively, you can use the semi-official monorepo https://github.com/llvm-project/llvm-project-20170507 maintained by the LLVM community. This repository does not use submodules, which means you can use the standard Git workflow:
git clone https://github.com/llvm-project/llvm-project-20170507 ${LLVM_SRCDIR}
Before building the runtime libraries that are built along with the toolchain, you need a Fuchsia SDK. We expect the SDK to be located in the directory pointed to by the ${SDK_DIR} variable:
SDK_DIR=${HOME}/sdk/garnet
To download the latest SDK, you can use the following:
./buildtools/cipd install fuchsia/sdk/linux-amd64 latest -root ${SDK_DIR}
Alternatively, you can build the Garnet SDK from source using the following commands:
gn gen --args='target_cpu="x64" base_package_labels=["//garnet/packages/sdk:garnet"]' out/x64
ninja -C out/x64
gn gen --args='target_cpu="arm64" base_package_labels=["//garnet/packages/sdk:garnet"]' out/arm64
ninja -C out/arm64
./scripts/sdk/create_layout.py --manifest out/x64/gen/garnet/public/sdk/garnet_molecule.sdk --output ${SDK_DIR}
./scripts/sdk/create_layout.py --manifest out/arm64/gen/garnet/public/sdk/garnet_molecule.sdk --output ${SDK_DIR} --overlay
The Clang CMake build system supports bootstrap (aka multi-stage) builds. We use a two-stage bootstrap build for the Fuchsia Clang compiler.
The first stage compiler is a host-only compiler with some options set that are needed by the second stage. The second stage compiler is the fully optimized compiler intended to ship to users.
Setting up these compilers requires a lot of options. To simplify the configuration, the Fuchsia Clang build settings are contained in CMake cache files, which are part of the Clang codebase.
You can build a Clang toolchain for Fuchsia using the commands below. These must be run in a separate build directory, which you must create first. This directory can be a subdirectory of ${LLVM_SRCDIR}, in which case you would use LLVM_SRCDIR=.., or it can be elsewhere, with LLVM_SRCDIR set to an absolute or relative path from the build directory.
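For instance, a minimal sketch, assuming a fresh build directory named llvm-build in your home directory (the name and location are arbitrary):

mkdir ${HOME}/llvm-build && cd ${HOME}/llvm-build
LLVM_SRCDIR=${HOME}/llvm-project

From the build directory, run: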
cmake -GNinja \
  -DLLVM_ENABLE_PROJECTS="clang;lld;clang-tools-extra" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
  -DSTAGE2_FUCHSIA_SDK=${SDK_DIR} \
  -C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia.cmake \
  ${LLVM_SRCDIR}/llvm
ninja stage2-distribution
To include compiler runtimes and the C++ library for Linux, you need to use the LINUX_<architecture>_SYSROOT flag to point at the sysroot and specify the correct host triple. For example, to build the runtimes for x86_64-linux-gnu using the sysroot from your Fuchsia checkout, you would use:
-DBOOTSTRAP_LLVM_DEFAULT_TARGET_TRIPLE=x86_64-linux-gnu \
-DSTAGE2_LINUX_x86_64-linux-gnu_SYSROOT=${FUCHSIA}/buildtools/linux-x64/sysroot \
To install the compiler just built into /usr/local, you can use the following command:
ninja stage2-install-distribution
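To install somewhere other than /usr/local, the standard CMake install prefix applies; a hedged example, assuming ${HOME}/clang as the destination (this is passed at configure time, not to Ninja):

-DCMAKE_INSTALL_PREFIX=${HOME}/clang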
To use the compiler just built without installing it into a system-wide shared location, you can just refer to its build directory explicitly as ${LLVM_OBJDIR}/tools/clang/stage2-bins/bin/ (where LLVM_OBJDIR is your LLVM build directory).
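For example, to sanity-check the stage-2 compiler in place:

${LLVM_OBJDIR}/tools/clang/stage2-bins/bin/clang --version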
When developing Clang, you may want to use a setup that is more suitable for incremental development and fast turnaround time.
The simplest way to build LLVM is to use the following commands:
cmake -GNinja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  ${LLVM_SRCDIR}/llvm
ninja
You can enable additional projects using the LLVM_ENABLE_PROJECTS variable. To enable all common projects, you would use:
-DLLVM_ENABLE_PROJECTS="clang;lld;compiler-rt;libcxx;libcxxabi;libunwind"
Similarly, you can also enable some projects to be built as runtimes, which means these projects will be built using the just-built compiler rather than the host compiler:
-DLLVM_ENABLE_PROJECTS="clang;lld" \
-DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
Clang is a large project and compiler performance is absolutely critical. To reduce the build time, we recommend using Clang as a host compiler and, if possible, LLD as a host linker. These should ideally be built using LTO and, for best possible performance, also using Profile-Guided Optimization (PGO).
To set the host compiler to Clang and the host linker to LLD, you can use the following extra flags:
-DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
-DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
-DLLVM_ENABLE_LLD=ON
This assumes that ${CLANG_TOOLCHAIN_PREFIX} points to the bin directory of a Clang installation, with a trailing slash (as this Make variable is used in the Zircon build). For example, to use the compiler from your Fuchsia checkout (on Linux):
CLANG_TOOLCHAIN_PREFIX=${FUCHSIA}/buildtools/linux-x64/clang/bin/
Note that the Fuchsia Clang installation only contains the static libc++ host library (on Linux), so you will need the following two flags to avoid linker errors:
-DCMAKE_EXE_LINKER_FLAGS="-ldl -lpthread" \
-DCMAKE_SHARED_LINKER_FLAGS="-ldl -lpthread"
Most sanitizers can be used on LLVM tools by adding LLVM_USE_SANITIZER=<sanitizer name> to your CMake invocation. MSan is special, however, because some LLVM tools trigger false positives. To build with MSan support you first need to build libc++ with MSan support; you can do this in the same build. To set up a build with MSan support, first run CMake with LLVM_USE_SANITIZER=Memory and LLVM_ENABLE_LIBCXX=ON:
cmake -GNinja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
  -DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
  -DLLVM_ENABLE_PROJECTS="clang;lld;libcxx;libcxxabi;libunwind" \
  -DLLVM_USE_SANITIZER=Memory \
  -DLLVM_ENABLE_LIBCXX=ON \
  -DLLVM_ENABLE_LLD=ON \
  ${LLVM_SRCDIR}/llvm
Normally you would run Ninja at this point, but we want to build everything using a sanitized version of libc++; if we build now, the tools will use the libc++ from ${CLANG_TOOLCHAIN_PREFIX}, which isn't sanitized. So first we build just the cxx and cxxabi targets. These will be used in place of the ones from ${CLANG_TOOLCHAIN_PREFIX} when tools dynamically link against libcxx:
ninja cxx cxxabi
Now that we have a sanitized version of libc++, we can have our build use it instead of the one from ${CLANG_TOOLCHAIN_PREFIX} and then build everything:
ninja
Putting that all together:
cmake -GNinja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
  -DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
  -DLLVM_ENABLE_PROJECTS="clang;lld;libcxx;libcxxabi;libunwind" \
  -DLLVM_USE_SANITIZER=Memory \
  -DLLVM_ENABLE_LIBCXX=ON \
  -DLLVM_ENABLE_LLD=ON \
  ${LLVM_SRCDIR}/llvm
ninja cxx cxxabi
ninja
Ensure Goma is installed on your machine for faster builds; Goma accelerates builds by distributing compilation across many machines. If you have Goma installed in ${GOMA_DIR} (by default ${HOME}/goma), you can enable Goma use with the following extra flags:
-DCMAKE_C_COMPILER_LAUNCHER=${GOMA_DIR}/gomacc \
-DCMAKE_CXX_COMPILER_LAUNCHER=${GOMA_DIR}/gomacc \
-DLLVM_PARALLEL_LINK_JOBS=${LINK_JOBS}
The number of link jobs depends on the available RAM; for an LTO build you will need at least 10GB for each job.
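For instance, a rough sizing sketch, assuming a 64GB machine:

LINK_JOBS=6  # ~64GB RAM / ~10GB per LTO link job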
To build Clang with Goma, use:
ninja -j${JOBS}
Use -j100 for Goma on macOS and -j1000 for Goma on Linux. You may need to tune the job count to suit your particular machine and workload.
To verify that your compiler is available on Goma, you can set the GOMA_USE_LOCAL=0 and GOMA_FALLBACK=0 environment variables. If the compiler is not available, you will see an error.
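For example, reusing the Ninja invocation from above (with local compilation and local fallback disabled, the build fails loudly instead of silently compiling locally):

GOMA_USE_LOCAL=0 GOMA_FALLBACK=0 ninja -j${JOBS}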
When developing Clang for Fuchsia, you can also use the cache file to test the Fuchsia configuration, but run only the second stage, with LTO disabled, which gives you a faster build time suitable even for incremental development, without having to manually specify all options:
cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
  -DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
  -DLLVM_ENABLE_LTO=OFF \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
  -DLLVM_DEFAULT_TARGET_TRIPLE=x86_64-linux-gnu \
  -DLINUX_x86_64-linux-gnu_SYSROOT=${FUCHSIA}/buildtools/linux-x64/sysroot \
  -DFUCHSIA_SDK=${SDK_DIR} \
  -C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia-stage2.cmake \
  ${LLVM_SRCDIR}/llvm
ninja distribution
With Goma for even faster turnaround time:
cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
  -DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
  -DCMAKE_C_COMPILER_LAUNCHER=${GOMA_DIR}/gomacc \
  -DCMAKE_CXX_COMPILER_LAUNCHER=${GOMA_DIR}/gomacc \
  -DCMAKE_EXE_LINKER_FLAGS="-ldl -lpthread" \
  -DCMAKE_SHARED_LINKER_FLAGS="-ldl -lpthread" \
  -DLLVM_PARALLEL_LINK_JOBS=${LINK_JOBS} \
  -DLLVM_ENABLE_LTO=OFF \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
  -DLLVM_DEFAULT_TARGET_TRIPLE=x86_64-linux-gnu \
  -DLINUX_x86_64-linux-gnu_SYSROOT=${FUCHSIA}/buildtools/linux-x64/sysroot \
  -DFUCHSIA_SDK=${SDK_DIR} \
  -C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia-stage2.cmake \
  ${LLVM_SRCDIR}/llvm
ninja distribution -j${JOBS}
To run Clang tests, you can use the check-<component> target:
ninja check-llvm check-clang
You can also use check-all to run all tests, but keep in mind that this can take a significant amount of time depending on the number of projects you have enabled in your build.
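For example:

ninja check-all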
You can start building test binaries right away by using the Clang in ${LLVM_OBJDIR}/bin/ or in ${LLVM_OBJDIR}/tools/clang/stage2-bins/bin/ (the binaries end up in a different location depending on whether you did the single-stage or the two-stage build). However, if you want to use your Clang to build Fuchsia, you will need to set some additional arguments/variables.
If you are only interested in building Zircon, set the following GN build arguments:
gn gen build-zircon --args="variants = [ \"clang\" ] clang_tool_dir = \"${CLANG_DIR}\""
${CLANG_DIR} is the path to the bin directory for your Clang build, e.g. ${LLVM_OBJDIR}/bin/.
Then run fx build-zircon as usual.
For layers-above-Zircon, it should be sufficient to pass --args clang_prefix="${CLANG_DIR}" to fx set, then run fx build as usual.
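A hypothetical invocation, assuming the x64 target (the exact product/board syntax of fx set depends on your Fuchsia checkout, so adjust as needed):

fx set x64 --args "clang_prefix=\"${CLANG_DIR}\""
fx build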
Fuchsia's infrastructure has support for using a non-default version of Clang to build. Only Clang instances that have been uploaded to CIPD or Isolate are available for this type of build, so any local changes must land upstream and be built by the CI or production toolchain bots.
You will need the infra codebase and prebuilts. Directions for checkout are on the infra page.
To trigger a bot build with a specific revision of Clang, you will need the Git revision of the Clang with which you want to build. This is listed on the CIPD page, or can be retrieved using the CIPD CLI.
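For example, a hypothetical CLI query, assuming the toolchain is published as the fuchsia/clang/linux-amd64 CIPD package (check the CIPD page for the actual package name):

./buildtools/cipd describe fuchsia/clang/linux-amd64 -version latest

You can then run the following command: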
export FUCHSIA_SOURCE=<path_to_fuchsia>
export BUILDER=<builder_name>
export REVISION=<clang_revision>
export INFRA_PREBUILTS=${FUCHSIA_SOURCE}/fuchsia-infra/prebuilt/tools

cd ${FUCHSIA_SOURCE}/fuchsia-infra/recipes

${INFRA_PREBUILTS}/led get-builder "luci.fuchsia.ci:${BUILDER}" | \
  ${INFRA_PREBUILTS}/led edit-recipe-bundle -O | \
  jq '.userland.recipe_properties."$infra/fuchsia".clang_toolchain.type="cipd"' | \
  jq --arg instance "git_revision:${REVISION}" '.userland.recipe_properties."$infra/fuchsia".clang_toolchain.instance=$instance' | \
  ${INFRA_PREBUILTS}/led launch
It will provide you with a link to the BuildBucket page to track your build.
You will need to run led auth-login prior to triggering any builds, and may need to file an infra ticket to request access to run led jobs.
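For example:

${INFRA_PREBUILTS}/led auth-login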