This guide will walk you through the process of writing a benchmark, running it at every commit, and automatically tracking the results in the Performance Dashboard.
Today we support automating benchmarks for these projects:
Fuchsia benchmarks are command-line executables that produce a JSON results file. The executable must meet the following criteria:
Your benchmark executable should be built into a Fuchsia package. For more information, please read the Fuchsia Build overview.
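For illustration, here is a minimal sketch of what such a benchmark executable could look like, written as a shell script. The output path, test names, and JSON fields used below are assumptions for illustration only; match the results-file format that your layer's benchmark scripts and the dashboard actually expect.

```shell
#!/bin/sh
# Hypothetical minimal benchmark executable.  It takes an output path as its
# first argument and writes a JSON results file there.  The JSON fields shown
# (label, test_suite, unit, values) are assumptions; use whatever schema your
# layer's benchmark tooling expects.
OUT_FILE="$1"

# ... run the actual measurements here ...

cat > "${OUT_FILE}" <<EOF
[
  {
    "label": "Example/PutValue",
    "test_suite": "fuchsia.example",
    "unit": "milliseconds",
    "values": [12.4, 12.9, 13.1]
  }
]
EOF
```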
We have shell scripts that run all of a layer's benchmarks at every commit to that layer.
These shell scripts are written using a helper library called benchmarking. Add a command to the appropriate script to execute your test. See the existing commands for examples.
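As a rough sketch, an entry added to such a script might look like the following. The helper name run_benchmark, the package name, and the output path are hypothetical; copy the invocation style of the existing entries in the script you are editing.

```shell
# Hypothetical entry: run the packaged benchmark and have it write its JSON
# results into the shared output directory, alongside the other benchmarks'
# results files.
run_benchmark \
    /pkgfs/packages/my_benchmark/0/bin/my_benchmark \
    "${OUT_DIR}/my_benchmark.json"
```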
At this point, you're ready to build Fuchsia and test that your benchmark runs successfully. Run the following in a shell:
```shell
jiri update -gc

# Benchmarks are not included in production packages, so build with the
# kitchen_sink bundle or they will not be built.
fx set core.<board> --with //bundles:kitchen_sink

fx build && fx emu
```
Once the Fuchsia shell is loaded:
```shell
# Run just your benchmark
run my_benchmark [options]

# Run all benchmarks for $layer
/pkgfs/packages/${layer}_benchmarks/0/bin/benchmarks.sh /tmp
```
If no errors occurred, you should see your benchmark's output file in /tmp, along with the results files of other benchmarks.
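For example, from the Fuchsia shell you can list the results files directly (this assumes the benchmarks wrote their JSON output under /tmp, as in the commands above):

```shell
# List the JSON results files produced by the benchmark run.
ls /tmp/*.json
```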
To track your benchmark's results over time, please see the Performance Dashboard User Guide.
Note: We do not yet have a user guide for the Performance Dashboard Version 2.