bin/zircon_benchmarks/README.md

Zircon Benchmarks

Microbenchmarks for the Zircon kernel and services.

There are three ways to run zircon_benchmarks:

  • gbenchmark mode: This uses the Google benchmark library (gbenchmark). This is the default.

    For this, run zircon_benchmarks with no arguments, or with arguments accepted by gbenchmark (such as --help).

    By default, this mode is quite slow to run, because gbenchmark uses a high default value for its --benchmark_min_time setting. You can speed up gbenchmark by passing --benchmark_min_time=0.01.

    Note: gbenchmark's use of statistics is not very sophisticated, so this mode might not produce consistent results across runs for some benchmarks. Furthermore, gbenchmark does not output any measures of variability (such as standard deviation). This limits the usefulness of gbenchmark for detecting performance regressions.
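
    To see why a high --benchmark_min_time makes this mode slow, note that gbenchmark keeps growing the iteration count until the total measured time reaches the minimum. The following is a minimal self-contained sketch of that policy (not gbenchmark's actual implementation; SpinOnce and RunUntilMinTime are hypothetical names used here for illustration):

    ```cpp
    #include <cassert>
    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-in for a benchmarked operation.
    static void SpinOnce() {
        volatile int x = 0;
        for (int i = 0; i < 1000; ++i) x += i;
    }

    // Sketch of a min-time policy: keep doubling the iteration count until
    // the total measured time reaches min_time_seconds. A large min_time
    // forces many iterations, which is why lowering it (e.g. to 0.01)
    // makes a run finish sooner.
    static long RunUntilMinTime(double min_time_seconds) {
        long iterations = 1;
        for (;;) {
            auto start = std::chrono::steady_clock::now();
            for (long i = 0; i < iterations; ++i) SpinOnce();
            std::chrono::duration<double> elapsed =
                std::chrono::steady_clock::now() - start;
            if (elapsed.count() >= min_time_seconds) return iterations;
            iterations *= 2;  // Not enough time elapsed; try more iterations.
        }
    }

    int main() {
        long fast = RunUntilMinTime(0.001);
        long slow = RunUntilMinTime(0.05);
        std::printf("iterations at min_time=0.001: %ld\n", fast);
        std::printf("iterations at min_time=0.05:  %ld\n", slow);
        // A larger minimum time requires at least as many iterations.
        assert(slow >= fast);
        return 0;
    }
    ```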

  • fbenchmark mode (Fuchsia benchmarks): This mode records the time taken by each run of each benchmark, allowing further analysis, which is useful for detecting performance regressions. This uses the test runner code in test_runner.cc.

    For this, run zircon_benchmarks --fbenchmark_out=output.json. The result data will be written to output.json. This uses the JSON output format described in the Fuchsia Tracing Usage Guide.

    Options:

    • --fbenchmark_runs=N: The number of times to run each benchmark. The default is 1000.

    • --fbenchmark_filter=REGEX: A regular expression that specifies a subset of benchmarks to run. By default, all the benchmarks are run.

    • --fbenchmark_enable_tracing: Enable Fuchsia tracing, i.e. enable registering as a TraceProvider. This is off by default because the TraceProvider gets registered asynchronously on a background thread, and that activity could introduce noise to the benchmarks.

    • --fbenchmark_startup_delay=N: Wait for N seconds on startup, after registering a TraceProvider. This allows working around a race condition where tracing misses initial events from newly-registered TraceProviders (see TO-650).

    Note: Not all of the benchmarks have been converted to run in this mode. (TODO(TO-651): Convert the remaining tests.) Those that have been converted will run in both fbenchmark mode and gbenchmark mode.
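
    The core idea of this mode — run each benchmark a fixed number of times, keep every individual run's duration rather than only an aggregate, and select benchmarks by regular expression — can be sketched as follows. This is not the code in test_runner.cc; the function and benchmark names are hypothetical, standing in for the behavior of --fbenchmark_runs and --fbenchmark_filter:

    ```cpp
    #include <cassert>
    #include <chrono>
    #include <functional>
    #include <map>
    #include <regex>
    #include <string>
    #include <vector>

    // Run each benchmark whose name matches |filter| for |runs| iterations,
    // recording every run's duration (in nanoseconds) for later analysis.
    std::map<std::string, std::vector<double>> RunBenchmarks(
        const std::map<std::string, std::function<void()>>& benchmarks,
        int runs, const std::string& filter) {
        std::regex re(filter);
        std::map<std::string, std::vector<double>> results;
        for (const auto& [name, fn] : benchmarks) {
            if (!std::regex_search(name, re)) continue;  // Skip non-matches.
            std::vector<double> times;
            times.reserve(runs);
            for (int i = 0; i < runs; ++i) {
                auto start = std::chrono::steady_clock::now();
                fn();
                std::chrono::duration<double, std::nano> d =
                    std::chrono::steady_clock::now() - start;
                times.push_back(d.count());  // One sample per run, not a mean.
            }
            results[name] = std::move(times);
        }
        return results;
    }

    int main() {
        std::map<std::string, std::function<void()>> benchmarks = {
            {"Syscall/Null", [] {}},
            {"Channel/Write", [] {}},
        };
        auto results = RunBenchmarks(benchmarks, 1000, "^Channel/");
        assert(results.size() == 1);  // Only Channel/Write matches the filter.
        assert(results["Channel/Write"].size() == 1000);  // One sample per run.
        return 0;
    }
    ```

    Keeping the full list of per-run times is what makes regression detection possible: a later analysis step can compute variability measures (such as standard deviation) that gbenchmark mode does not report.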

  • Test-only mode: This runs on the bots via runtests, and it just checks that each benchmark still works. It runs quickly, because it performs only a small number of iterations of each benchmark. It does not print any performance information.

    For this, run /system/test/zircon_benchmarks_test.
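
    Conceptually, this mode is just a smoke test: run each benchmark body a handful of times, check that it completes, and discard all timing data. A minimal sketch of that idea (with hypothetical names, not the actual runtests harness):

    ```cpp
    #include <cassert>
    #include <functional>
    #include <map>
    #include <string>

    // Run every benchmark for a small, fixed number of iterations purely to
    // verify that it still executes successfully. No timing is recorded and
    // no performance information is reported.
    bool SmokeTest(
        const std::map<std::string, std::function<bool()>>& benchmarks,
        int iterations) {
        for (const auto& [name, fn] : benchmarks) {
            for (int i = 0; i < iterations; ++i) {
                if (!fn()) return false;  // A benchmark body failed.
            }
        }
        return true;
    }

    int main() {
        std::map<std::string, std::function<bool()>> benchmarks = {
            {"Event/Signal", [] { return true; }},
            {"Fifo/Write", [] { return true; }},
        };
        assert(SmokeTest(benchmarks, 3));
        return 0;
    }
    ```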