Naming style for performance metrics

This page describes the naming style that should be used for performance metrics reported via fuchsiaperf.json files.

There are two parts to each metric name (see the example sketch after this list):

  • Test suite (test_suite field): This should be in lower case, with components separated by dots, and should start with “fuchsia.”, for example “fuchsia.microbenchmarks”.

  • Test name (label field): This should usually be a CamelCaseName (rather than a snake_case_name). Components of the name should be separated with slashes, for example “Syscall/Null”.

    If the metric comes from a parameterized test -- i.e. a test that is instantiated multiple times with different parameter values -- the parameter value can be included as a slash-separated component, for example “Memcpy/100000bytes”.

    For multi-step tests using the perftest C++ library, the metric name contains a step name that follows a period, for example “Event/Replace.replace_handle”. The step name should be in snake case (lower case with underscores).
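The following sketch shows how names in this style might appear in a fuchsiaperf.json file. The test_suite and label fields are the two parts described above; the “unit” and “values” fields and the numbers are illustrative assumptions for this sketch, so check the fuchsiaperf format reference for the authoritative schema.

```python
import json

# Illustrative sketch only: these entries follow the naming style described
# above. The exact fuchsiaperf.json schema (e.g. the "unit" and "values"
# fields) and the numbers are assumptions made for this example.
results = [
    {
        # Test suite: lower case, dot-separated, starting with "fuchsia.".
        "test_suite": "fuchsia.microbenchmarks",
        # Test name: CamelCase components separated by slashes.
        "label": "Syscall/Null",
        "unit": "nanoseconds",
        "values": [1200.0, 1185.0, 1210.0],  # placeholder measurements
    },
    {
        "test_suite": "fuchsia.microbenchmarks",
        # Parameterized test: the parameter value is a slash-separated component.
        "label": "Memcpy/100000bytes",
        "unit": "nanoseconds",
        "values": [54000.0],  # placeholder measurement
    },
    {
        "test_suite": "fuchsia.microbenchmarks",
        # Multi-step test: a snake_case step name follows the period.
        "label": "Event/Replace.replace_handle",
        "unit": "nanoseconds",
        "values": [800.0, 790.0],  # placeholder measurements
    },
]

with open("results.fuchsiaperf.json", "w") as f:
    json.dump(results, f, indent=2)
```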

This two-part namespace structure is inherited from Chromeperf.

Avoid adding “benchmarks” or “test” as a suffix in names, because such a suffix is redundant.

Some trace-based performance tests use a different naming convention: the test suite field is used as the name of the test (following the style described above, e.g. “fuchsia.flatland_latency.view-provider-example”), and the test name field is used as the name of a metric generated by that test (following looser naming constraints, e.g. “CPU Load”, “flatland_vsync_latency”).
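Under this convention, an entry might look like the following sketch (again, the “unit” and “values” fields and the numbers are placeholders, not real results):

```python
# Sketch of an entry under the trace-based convention: the test suite field
# names the test, and the label field names a metric generated by that test.
trace_based_entry = {
    "test_suite": "fuchsia.flatland_latency.view-provider-example",
    "label": "flatland_vsync_latency",  # looser naming constraints apply here
    "unit": "milliseconds",             # placeholder unit
    "values": [8.3, 8.5, 8.1],          # placeholder measurements
}
```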

Some weak naming constraints are enforced for all performance tests by the performance.dart library (in Dart) and by the perf_publish library (in Python).
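As a rough illustration, a check for the guidelines on this page could be written along the following lines. This is a hypothetical helper, not the actual logic in performance.dart or perf_publish, and the regular expressions are assumptions about the style described above.

```python
import re

# Hypothetical naming check, not the actual constraints enforced by
# performance.dart or perf_publish.
TEST_SUITE_RE = re.compile(r"^fuchsia(\.[a-z0-9_-]+)+$")
# CamelCase components separated by slashes, with an optional
# ".snake_case_step" suffix for multi-step tests.
LABEL_RE = re.compile(r"^[A-Z][A-Za-z0-9]*(/[A-Za-z0-9]+)*(\.[a-z][a-z0-9_]*)?$")


def follows_naming_style(test_suite: str, label: str) -> bool:
    """Returns True if the metric name follows the style described above."""
    return bool(TEST_SUITE_RE.match(test_suite)) and bool(LABEL_RE.match(label))


assert follows_naming_style("fuchsia.microbenchmarks", "Syscall/Null")
assert follows_naming_style("fuchsia.microbenchmarks", "Event/Replace.replace_handle")
assert not follows_naming_style("fuchsia.microbenchmarks", "syscall_null")
```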

Current list of metric names

To see the current list of metrics, open a recent successful build from the terminal.x64 CI builder and examine the output under the step named “summary perf test results” (the “stdout” link).

The metric names currently in use do not always follow these naming guidelines. No large-scale renaming of metrics has been done so far because Chromeperf does not handle renaming well: a rename breaks continuity, so Chromeperf cannot graph a metric’s results across the rename and might miss regressions that occur close to it. A large-scale renaming could be done once this limitation has been addressed.