This framework has been designed to evaluate and compare the relative performance of memory function implementations on a particular host.

It will also be used to track implementation performance over time.
Python 2 being deprecated, it is advised to use Python 3.

Then make sure to have `matplotlib`, `scipy` and `numpy` set up correctly:

```shell
apt-get install python3-pip
pip3 install matplotlib scipy numpy
```

You may need `python3-gtk` or a similar package for displaying benchmark results.
To get good reproducibility it is important to make sure that the system runs in `performance` mode. This is achieved by running:

```shell
cpupower frequency-set --governor performance
```
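Before benchmarking, you can verify that the governor change took effect. A minimal sketch, assuming the typical Linux cpufreq sysfs layout (the `current_governor` helper is ours, not part of the framework):

```python
# Read the active CPU frequency governor from sysfs (typical Linux layout).
from pathlib import Path

def current_governor(cpu=0):
    """Return the scaling governor for `cpu`, or None if unavailable."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
    return path.read_text().strip() if path.exists() else None

print(current_governor())  # expect "performance" after the cpupower command
```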
## `memcpy` benchmark

The following commands will run the benchmark and display a 95th percentile confidence interval curve of time per copied byte. It also reports host information and the benchmarking configuration.
```shell
cd llvm-project
cmake -B/tmp/build -Sllvm -DLLVM_ENABLE_PROJECTS='clang;clang-tools-extra;libc' -DCMAKE_BUILD_TYPE=Release -G Ninja
ninja -C /tmp/build display-libc-memcpy-benchmark-small
```
The display target will attempt to open a window on the machine where you're running the benchmark. If this does not work for you, you may want to use `render` or `run` instead, as detailed below.
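To give an idea of the statistic being plotted, here is an illustrative sketch of a 95% confidence interval for mean time-per-byte, using made-up numbers and a normal approximation; this is not the framework's actual code:

```python
# Compute a 95% confidence interval for the mean of "ns per copied byte"
# samples, using the normal approximation: mean +/- 1.96 * standard error.
import math
import random
import statistics

random.seed(0)
# Fake time-per-byte measurements (nanoseconds per byte), for illustration.
samples = [random.gauss(0.03, 0.005) for _ in range(1000)]

mean = statistics.fmean(samples)
sem = statistics.stdev(samples) / math.sqrt(len(samples))  # standard error
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean={mean:.4f} ns/B, 95% CI=({low:.4f}, {high:.4f})")
```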
The benchmarking process occurs in two steps:

1. Running the benchmark and producing a `json` file
2. Displaying (or rendering) the `json` file

Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`:

- `action` is one of:
  - `run`, runs the benchmark and writes the `json` file
  - `display`, displays the graph on screen
  - `render`, renders the graph on disk as a `png` file
- `function` is one of: `memcpy`, `memcmp`, `memset`
- `configuration` is one of: `small`, `big`
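As a quick sanity check, the naming scheme above can be expanded programmatically (illustrative Python, not part of the framework):

```python
# Enumerate every target name produced by the
# <action>-libc-<function>-benchmark-<configuration> scheme.
actions = ["run", "display", "render"]
functions = ["memcpy", "memcmp", "memset"]
configurations = ["small", "big"]

targets = [
    f"{action}-libc-{function}-benchmark-{configuration}"
    for action in actions
    for function in functions
    for configuration in configurations
]

print(len(targets))  # 3 actions x 3 functions x 2 configurations = 18
```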
Using a profiler to observe size distributions for calls into libc functions, it was found that most operations act on a small number of bytes.
Function | % of calls with size ≤ 128 | % of calls with size ≤ 1024
---------|----------------------------|----------------------------
memcpy   | 96%                        | 99%
memset   | 91%                        | 99.9%
memcmp¹  | 99.5%                      | ~100%
Benchmarking configurations come in two flavors:

- `small`
  - sizes up to `1KiB`, representative of normal usage
  - data kept in the `L1` cache to prevent measuring the memory subsystem
- `big`
  - sizes up to `32MiB` to test large operations

¹ The size refers to the size of the buffers to compare and not the number of bytes until the first difference.
It is possible to merge several `json` files into a single graph. This is useful to compare implementations.

In the following example we superpose the curves for `memcpy`, `memset` and `memcmp`:
```shell
> make -C /tmp/build run-libc-memcpy-benchmark-small run-libc-memcmp-benchmark-small run-libc-memset-benchmark-small
> python libc/utils/benchmarks/render.py3 /tmp/last-libc-memcpy-benchmark-small.json /tmp/last-libc-memcmp-benchmark-small.json /tmp/last-libc-memset-benchmark-small.json
```
Useful `render.py3` flags:

- `--output=/tmp/benchmark_curve.png` saves the graph to disk as a `png` file
- `--headless` prevents the graph from being displayed on screen

To learn more about the design decisions behind the benchmarking framework, have a look at the RATIONALE.md file.