Netstack benchmarks are a set of test components that fill a role similar to automated testing, but for performance: they give us the confidence to change and improve the network stack while staying aware of the performance impact of those changes.
Most netstack benchmarks can be run either as regular hermetic tests or as end-to-end performance tests using Fuchsia's perftest infrastructure. The former is recommended for local development and iteration when possible because it is simpler: it is compatible with any product configuration and does not require adding additional dependencies to your package set.
To run the benchmarks as end-to-end tests, pass the `--e2e` flag to `fx test` and specify one of the tests defined in `//src/tests/end_to_end/perf/test`. For example:

```
fx test --e2e netstack_benchmarks_test
```
Make sure that your package universe contains the `//src/tests/end_to_end/perf:test` group so that the required dependencies are available. Also note that if you are running `netstack_iperf_test` or `netstack_benchmarks_test`, you will need to run on the `terminal` product so that SL4F is available. For example, your `fx set` line might look like the following:

```
fx set terminal.x64 --with //src/tests/end_to_end/perf:test
```
These focus on measuring the duration of specific socket-related system calls issued from the benchmarking binary for TCP, UDP, and ICMP sockets over loopback. They run against Netstack2, Netstack3, and Netstack2 with Fast UDP enabled.
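As a rough illustration of what this kind of microbenchmark does (this is not the actual benchmark binary; the function name, payload size, and iteration count are all illustrative), the following sketch times `send`/`recv` pairs on a UDP socket over loopback:

```python
import socket
import time

def bench_udp_loopback(iters=1000):
    """Time send/recv pairs on a loopback UDP socket."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.connect(rx.getsockname())

    payload = b"x" * 64
    start = time.perf_counter()
    for _ in range(iters):
        tx.send(payload)  # exercises the send path
        rx.recv(2048)     # exercises the receive path
    elapsed = time.perf_counter() - start

    tx.close()
    rx.close()
    return elapsed / iters  # mean seconds per send/recv pair

print(f"{bench_udp_loopback() * 1e6:.1f} us per send/recv pair")
```

The real benchmarks measure individual system calls with much finer instrumentation, but the overall shape is the same: a tight loop over a socket operation, reporting per-iteration latency.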
The loopback socket benchmarks also run against the “fake netstack”, a stubbed-out netstack that implements the minimum of the `fuchsia.posix.socket` API possible, in order to measure the overhead of the API structure itself.
These benchmarks measure the same operations as the loopback socket benchmarks, except that two netstacks are involved, connected via a network-tun device pair. This means the latency numbers also include time spent in the netstack's device layer, which is absent from the loopback benchmarks. They run against Netstack2 and Netstack3.
These benchmarks measure the time to serialize and deserialize the metadata that is sent with each packet in Fast UDP.
These benchmarks measure the round trip latency for packet buffer transmission between a client and a device, using an in-process fake network device.
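The structure of such a round-trip measurement can be sketched as follows, with a thread standing in for the in-process fake device (names and the echo behavior are illustrative; the real benchmarks exchange packet buffers over the netdevice FIFO protocol):

```python
import queue
import threading
import time

def fake_device(tx_q, rx_q):
    """In-process fake device: echoes each packet buffer back."""
    while True:
        buf = tx_q.get()
        if buf is None:  # shutdown sentinel
            return
        rx_q.put(buf)

def bench_round_trip(iters=1000):
    """Measure mean client <-> fake-device round-trip time."""
    tx_q, rx_q = queue.Queue(), queue.Queue()
    dev = threading.Thread(target=fake_device, args=(tx_q, rx_q))
    dev.start()

    buf = bytes(64)
    start = time.perf_counter()
    for _ in range(iters):
        tx_q.put(buf)  # client -> device
        rx_q.get()     # device -> client
    elapsed = time.perf_counter() - start

    tx_q.put(None)
    dev.join()
    return elapsed / iters  # mean seconds per round trip

print(f"{bench_round_trip() * 1e6:.1f} us per round trip")
```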
Netstack3 defines microbenchmarks internally that measure the following:
These benchmarks use iperf3 to measure throughput, CPU usage, jitter, and packet loss for TCP and UDP traffic. They run over both loopback (fully on-device) and over Ethernet, with the server running on Fuchsia and the client on the host. They exercise both single-flow and multi-flow scenarios.
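For intuition, the throughput half of what iperf3 measures can be approximated with a bulk transfer over a loopback TCP connection (a simplified sketch, not how the actual iperf-based benchmarks are implemented):

```python
import socket
import threading
import time

def bench_tcp_throughput(total_bytes=8 * 1024 * 1024):
    """Measure bulk TCP throughput over loopback, iperf-style."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    received = 0
    def server():
        nonlocal received
        conn, _ = srv.accept()
        while chunk := conn.recv(65536):  # drain until client closes
            received += len(chunk)
        conn.close()

    t = threading.Thread(target=server)
    t.start()

    cli = socket.create_connection(srv.getsockname())
    payload = b"x" * 65536
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    t.join()  # wait for the server to drain everything
    elapsed = time.perf_counter() - start

    srv.close()
    return received / elapsed  # bytes per second

print(f"{bench_tcp_throughput() / 1e6:.0f} MB/s over loopback")
```

The real benchmarks additionally capture CPU usage, jitter, and packet loss, and run the client on the host against a Fuchsia server over Ethernet as well as on-device over loopback.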
These benchmarks measure the resource usage (memory consumption and handle counts) of a hermetic netstack process while exercising it with a set of workloads. Metrics are emitted for baseline usage on netstack startup, initial usage after one run of a given workload, peak usage while running a workload repeatedly, and final increase in usage from the baseline after a workload is complete.
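The four metric kinds described above (baseline, initial, peak, final delta) can be sketched with Python's `tracemalloc` standing in for the real memory accounting (the workload and all names here are hypothetical; the actual benchmarks observe a separate hermetic netstack process, not their own allocations):

```python
import tracemalloc

def run_workload():
    # Hypothetical workload standing in for, e.g., opening and
    # exercising many sockets against the netstack under test.
    return [bytes(1024) for _ in range(1000)]

def measure(workload, repeats=5):
    """Report baseline, initial, peak, and final memory deltas."""
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()  # at "startup"
    buf = workload()
    initial, _ = tracemalloc.get_traced_memory()   # after one run
    for _ in range(repeats):
        buf = workload()                           # repeated runs
    _, peak = tracemalloc.get_traced_memory()      # peak while running
    del buf
    final, _ = tracemalloc.get_traced_memory()     # after completion
    tracemalloc.stop()
    return {
        "baseline": baseline,
        "initial_delta": initial - baseline,
        "peak": peak,
        "final_delta": final - baseline,
    }

print(measure(run_workload))
```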