There are 12 benchmarks that get run for every filesystem. The currently supported filesystems are Fxfs, F2fs, Memfs, and Minfs.
The IO benchmarks are all of the combinations of read/write, sequential/random, and warm/cold. Every read/write call uses an 8KiB buffer and each operation is performed 1024 times spread across an 8MiB file. The benchmarks measure how long each read/write operation takes.
The read benchmarks make `pread` calls to the file, and the write benchmarks make `pwrite` calls to the file.
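As a rough illustration of the measurement loop described above (a Python sketch with hypothetical names, not the actual benchmark code), the read variants could look like:

```python
import os
import random
import time

OP_SIZE = 8 * 1024               # 8KiB buffer per pread call
OP_COUNT = 1024                  # 1024 operations...
FILE_SIZE = OP_SIZE * OP_COUNT   # ...spread across an 8MiB file

def read_benchmark(path, sequential=True):
    """Time each 8KiB pread; returns the per-operation durations in seconds."""
    offsets = [i * OP_SIZE for i in range(OP_COUNT)]
    if not sequential:
        random.shuffle(offsets)  # random variant: same offsets, shuffled order
    timings = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for offset in offsets:
            start = time.monotonic()
            data = os.pread(fd, OP_SIZE, offset)
            timings.append(time.monotonic() - start)
            assert len(data) == OP_SIZE
    finally:
        os.close(fd)
    return timings
```

The write variants would be the same loop with `os.pwrite` in place of `os.pread`.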
The `WalkDirectoryTree` benchmarks measure how long it takes to walk a directory tree with POSIX `readdir` calls. The directory tree consists of 62 directories and 189 files and is traversed 20 times by the benchmarks. The “cold” variant of the benchmarks remounts the filesystem between each traversal and the “warm” variant does not.
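A traversal of this shape can be sketched as follows (illustrative Python; `os.scandir` uses `readdir` under the hood, and both function names are hypothetical):

```python
import os
import time

def walk_tree(root):
    """Recursively read every directory entry; returns the entry count."""
    entries = 0
    for entry in os.scandir(root):
        entries += 1
        if entry.is_dir(follow_symlinks=False):
            entries += walk_tree(entry.path)
    return entries

def walk_benchmark(root, traversals=20):
    """Time each full traversal. The real "cold" variant would remount the
    filesystem between traversals; this sketch only models the "warm" case."""
    timings = []
    for _ in range(traversals):
        start = time.monotonic()
        walk_tree(root)
        timings.append(time.monotonic() - start)
    return timings
```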
The `OpenFile` benchmark measures how long it takes for a filesystem to open a file.
The `OpenDeeplyNestedFile` benchmark expands on the `OpenFile` benchmark by placing the file several directories deep and then opening it from the root of the filesystem. Compared to the `OpenFile` benchmark, `OpenDeeplyNestedFile` captures how long it takes the filesystem to internally traverse directories.
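The open-from-root pattern can be sketched as follows (illustrative Python; the real benchmarks go through Fuchsia filesystem APIs rather than `os.open`, and `time_open` is a hypothetical helper):

```python
import os
import time

def time_open(root_fd, relative_path):
    """Time opening a file resolved relative to the filesystem root. A deeply
    nested path forces the filesystem to traverse every intermediate
    directory internally before the open completes."""
    start = time.monotonic()
    fd = os.open(relative_path, os.O_RDONLY, dir_fd=root_fd)
    elapsed = time.monotonic() - start
    os.close(fd)
    return elapsed
```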
The `StatPath` benchmark measures how long it takes to call `stat` on a path to a file.
The `GitStatus` benchmark mimics the filesystem usage pattern of running `git status`: it calls `fstatat` on all of the files in the index to see if any of them have changed. All of the `fstatat` calls happen relative to the top-level directory.
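Python's `os.stat` with a `dir_fd` argument issues `fstatat` under the hood, so the pattern can be sketched as follows (hypothetical helper and index format, not the benchmark's actual code):

```python
import os

def index_changed(top_level_fd, index_entries):
    """Stat every indexed file relative to the top-level directory fd
    (os.stat with dir_fd maps to fstatat) and report any file whose
    mtime no longer matches the value recorded in the index."""
    changed = []
    for rel_path, recorded_mtime_ns in index_entries:
        st = os.stat(rel_path, dir_fd=top_level_fd, follow_symlinks=False)
        if st.st_mtime_ns != recorded_mtime_ns:
            changed.append(rel_path)
    return changed
```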
The `PageInBlob` benchmarks measure page fault times for mmap'ed blobs.

`PageInBlobSequentialUncompressed` creates an incompressible blob and pages it in by sequentially accessing each page.

`PageInBlobSequentialCompressed` creates a compressible blob and pages it in by sequentially accessing each page.

`PageInBlobRandomCompressed` creates a compressible blob and randomly accesses 60% of its pages, mimicking the page access pattern of an executable starting up.
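Paging in an mmap'ed file by touching one byte per page can be sketched as follows (illustrative Python on a regular file; the real benchmarks map blobs served by the blob filesystem, and `page_in` is a hypothetical name):

```python
import mmap
import os
import random

PAGE = mmap.PAGESIZE

def page_in(path, sequential=True, fraction=1.0):
    """Fault in a memory-mapped file by touching one byte per page.
    sequential=False with fraction=0.6 corresponds to the random variant,
    which touches only 60% of the pages."""
    size = os.path.getsize(path)
    offsets = list(range(0, size, PAGE))
    if not sequential:
        random.shuffle(offsets)
    offsets = offsets[: max(1, int(len(offsets) * fraction))]
    f = open(path, "rb")
    m = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    touched = 0
    try:
        for offset in offsets:
            _ = m[offset]  # the first touch of each page triggers a page fault
            touched += 1
    finally:
        m.close()
        f.close()
    return touched
```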
The blob writing benchmarks measure how long it takes to write blobs. This is important for both fast updates in production and development workflows.
`WriteBlob` writes a single realistically compressible blob to a blob filesystem.

`WriteRealisticBlobs` creates several realistically compressible blobs of varying sizes and concurrently writes 2 blobs at a time to a blob filesystem, which is intended to mimic how pkg-cache writes blobs. The benchmark measures how long it takes to write all of the blobs.
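The concurrent write pattern can be sketched as follows (illustrative Python with hypothetical names; the real benchmark writes through blob filesystem APIs, not to a plain directory):

```python
import concurrent.futures
import os

def write_blob(dir_path, name, data):
    """Write one blob and fsync it so the data is durable before returning."""
    path = os.path.join(dir_path, name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)

def write_blobs_concurrently(dir_path, blobs, concurrency=2):
    """Write all blobs, keeping at most `concurrency` writes in flight."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(write_blob, dir_path, name, data)
                   for name, data in blobs]
        for future in futures:
            future.result()  # propagate any write error
```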
The `OpenAndGetVmo` benchmarks measure how long it takes to open a package and get the VMO for a blob within it. Notably, `OpenAndGetVmo` goes through the package directory as opposed to directly opening the blob through Blobfs/Fxblob, and thus allows us to more accurately measure open times via SWD.
`OpenAndGetVmoMetafarBlob` creates and opens a meta file (a “meta/” prefix in the resource path).

`OpenAndGetVmoContentBlob` creates and opens a content blob (a non-“meta/” prefix in the resource path, e.g. “data/”).
At the beginning of most benchmarks is a setup phase that creates files within the filesystem. Simply closing all handles to those files doesn't guarantee that the filesystem will immediately clear all caches related to those files. If the caches aren't cleared, then the benchmark may only ever hit cached (warm) data. “Cold” (uncached) read benchmarks remount the Fuchsia filesystem before doing their read operations. Remounting the filesystem guarantees that any data related to the file that isn't normally cached gets dropped.
When cold writing to memfs, the kernel needs to allocate pages for the VMO backing the file as the pages are used. This causes cold writes to be slower than warm writes, for which the pages are already allocated.
The Fuchsia Filesystem Benchmarks use a custom framework for timing filesystem operations. Filesystems hold state external to the `write` operations being benchmarked, which can lead to drastically different timings between consecutive operations. For other performance tests, we want to treat the initial one or more iterations as warm-up iterations and drop their timings. (For example, for some IPC performance tests, the initial iteration doesn't complete until a subprocess has finished starting up, making it much slower than the later iterations.) These storage tests differ in that we don't want to drop the initial iterations' timings.
For example, on the first `read` operation to a file in Minfs, Minfs reads the entire file into memory, and each subsequent `read` is served from memory. The warm-up phase of fuchsia-criterion would hide the extremely slow first `read`.
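The keep-every-iteration approach can be sketched as follows (illustrative Python; the actual framework is not written this way):

```python
import time

def run_benchmark(operation, iterations):
    """Time every iteration and keep them all. For these storage benchmarks
    the slow first iteration reflects real filesystem behavior, so no
    warm-up timings are dropped."""
    timings = []
    for _ in range(iterations):
        start = time.monotonic()
        operation()
        timings.append(time.monotonic() - start)
    return timings
```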
```
fx test fuchsia-pkg://fuchsia.com/storage-benchmarks#meta/storage-benchmarks.cm
```
```
touch /tmp/blk.bin
truncate -s 256M /tmp/blk.bin
fdisk /tmp/blk.bin
# Press 'g' to create a GPT partition table, and then 'w' to save
fx qemu -kN -- -drive file=/tmp/blk.bin,index=0,media=disk,cache=directsync
```
The set of benchmarks and filesystems can be filtered with the