storage/internal/benchmarks/README.md

# go-bench-gcs

This is not an officially supported Google product

## Run example

The following runs 1000 iterations on files from 512 KiB to 2 GiB in the background, sending output to `out.log`:

```
go run . -project {PROJECT_ID} -samples 1000 -object_size=524288..2147483648 -o {RESULTS_FILE_NAME} &> out.log &
```

CLI parameters

ParameterDescriptionPossible valuesDefault
-projectprojectIDa project ID*
-bucketbucket supplied for benchmarks
must be initialized
any GCS regionwill create a randomly named bucket
-bucket_regionbucket region for benchmarks
ignored if bucket is explicitly supplied
any GCS regionUS-WEST1
-ofile to output results to
if empty, will output to stdout
any file pathstdout
-output_typeoutput results as csv records or cloud monitoringcsv, cloud-monitoringcloud-monitoring
-samplesnumber of samples to reportany positive integer8000
-workersnumber of goroutines to run at once; set to 1 for no concurrencyany positive integer16
-clientstotal number of Storage clients to be used;
if Mixed APIs, then x3 the number are created
any positive integer1
-apiwhich API to useJSON: use JSON
XML: use JSON to upload and XML to download
GRPC: use GRPC without directpath enabled
Mixed: select an API at random for each object
DirectPath: use GRPC with directpath
Mixed
-object_sizeobject size in bytes; can be a range min..max
for workload 6, a range will apply to objects within a directory
any positive integer1 048 576 (1 MiB)
-range_read_sizesize of the range to read in bytesany positive integer
<=0 reads the full object
0
-minimum_read_offsetminimum offset for the start of the range to be read in bytesany integer >00
-maximum_read_offsetmaximum offset for the start of the range to be read in bytesany integer >00
-read_buffer_sizeread buffer size in bytesany positive integer4096 for HTTP
32768 for GRPC
-write_buffer_sizewrite buffer size in bytesany positive integer4096 for HTTP
32768 for GRPC
-min_chunksizeminimum ChunkSize in bytesany positive integer16 384 (16 MiB)
-max_chunksizemaximum ChunkSize in bytesany positive integer16 384 (16 MiB)
-connection_pool_sizeGRPC connection pool sizeany positive integer4
-force_garbage_collectionwhether to force garbage collection
before every write or read benchmark
true or false (present/not present)false
-timeouttimeout (maximum time running benchmarks)
the program may run for longer while it finishes running processes
any time.Duration1h
-timeout_per_optimeout on a single upload or downloadany time.Duration5m
-workload1 will run a w1r3 (write 1 read 3) benchmark
6 will run a benchmark uploading and downloading (once each)
a single directory with -directory_num_objects number of files (no subdirectories)
9** will run a benchmark that does continuous reads on a directory with directory_num_objects
1, 6, 91
-directory_num_objectstotal number of objects in a directory (directory will only contain files,
no subdirectories); only applies to workload 6 and 9
any positive integer1000
-warmuptime to spend warming the clients before running benchmarks
w1r3 benchmarks will be run for this duration without recording any results
this is compatible with all workloads; however, w1r3 benchmarks are done regardless of workload
the warmups run with the number of logical CPUs usable by the current process
any time.Duration0s

\* required values

\*\* Note that this workload is experimental and will not work under certain conditions. A non-comprehensive list of notes on workload 9:

  • the output type must be `cloud-monitoring`
  • it continues reading until the timeout is reached, so `-samples` should be set to `1`
  • `-directory_num_objects` must be larger than `-workers`