
Benchmark Suite

This document describes how to run the benchmark suite and interpret its output.

Running Benchmarks

To run all benchmarks, execute the run-benchmarks script:

./run-benchmarks

To run one or more specific benchmarks, provide each benchmark name with its own flag:

./run-benchmarks --benchmark <benchmark_name_1> --benchmark <benchmark_name_2>
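
For example, the two sample benchmarks in this directory can be run in a single invocation (this assumes each registers under its file name; check the name= value in sample_benchmark.py for the actual name):

./run-benchmarks --benchmark sample_benchmark --benchmark sample_benchmark2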

Creating a Benchmark

To create a new benchmark, create a new Python file in the benchmark directory and define a class that inherits from Benchmark (imported from base.py):

from base import Benchmark

class MyBenchmark(Benchmark):
    def __init__(self):
        super().__init__(
            name="my_benchmark",
            description="A description of my benchmark.",
            expected_to_pass=True,
            compare=["other_benchmark"]
        )

    def run(self):
        # Benchmark code goes here.
        pass
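
For a more concrete picture, here is a minimal runnable sketch. The class name, benchmark name, and pass/fail convention are assumptions: it presumes run() fails by raising an exception and passes by returning normally, and that compare may be left empty. The actual contract lives in base.py, so verify it there.

import random

from base import Benchmark

class SortBenchmark(Benchmark):
    def __init__(self):
        super().__init__(
            name="sort_benchmark",         # Hypothetical name for this example.
            description="Sorts one million random integers.",
            expected_to_pass=True,
            compare=[]                     # Assumes an empty compare list is allowed.
        )

    def run(self):
        # Anything done here presumably counts toward the reported time,
        # including this setup step.
        data = [random.randint(0, 1_000_000) for _ in range(1_000_000)]
        data.sort()
        # Assumed convention: raise to fail, return normally to pass.
        if any(a > b for a, b in zip(data, data[1:])):
            raise AssertionError("result is not sorted")

Saving this as sort_benchmark.py in the benchmark directory should make it runnable via ./run-benchmarks --benchmark sort_benchmark, assuming the runner discovers benchmarks by scanning the directory.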

Understanding the Output

The script will print a report with the following information for each benchmark:

  • Name: The name of the benchmark.
  • Description: The description of the benchmark.
  • Time taken: The time it took to run the benchmark.
  • Pass/Fail status: Whether the benchmark passed or failed.
  • Expected: Whether the pass/fail result matched the benchmark's expected_to_pass setting.