radargun

Easy-to-use benchmarking utility for Node.js. Provides high-precision execution-time metrics: average, min, and max.

Table of Contents

  • Usage
  • API
  • CLI
  • Benchmarking Very Fast Functions
  • CI Pipelines
  • Custom Reporter

Usage

  1. Install radargun as a dev dependency in your app:

    $ npm install radargun -D
  2. Create a .bench.js script with a series of functions to benchmark:

    benchmark/primes.bench.js

    The bench() function is added to the global scope by radargun.

    bench(
      [
        function sieveOfEratosthenes() {
          // calculate primes to 10000
        },
        function sieveOfSundaram() {
          // calculate primes to 10000
        }
      ],
      { runs: 1000 }
    );
  3. Execute your script with radargun:

    $ radargun benchmark/primes.bench.js
  4. Read the results that are printed to the terminal:

    ┌──────────────────────┬────────────┬────────────┬────────────┐
    │ NAME                 │ AVG        │ MIN        │ MAX        │
    ╞══════════════════════╪════════════╪════════════╪════════════╡
    │ sieveOfEratosthenes  │ 1492392 ns │ 1337675 ns │ 5455338 ns │
    ├──────────────────────┼────────────┼────────────┼────────────┤
    │ sieveOfSundaram      │ 802019 ns  │ 688149 ns  │ 3883506 ns │
    └──────────────────────┴────────────┴────────────┴────────────┘

API

bench(functions [, options])

Runs a benchmark analysis for a given array of functions, and generates a report.

Parameters

  • functions: (required) an array of functions for which to gather performance metrics. Each function generates its own set of metrics. Every entry in the array must be a function or an object with the following keys:

    • bind: (optional) the this context to use while executing the function. The default is null.

    • fn: (required) a reference to the function to execute.

    • label: (optional) the label to use for this function in the generated report. The default is the name of the function.

    • params: (optional) an array of values to pass into the function.

  • options: (optional) an object containing keys to customize the behavior of bench(). This object can have the following keys:

    • reporter: (optional) a function that is responsible for generating the report. The default is the built-in reporter. See Custom Reporter for more information.

    • runs: (optional) the number of times to execute each function. The default is 10.

    • stream: (optional) a writable stream to which status and report information is written. The default is process.stdout.

    • thresholds: (optional) an object that sets performance thresholds for a target function. A failure to stay within these thresholds causes the radargun utility to exit with code 3.

      Each metric threshold is expressed as the fractional performance difference between the target function and each other function, computed as 1 - target/other, which the target function must meet or exceed. For example, a threshold of 0.5 requires the target's metric to be at most half of every other function's; see the worked sketch after this list.

      • avg: (optional) a number specifying the percentage threshold for "avg." If omitted, the target's "avg" metric must simply be less than or equal to every other function's.

      • max: (optional) a number specifying the percentage threshold for "max." If omitted, the target's "max" metric must simply be less than or equal to every other function's.

      • min: (optional) a number specifying the percentage threshold for "min." If omitted, the target's "min" metric must simply be less than or equal to every other function's.

      • target: (required) the target function whose benchmark metrics must fall within all specified performance thresholds. This value is a number specifying an index in the functions array.
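
As a concrete illustration of the threshold formula, consider the following sketch (the metric values are hypothetical):

// Suppose a run produces these "avg" metrics, in nanoseconds:
const targetAvg = 800000;  // functions[0], the target
const otherAvg = 1500000;  // functions[1]

// The performance difference, per the formula above:
const diff = 1 - targetAvg / otherAvg; // ≈ 0.467

// With thresholds { avg: 0.5, target: 0 }, 0.467 is below 0.5, so the
// target misses the threshold and radargun exits with code 3.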

Example

const comparisonModule = require('comparison-module');
const fs = require('fs');

const myModule = require('../lib');

const params = ['foo', 'bar'];
const stream = fs.createWriteStream('results.txt');

bench(
  [
    {
      fn: myModule,
      label: 'My Module',
      params,
    },
    {
      fn: comparisonModule,
      label: 'Comparison Module',
      params,
    },
  ],
  {
    runs: 1000,
    stream,
    thresholds: {
      avg: 0.5,
      max: 0.5,
      min: 0.5,
      target: 0,
    },
  }
);

In this example, we are comparing the performance of "My Module" against that of comparison-module. If My Module's metrics are not at most half of comparison-module's (a performance difference of at least 0.5 on avg, min, and max), radargun exits with error code 3. Results are written to the file "results.txt."

CLI

The radargun command line interface takes a single parameter, which is a glob that specifies how to find bench files to run.

$ radargun benchmark/**/*.bench.js

Benchmarking Very Fast Functions

In situations where you need to benchmark very fast functions, you will get a more accurate picture of performance by wrapping the code you want to test in a function and executing it in a loop:

bench(
  [
    function () {
      for (let i = 0; i < 1000000; i++) {
        myVeryFastFunction();
      }
    }
  ],
  { runs: 100 }
);

CI Pipelines

It's possible to use radargun in a continuous integration pipeline, and fail a build in the event that a new version of your code sees an unacceptable drop in performance.

Example

const myCurrentModule = require('my-module');
const myUpdatedModule = require('../lib');

bench(
  [
    {
      fn: myUpdatedModule,
      label: 'New Hotness',
      params: ['foo', 'bar'],
    },
    {
      fn: myCurrentModule,
      label: 'Currently Published',
      params: ['foo', 'bar'],
    },
  ],
  {
    runs: 1000,
    thresholds: {
      avg: -0.08,
      max: -0.08,
      min: -0.08,
      target: 0,
    },
  }
);

In this example, we are comparing the local implementation of my-module against its published version. The thresholds are set so that if the local version's performance drops by more than 8% on any metric, the radargun utility exits with an error.
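
One way to wire this into a pipeline is to expose the benchmark as an npm script (a sketch; the script name "bench" is arbitrary):

{
  "scripts": {
    "bench": "radargun benchmark/**/*.bench.js"
  }
}

A CI step that runs npm run bench will then fail the build whenever a .bench.js file misses its thresholds, because radargun exits with a non-zero code.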

Keep in mind that benchmarking is non-deterministic. Depending on conditions on the host, radargun could fail a build even though no real regression occurred. It is recommended that you experiment before settling on the thresholds to use in your CI pipeline.

Custom Reporter

A reporter is a function used to generate a benchmark report. It is called by the bench() method after it has finished gathering performance metrics. The radargun module ships with a built-in reporter, which serves as the default. It is possible to provide a custom reporter function as an option to bench(). This function is called with the following parameters (a minimal sketch appears after this list):

  • stream: a writable stream to which the report should be written.

  • report: an array of data containing the performance metrics gathered by the bench() method. Each entry is an object with the following keys:

    • avg: a number indicating the average execution time (in nanoseconds) of all runs for the function while benchmarking.

    • label: a string value indicating the label to use in the report. This is typically the name of the function.

    • max: a number indicating the highest execution time (in nanoseconds) encountered for the function while benchmarking.

    • min: a number indicating the lowest execution time (in nanoseconds) encountered for the function while benchmarking.

  • thresholds: if threshold configuration was provided as options to the bench() method, this value is passed to the reporter. Otherwise, this value is null.
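
For example, a minimal reporter that writes one CSV row per function might look like the following sketch (csvReporter is a hypothetical name; the parameters are assumed to arrive in the order listed above):

function csvReporter(stream, report, thresholds) {
  // Write a header row followed by one row per benchmarked function.
  stream.write('label,avg,min,max\n');
  for (const entry of report) {
    stream.write(`${entry.label},${entry.avg},${entry.min},${entry.max}\n`);
  }
}

bench(
  [
    function myFunction() {
      // code to benchmark
    }
  ],
  { reporter: csvReporter, runs: 100 }
);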