v0.1.0 • MIT • Last release 4 years ago

# @ember-performance-monitoring/tracerbench-compare-action

Commit-over-Commit Performance Analysis Automation for Web Applications.

Samples and analysis are gathered using TracerBench compare.

## What is it?

Think "Lighthouse CI" but with statistical rigor and more meaningful data.

This library is general enough that it could be used to benchmark any web application with any CI setup via TracerBench; however, it comes finely tuned for benchmarking Ember applications and addons via a GitHub Action.

## Initial Setup

To use this, place markers with `performance.mark(<markerName>)` in your application at key points. You can then configure TracerBench to use a subset (or all) of these markers to create a segmented analysis.
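For instance, you might bracket a key phase of your application with User Timing marks like this (a minimal sketch; the function and marker names here are hypothetical, chosen to match the `markers` value used in the examples below):

```javascript
// Illustrative sketch: bracket a key phase of your application with
// User Timing marks, then list the ones you care about in the
// `markers` option. The names below are hypothetical.
function loadUsers() {
  performance.mark('start-users-model-load');
  // ... fetch and process the users model here ...
  performance.mark('end-users-model-load');
}

loadUsers();
```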

Currently, in order for TracerBench to know when to stop analyzing your application, it needs the page to redirect to `about:blank` after the paint event following the last marker you care about. In an Ember application this typically means adding the following to whichever route is being benchmarked. This constraint may be removed in the future.

```js
import Route from '@ember/routing/route';

class MyRoute extends Route {
  afterModel() {
    // Only end the trace when the app was loaded with ?tracing in the URL
    if (document.location.href.indexOf('?tracing') !== -1) {
      endTrace();
    }
  }
}

function endTrace() {
  // just before paint
  requestAnimationFrame(() => {
    // after paint
    requestAnimationFrame(() => {
      document.location.href = 'about:blank';
    });
  });
}
```

## Usage as a GitHub Action

You can use this action by adding it to an existing workflow or creating a new workflow in your project.

For example, the workflow below adds a check for the `users` route to all pull requests targeting the `master` branch.

```yaml
name: PerformanceCheck

on:
  pull_request:
    branches:
      - master

jobs:
  analyze-users-route:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
        with:
          fetch-depth: 0
      - uses: ember-performance-monitoring/tracerbench-compare-action@master
        with:
          experiment-url: 'http://localhost:4201/users'
          control-url: 'http://localhost:4200/users'
          markers: 'end-users-model-load'
          regression-threshold: 25
          fidelity: high
```

## Local Usage and Usage with Other CI Systems

The GitHub Action this project provides is actually a small wrapper that pipes the configuration into the project's main entry point. This makes it easy to use in any setup (local or CI) by adding this action as a dependency.

```sh
yarn add @ember-performance-monitoring/tracerbench-compare-action
```

For example, you could mirror the above check in an Ember application by saving the configuration to `perf-test-config.json`:

```json
{
  "experiment-url": "http://localhost:4201/users",
  "control-url": "http://localhost:4200/users",
  "markers": "end-users-model-load",
  "regression-threshold": 25,
  "fidelity": "high"
}
```

and then loading and running it:

```js
const analyze = require('@ember-performance-monitoring/tracerbench-compare-action');
const config = require('./perf-test-config.json');

analyze(config);
```

## Configuration Options

| Option | Default | Description |
| --- | --- | --- |
| `build-control` | `true` | Whether to build assets for the control case |
| `build-experiment` | `true` | Whether to build assets for the experiment case |
| `control-dist` | `./dist-control` | The location of the control assets once a build has been performed (or, if `build-control` is `false`, the location where they already are) |
| `experiment-dist` | `./dist-experiment` | The location of the experiment assets once a build has been performed (or, if `build-experiment` is `false`, the location where they already are) |
| `control-sha` | `git rev-parse --short=8 origin/master` | SHA to be built for the control commit |
| `experiment-sha` | `git rev-parse --short=8 HEAD` | SHA to be built for the experiment commit |
| `experiment-ref` | current branch or tag | The reference being built for the experiment |
| `control-build-command` | `ember build -e production --output-path ${control-dist}` | Command to execute to build control assets if `build-control` is `true` |
| `experiment-build-command` | `ember build -e production --output-path ${experiment-dist}` | Command to execute to build experiment assets if `build-experiment` is `true` |
| `use-yarn` | `true` | Whether to use yarn for install when building control/experiment (npm is used otherwise) |
| `control-serve-command` | `ember s --path=${control-dist}` | Command to execute to serve the control assets |
| `experiment-serve-command` | `ember s --path=${experiment-dist}` | Command to execute to serve the experiment assets |
| `clean-after-analyze` | `true` if `experiment-ref` is present, `false` otherwise | Whether to try to restore the initial repository state after the benchmark completes. Useful for local runs. |
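For example, a local run against assets you have already built could skip both builds and point at the existing output directories. This is a sketch based on the defaults above; the paths are illustrative:

```json
{
  "build-control": false,
  "build-experiment": false,
  "control-dist": "./dist-control",
  "experiment-dist": "./dist-experiment",
  "clean-after-analyze": false
}
```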

## TracerBench Config

The following options are supplied directly to the `tracerbench compare` command, which is run once the assets for control and experiment are being served.

| Option | Default | Description |
| --- | --- | --- |
| `control-url` | `http://localhost:4200?tracing=true` | URL to benchmark at which the control assets are being served |
| `experiment-url` | `http://localhost:4201?tracing=true` | URL to benchmark at which the experiment assets are being served |
| `fidelity` | `low` | How many runs to perform. `high` (50) is recommended for CI |
| `markers` | `domComplete` | Comma-separated list of markers to consider in the report |
| `runtime-stats` | `false` | Whether to analyze Chrome runtime stats (expensive, and degrades the rigor of the rest of the test) |
| `report` | `true` | Whether to produce a report PDF |
| `headless` | `true` | Whether to run Chrome in headless mode |
| `regression-threshold` | `50` | Milliseconds of change at which to fail the test |
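To make the `regression-threshold` semantics concrete, here is a toy illustration. This is not TracerBench's actual analysis (TracerBench applies statistically rigorous comparisons across many samples); it only shows the idea of failing a check when the experiment is more than the threshold slower than the control:

```javascript
// Toy illustration only: TracerBench itself uses statistically rigorous
// comparisons, not a simple difference of medians.
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Returns true when the experiment regressed beyond the threshold (in ms).
function exceedsThreshold(controlMs, experimentMs, thresholdMs) {
  return median(experimentMs) - median(controlMs) > thresholdMs;
}
```

With a 25 ms threshold, `exceedsThreshold([100, 102, 101], [130, 131, 129], 25)` would be treated as a regression, while a ~9 ms slowdown would pass.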