Perf impact

Integrates performance impact into your pull requests on GitHub.

Presentation

This package is under construction.

This tool makes it possible to estimate the performance impact of a change before it reaches your users, right from GitHub pull requests.

However, the metrics cannot be trusted blindly due to performance variability. For this reason, this tool is suited to catching big performance impacts, not small ones. See How to catch small performance impacts?

Performance variability

Performance metrics change due to inherent variability, even when there has been no code change. This can be mitigated by measuring performance multiple times, but you should always keep this variability in mind before drawing conclusions about a performance-impacting change.
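
For example, a measurement script can run the same task several times and keep the median, which is less sensitive to outliers than a single run. Here is a minimal sketch using Node's perf_hooks; measureOnce, measureMedian and the task being measured are illustrative names, not part of this package:

```js
import { performance } from "node:perf_hooks";

// Run the task once and return its duration in milliseconds.
const measureOnce = async (task) => {
  const start = performance.now();
  await task();
  return performance.now() - start;
};

// Run the task several times and keep the median duration,
// which is less sensitive to outliers than a single run.
const measureMedian = async (task, runCount = 10) => {
  const durations = [];
  for (let i = 0; i < runCount; i++) {
    durations.push(await measureOnce(task));
  }
  durations.sort((a, b) => a - b);
  return durations[Math.floor(durations.length / 2)];
};

// Usage: const durationMs = await measureMedian(() => import("./main.js"), 20);
```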

Over time you will learn how your performance metrics usually vary. This will make you able to recognize unusual variations.

How to catch small performance impacts?

Catching small to very small performance impacts with confidence requires a LOT of repetition and time. Both strategies below mean you will have to wait before knowing the real performance impact.

How to catch small impacts with a lot of repetition?

  • Let your code be used many times in many scenarios and observe the results. The usage can come from scripts, real users, or both.

  • Push your performance metrics to a tool like Kibana or DataDog and watch the trend of your performance metrics (see the sketch after this list).
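
For the second strategy, uploading a metric usually boils down to an HTTP request against the ingestion API of your dashboard. A minimal sketch, assuming a hypothetical DASHBOARD_URL and payload shape; adapt both to the actual API of the tool you use:

```js
// DASHBOARD_URL and the payload shape are hypothetical: adapt them
// to the ingestion API of the tool you actually use.
const DASHBOARD_URL = "https://metrics.example.com/ingest";

const pushMetric = async (name, value) => {
  const response = await fetch(DASHBOARD_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      metric: name,
      value,
      timestamp: Date.now(),
      // GITHUB_SHA is set by GitHub Actions; it lets the dashboard
      // correlate each measurement with a commit.
      commit: process.env.GITHUB_SHA,
    }),
  });
  if (!response.ok) {
    throw new Error(`Metric upload failed with status ${response.status}`);
  }
};

// Usage: await pushMetric("app.startup_duration_ms", 42.7);
```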

In the end I would recommend the following approach:

  1. Set up a script measuring performance metrics.
  2. Set up @jsenv/perf-impact to see the performance impact in GitHub pull requests.
  3. Set up a script uploading performance metrics to a dashboard.
  4. Inside a pull request, when the performance impact looks big: investigate (a sketch of this decision follows the list).
  5. If the performance impact looks small: ignore it.
  6. Finally, once per week/month, watch the dashboard to check the trend of your performance metrics: are they stable, increasing or decreasing?
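
Steps 4 and 5 are a judgment call against the variability you usually observe. Here is a minimal sketch of one possible heuristic; USUAL_VARIABILITY_RATIO and the 2x multiplier are arbitrary starting points, not part of this package:

```js
// USUAL_VARIABILITY_RATIO and the 2x multiplier are arbitrary
// starting points: tune them to the variability you observe.
const USUAL_VARIABILITY_RATIO = 0.05; // metrics usually vary by ~5%

const classifyImpact = (valueBefore, valueAfter) => {
  const changeRatio = Math.abs(valueAfter - valueBefore) / valueBefore;
  return changeRatio > USUAL_VARIABILITY_RATIO * 2
    ? "big: investigate"
    : "small: ignore, watch the dashboard trend instead";
};

// classifyImpact(100, 130) -> "big: investigate"
// classifyImpact(100, 103) -> "small: ignore, watch the dashboard trend instead"
```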