
Micro Test-Runner


A minimal JavaScript test runner.


Installation

npm install micro-test-runner

Include Micro Test-Runner in your project with:

import test from 'micro-test-runner';

// Or

const test = require('micro-test-runner');

Overview

(Overview flow diagram)
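
In place of the diagram, the typical call flow is sketched below (only test() and .expect() are required; the other steps are optional and described under Usage):

test(yourFunction)		// 1. Create a test-runner with your candidate function.
	.async()		// 2. (Optional) Mark the candidate as asynchronous.
	.context(YourClass)	// 3. (Optional) Bind a `this` context.
	.times(3)		// 4. (Optional) Run each test multiple times.
	.logging('Name')	// 5. (Optional) Log the outcome.
	.with([arg1, arg2])	// 6. Provide arguments (chainable for multiple tests).
	.expect([result]);	// 7. Run the test(s) and verify the results.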

Usage

Create a new test-runner with:

const testRunner = test(yourFunction);

If your function is asynchronous, chain the .async method:

testRunner.async();

If your function requires a specific context (this), chain the .context method:

class YourClass {
	public static c = 17;
	
	public static yourFunction (a, b) {
		return a + b + this.c;		// `this` is used by yourFunction.
	}
}

// ...

testRunner.context(YourClass);
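
For example, a minimal sketch combining .context with the methods described below (the argument and result values are illustrative, based on the YourClass definition above):

test(YourClass.yourFunction)
	.context(YourClass)	// Run `yourFunction` with `YourClass` as `this`.
	.with([1, 2])		// 1 + 2 + this.c (17)...
	.expect([20]);		// ...should equal 20.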

Specify the arguments to pass into your function:

testRunner.with([arg1, arg2, arg3, etc...]);

You can chain .with methods to run your function multiple times with different arguments:

testRunner.with([arg1, arg2])	// Test 1.
          .with([argA, argB])	// Test 2.
          .with([argX, argY]);	// Test 3.

Optionally, specify the number of times to run each test:

testRunner.times(5);	// Will run each of the sequential tests 5 times.

Finally, specify the results you expect your function to return from each test:

testRunner.expect([result1, result2, result3, etc...]);

If a function is passed as an expected result, it will be called with the value the candidate returned for that particular test (along with the run index and the run's duration, as shown in the Performance Logging example below). This function should return a boolean indicating whether the value was correct. For example:

testRunner.expect([result1, result2, (value) => typeof value === 'number']);

Results

Calling .expect will run the test(s) and return true if your function passes every test, or false if it does not. If your function is asynchronous, you will need to await this value or use .then().

const outcome = testRunner.expect([result1, result2]);

Alternatively, if you'd like Micro Test-Runner to log the results for you, you can chain the .logging() method.

import test, { FailureLogSeverity } from 'micro-test-runner';

test(yourFunction)							  // Test `yourFunction`...
	.times(3)							  // 3 times...
	.logging('Function Name', FailureLogSeverity.WARN, ['✅', '❌']) // Logging the outcome...
	.with(['Hello', 'world!'])					  // With these arguments...
	.expect(['Hello world!']);					  // And expect these results.

This method takes up to 4 arguments:

  • The name of the test.
  • (Optional) The severity used to log the test's failure. There are 3 options for this argument:
    • LOG - Logs test results to the console.
    • WARN - Same as LOG, but failures will appear as warnings.
    • ERROR - Same as LOG, but failures will throw an error.
  • (Optional) Icons used to visually indicate the outcome of the test.
  • (Optional) Log the performance of each test run in the desired format:
    • true - Equivalent to 'average'.
    • 'average' - The average duration of all runs.
    • 'table' - A table showing the performance of each individual run.

The logging() method removes the need to handle the value returned from .expect().
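
For instance, a sketch that logs only the average duration across runs (passing undefined for the optional severity and icon arguments, on the assumption that this falls back to their defaults):

test(yourFunction)
	.times(10)							// Run each test 10 times...
	.logging('Function Name', undefined, undefined, 'average')	// Logging the average run duration...
	.with(['Hello', 'world!'])					// With these arguments...
	.expect(['Hello world!']);					// And expect this result.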

Examples

Basic:

import test from 'micro-test-runner';
import { yourFunction } from './yourProject';

const result = test(yourFunction)	// Test `yourFunction`...
	.times(3)			// 3 times...
	.with(['Hello', 'world!'])	// With these arguments...
	.expect(['Hello world!']);	// And expect these results.

if (result) {
	// Your test passed.
} else {
	// Your test failed.
}

Logging:

import test from 'micro-test-runner';
import { yourFunction } from './yourProject';

test(yourFunction)			// Test `yourFunction`...
	.times(3)			// 3 times...
	.logging('Function Name')	// Logging the outcome...
	.with(['Hello', 'world!'])	// With these arguments...
	.expect(['Hello world!']);	// And expect these results.

Async:

import test from 'micro-test-runner';
import { apiCall } from './yourProject';

const result = await test(apiCall)			// Test your `apiCall` function...
	.async()					// Asynchronously...
	.times(3)					// 3 times...
	.with(['https://example.com/api', '/endpoint'])	// With these arguments...
	.expect([{ data: 'Hello world!' }]);		// And expect these results.

if (result) {
	// Your test passed.
} else {
	// Your test failed.
}

Promise:

import test from 'micro-test-runner';
import { apiCall } from './yourProject';

test(apiCall)						// Test your `apiCall` function...
	.async()					// Asynchronously...
	.times(3)					// 3 times...
	.with(['https://example.com/api', '/endpoint'])	// With these arguments...
	.expect([{ data: 'Hello world!' }])		// And expect these results.
	.then(result => {
		if (result) {
			// Your test passed.
		} else {
			// Your test failed.
		}
	});

Performance Logging:

import test, { FailureLogSeverity } from 'micro-test-runner';
import { slowFunction } from './yourProject';

test(slowFunction)							// Test `slowFunction`...
	.times(3)							// 3 times...
	.with([2, 3])							// With these arguments...
	.with([4, 1])							// And these arguments...
	.logging('Slow', FailureLogSeverity.LOG, undefined, 'table') 	// Logging the outcome and performance to a table in the console...
	.expect([(value, runIndex, duration) => { 			// And expect these results (verified with a function).
		return value === 5					// Check the value returned by `slowFunction`.
			&& duration < 200;				// Check that `slowFunction` took less than 200ms.
	}]);


/* Console output...

✓ Slow test passed in 1004.742ms (x̄ 160.779ms per run, over 6 runs):
  ╭──────┬───────┬───────────────╮
  │ Test │  Run  │ Duration (ms) │
  ├──────┼───────┼───────────────┤
  │ 1    │ 1     │       150.812 │
  │      │ 2     │       184.766 │
  │      │ 3     │       161.057 │
  ├──────┼───────┼───────────────┤
  │ 2    │ 1     │       162.936 │
  │      │ 2     │       159.213 │
  │      │ 3     │       145.887 │
  ╰──────┴───────┴───────────────╯

*/