
TestArmada Manifest

Magellan sends information about system status and test runs as events. That works, but raw events are not the most convenient thing to report results against. Manifest is a helper class for Magellan that tracks suite runs and builds an easy-to-use, DOM-style structure with tests, environments, individual test runs, and so on.

Creating the SuiteRunResult object

All the information about a Magellan suite run is stored in a SuiteRunResult object. To create one:

const SuiteRunResult = require("testarmada-manifest");
const srr = new SuiteRunResult();

Connecting to Magellan

There are two entry-point methods that you need to connect up to Magellan. The first handles global messages:

Reporter.prototype.initialize = function (magellanGlobals) {
  const analytics = magellanGlobals.analytics;

  // Nothing asynchronous to do here, so resolve immediately.
  const deferred = Q.defer();
  deferred.resolve();

  // Feed the analytics messages that already exist into the SuiteRunResult.
  analytics.sync().forEach((message) => srr.globalMessage(message));

  // Forward subsequent global messages as they are emitted.
  analytics.getEmitter().addListener("message", (message) => {
    srr.globalMessage(message);
  });

  return deferred.promise;
};

This example is from a Magellan reporter.
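
In these reporter snippets, srr is the SuiteRunResult instance created earlier, and Q is assumed to be the q promise library (so that initialize can return a promise). A minimal module skeleton that ties the pieces together, as a sketch only:

const Q = require("q");  // assumed to be the promise library behind Q.defer()
const SuiteRunResult = require("testarmada-manifest");

// One SuiteRunResult instance, shared by the reporter hooks below.
const srr = new SuiteRunResult();

// Illustrative reporter shell; not part of this package.
function Reporter() {}

module.exports = Reporter;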

The second hookup is onto test messages:

Reporter.prototype.listenTo = function (testRun, test, source) {
  if (test && testRun) {
    // Messages tied to a specific test run go through testRunMessage.
    source.addListener("message", (message) => {
      srr.testRunMessage(testRun, test, message);
    });
  } else {
    // Anything else is treated as a global message.
    source.addListener("message", (message) => {
      srr.globalMessage(message);
    });
  }
};

Events from SuiteRunResult

There are some important high level events you should pay attention to.

start and end are sent when the suite run starts and ends. There are no arguments with these messages.

testRunStart and testRunEnd are sent when a test run starts and ends. There is one argument, the testRun object.

newTest is sent when a test is encountered that has not been seen before. There is only one argument, the test object.

Other events are emitted for the status of workers, for when Magellan is idle or busy, and for events from Magellan that the SuiteRunResult object doesn't understand. The last category is mostly useful for system diagnostics.
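
Assuming the SuiteRunResult instance exposes the usual Node EventEmitter interface (addListener/on), subscribing to these events might look roughly like this:

// Sketch only: assumes srr is an EventEmitter; event names are as documented above.
srr.addListener("start", () => console.log("suite run started"));
srr.addListener("end", () => console.log("suite run finished"));
srr.addListener("testRunStart", (testRun) => console.log("run started, attempt", testRun.attemptNumber));
srr.addListener("testRunEnd", (testRun) => console.log("run ended with status", testRun.status));
srr.addListener("newTest", (test) => console.log("new test seen:", test.name));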

Structure of the SuiteRunResult

The SuiteRunResult is a hierarchy:

SuiteRunResult -> Tests -> Environments -> TestRuns

The suite run contains a set of tests; each test is run against different environments, and each environment can be run several times, with a test run for each attempt.
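
A sketch of walking that hierarchy, using the property names documented in the API sections below (tests on the suite run, name and environments on each test, attempts on each environment):

// Sketch: iterate every test, environment, and recorded attempt.
srr.tests.forEach((test) => {
  Object.keys(test.environments).forEach((envId) => {
    const environment = test.environments[envId];
    environment.attempts.forEach((testRun) => {
      console.log(test.name, envId, testRun.status, testRun.passed);
    });
  });
});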

SuiteRunResult API

Once you have the events hooked up you'll want to inspect the result of the suite run.

| Property | Description |
| --- | --- |
| tests | The array of test objects |
| passed | True if all the tests passed |
| failed | True if a single test failed |
| retried | The total number of retries |
| testRuns | An array of all the test runs |
| timePassing | The total time spent on passing tests |
| timeFailing | The total time spent on failing tests |
| timeRetrying | The total time spent retrying tests |
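
For instance, once the end event described above has fired, these top-level results can be read straight off the object (a sketch using the properties in the table):

srr.addListener("end", () => {
  console.log("all passed:", srr.passed);
  console.log("any failures:", srr.failed);
  console.log("total retries:", srr.retried);
  console.log("time passing / failing / retrying:",
    srr.timePassing, srr.timeFailing, srr.timeRetrying);
});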

There are also high level report properties that you can inspect.

| Property | Description |
| --- | --- |
| testsByNameReport | High level metrics by test name |
| testsByEnvironment | An object that contains arrays of test runs organized by environment |
| testsByEnvironmentAndTest | An object that returns all of the test runs organized by environment first, then by test name |
| testsByEnvironmentReport | High level metrics organized by environment and test name |
| reports | Returns a Report object for this suite run (see below) |

Here are some deeper properties you may or may not need.

| Property | Description |
| --- | --- |
| satisfiedWorkers | An array of all the satisfied workers |
| satisfiedWorkersTime | Total time spent on satisfied workers |
| unsatisfiedWorkers | An array of all the unsatisfied workers |
| unsatisfiedWorkersTime | Total time spent on unsatisfied workers |

There are methods associated with tagging tests:

| Method | Description |
| --- | --- |
| setTags(test, tags) | Sets the array of tags for a test with the given name |
| testsByTag(tag) | Returns the array of tests for a given tag |
| passedByTag(tag) | The total number of tests with that tag that passed |
| failedByTag(tag) | The total number of tests with that tag that failed |
| timeElapsedByTag(tag) | The total amount of time spent on tests with that tag |
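
A sketch of using these together, assuming the first argument to setTags is the test name as the description suggests; the test and tag names here are made up for illustration:

// Tag two hypothetical tests, then query results by tag.
srr.setTags("login test", ["smoke"]);
srr.setTags("checkout test", ["smoke"]);

console.log("tests tagged smoke:", srr.testsByTag("smoke").length);
console.log("smoke passed:", srr.passedByTag("smoke"));
console.log("smoke failed:", srr.failedByTag("smoke"));
console.log("time on smoke tests:", srr.timeElapsedByTag("smoke"));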

Test API

Each test has these properties:

| Property | Description |
| --- | --- |
| name | The name of the test |
| environments | An object with a key for each environment pointing to an Environment object |
| testRuns | An array of all of the test runs across all the environments |
| passed | True if this test passed across all the environments |
| retried | The total number of retries across the environments |
| timeRetrying | The time spent retrying |
| timeElapsed | The total time elapsed across all the environments |
| timeForPassed | The total time for all the passed tests, but only for the passing test run |

Environment API

Each environment object has these properties:

| Property | Description |
| --- | --- |
| id | The environment ID |
| test | The associated test object |
| attempts | The test runs |
| retries | The retry test runs |
| passingAttempt | The passing attempt, if there was one |
| passed | True if this test passed on this environment |
| timeRetrying | The time spent retrying |
| timeElapsed | The total time elapsed across all the test runs |
| timeForPassed | The total time for all the passed tests, but only for the passing test run |

TestRun API

| Property | Description |
| --- | --- |
| test | The associated test object |
| environment | The environment object |
| status | The status |
| passed | True if this test run passed |
| metadata | Metadata for the test run |
| attemptNumber | The attempt number |

Reports API

| Property | Description |
| --- | --- |
| timeElapsed.top | The top 10 tests sorted by least time elapsed |
| timeElapsed.bottom | The bottom 10 tests sorted by least time elapsed |
| timeElapsed.complete | All the tests sorted by least time elapsed |
| timeElapsed.average | The average time elapsed |
| timeElapsed.stddev | The standard deviation of the time elapsed |
| timeForPassed.top | The top 10 tests sorted by least time for the passing test run |
| timeForPassed.bottom | The bottom 10 tests sorted by least time for the passing test run |
| timeForPassed.complete | All the tests sorted by least time for the passing test run |
| timeForPassed.average | The average time for the passing test run |
| timeForPassed.stddev | The standard deviation of the time for the passing test run |
| byEnvironment[env].min | The minimum time for the passing test run on the given environment |
| byEnvironment[env].max | The maximum time for the passing test run on the given environment |
| byEnvironment[env].average | The average time for the passing test run on the given environment |
| byEnvironment[env].stddev | The standard deviation of the time for the passing test run on the given environment |
| byEnvironment[env].avgRetries | The average number of retries for the passing test run on the given environment |
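
A sketch of reading one of these reports; the "chrome" key is a hypothetical environment id, and the actual keys depend on the environments in your suite run:

const report = srr.reports;

console.log("average time elapsed:", report.timeElapsed.average);
console.log("time elapsed stddev:", report.timeElapsed.stddev);

// Per-environment numbers for a hypothetical "chrome" environment.
const chrome = report.byEnvironment["chrome"];
if (chrome) {
  console.log("passing run min/max:", chrome.min, chrome.max);
  console.log("average retries:", chrome.avgRetries);
}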

License

All code not otherwise specified is Copyright Wal-Mart Stores, Inc. and released under the MIT License.