
Bespoken DataDog Plugin

This plugin makes it easy to send your voice app's end-to-end test results to a DataDog instance for reporting and monitoring.

It leverages Bespoken's filters to report test results to DataDog.

Getting Started

Installation and Usage

To use the Bespoken DataDog Plugin, add it to your test project's dependencies:

npm install bespoken-datadog-plugin --save

Then reference it as the filter in your testing.json:

{
  "filter": "./node_modules/bespoken-datadog-plugin/index.js"
}
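For context, the filter entry usually sits alongside the rest of your end-to-end configuration. The sketch below is illustrative only; every key other than filter is an assumption about a typical Bespoken e2e setup and will vary by project:

{
  "type": "e2e",
  "virtualDeviceToken": "<YOUR_VIRTUAL_DEVICE_TOKEN>",
  "locales": "en-US",
  "filter": "./node_modules/bespoken-datadog-plugin/index.js"
}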

Alternatively, you can call it from an existing filter like this:

const DatadogPlugin = require('bespoken-datadog-plugin');

module.exports = {
  // Send the result of each individual test as it finishes
  onTestEnd: async (test, testResult) => {
    await DatadogPlugin.sendToDataDog(test, testResult);
  },
  // Send the aggregated results once the whole suite has run
  onTestSuiteEnd: async (testResults) => {
    await DatadogPlugin.sendSuiteResultsToDataDog(testResults);
  },
};

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| DATADOG_API_KEY | API key to access DataDog | |
| DATADOG_CUSTOMER | Customer tag for data points in DataDog | bespoken |
| DATADOG_JOB_NAME | JobName tag for data points in DataDog | EndToEndTests |
| DATADOG_RUN_NAME | RunName tag for data points in DataDog | Current date in ISO format |

The DATADOG_API_KEY must be set as an environment variable.

For local runs, we recommend using the dotenv package and setting it in a file named .env. We have provided a file, example.env, that you can use as a template (just copy it and rename it to .env).
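If your test runner does not load the .env file on its own, the filter itself can load it with dotenv. A minimal sketch, assuming dotenv is installed as a dev dependency:

// At the top of your filter (or any file loaded before the tests run):
// dotenv reads .env from the working directory and populates process.env
require('dotenv').config();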

It can also be set manually like so:

export DATADOG_API_KEY=<DATA_DOG_API_KEY>

(Use set instead of export if using Windows Command Prompt).

DataDog Configuration

  • Create a DataDog account.
  • Take the API key from the Integrations -> API section.
  • Add it to the .env file, as sketched below.
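Based on the variables above, the .env (or example.env) contents might look like this; the API key value is a placeholder and the other entries simply restate the documented defaults:

# Required: your DataDog API key (placeholder shown)
DATADOG_API_KEY=<DATA_DOG_API_KEY>
# Optional overrides; these match the defaults
DATADOG_CUSTOMER=bespoken
DATADOG_JOB_NAME=EndToEndTests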

DataDog Metrics

DataDog captures metrics on how the tests have performed. Each time the tests run, the plugin pushes the result of each test to DataDog.

We use the following metrics:

  • utterance.success
  • utterance.failure
  • test.success
  • test.failure
  • testsuite.success_percentage
  • testsuite.failure_percentage
  • testsuite.skipped_percentage

The metrics can easily be reported on through a DataDog dashboard. They can also be used to set up notifications when certain conditions are met.
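For example, a DataDog monitor could alert when the suite success rate drops below a threshold. The query below only illustrates DataDog's monitor query syntax; the tag value and threshold are assumptions:

avg(last_1h):avg:testsuite.success_percentage{jobName:EndToEndTests} < 90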

Read more about configuring DataDog in our walkthrough.

DataDog Tags

By default, we report results at the utterance, test, and test suite levels:

| Reporting Element | Tag | Description |
| --- | --- | --- |
| Utterance & Test & TestSuite | jobName | The name of the testing job for the current execution. It defaults to EndToEndTests and can be set with the DATADOG_JOB_NAME environment variable. |
| Utterance & Test & TestSuite | runName | The name of the test execution. By default, it is the timestamp when the tests were executed. |
| Utterance & Test & TestSuite | testSuiteName | The name of the test suite being executed. In general, it is the name of the YAML file containing the test cases. |
| Utterance & Test | testName | The test description, usually written after the test: keyword of the test script. |
| Utterance | customer | The name of the customer running the test scripts. It can be changed with the DATADOG_CUSTOMER environment variable. |
| Utterance | utterance | The interaction sent to the voice app. |
| Utterance | voiceId | The voice used for TTS when sending the text utterance to the voice service. Amazon Polly or Google WaveNet voices can be used. |
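These tags make it possible to slice the metrics in dashboards and monitors. As an illustration (the tag value is a placeholder), a dashboard query could group test failures by test suite like this:

sum:test.failure{customer:bespoken} by {testSuiteName}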