# PlutoKpi
Simple KPI fetching tool built with Node.js and InfluxDB.
## Getting started
Clone the project and execute `yarn` to install all the dependencies. After that, duplicate `pluto.sample.yml`, rename it to `pluto.yml`, and add all the configuration needed for it to work (you can find below how to configure your dev environment). Then you can start working on it. To test your changes you can do the following:
```
yarn debug <namespace> <measurement>
```

What does this mean?

- `debug` is a handy command that executes `yarn build` and `yarn start`.
- `namespace` is the name of the app the worker will collect info from, and the key under which the config is looked up in `pluto.yml`. E.g. `companies`, `yoda`...
- `measurement` is the metric unit you want to collect data for; it determines which worker is executed. E.g. `gems`, `packages`...
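The argument handling described above can be sketched roughly like this (a minimal illustration with hypothetical function names, not the project's actual `index.js`):

```javascript
// Hypothetical sketch of how the CLI entry point could parse its arguments.
// The real index.js may differ; ONLY_OUTPUT is read from the environment.
function parseCliArgs(argv, env) {
  const [namespace, measurement] = argv; // e.g. ["companies", "packages"]
  if (!namespace || !measurement) {
    throw new Error("Usage: yarn debug <namespace> <measurement>");
  }
  return {
    namespace,                           // app section to look up in pluto.yml
    measurement,                         // worker to execute, e.g. "gems"
    onlyOutput: env.ONLY_OUTPUT === "1"  // skip writing to InfluxDB
  };
}

module.exports = { parseCliArgs };
```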
⚠️ If you don't have InfluxDB, or you don't want to write to it, you can add `ONLY_OUTPUT=1` as the first parameter of your command. For example:
```
ONLY_OUTPUT=1 yarn debug companies packages
```

### Example of pluto.yml
```yaml
meta:
  database:
    database: "pluto"
    host: "localhost"
companies:
  packages:
    schedule: "* * * * *"
    image: "docker.dc.xing.com/companies/companies"
    folder: "client"
  gems:
    schedule: "* * * * *"
    image: "docker.dc.xing.com/companies/companies"
yoda:
  lighthouse:
    schedule: "10 1 2 * *"
    website: "https://www.xing.com/url-to-test/whatever"
    login:
      url: "https://www.xing.com/login"
      username_field_selector: "#login_form_username"
      password_field_selector: "#login_form_password"
      login_button_selector: "#login_form_submit"
      username: "test@test.com"
      password: "somethingToBeSafe🔒"
```

## Deploying to pluto
- Make sure you have permission to access `pluto@ams1-redcomp01.xing.hh`
- `ssh pluto@ams1-redcomp01.xing.hh`
- `cd /var/pluto`
- `git pull` (or edit `pluto.yml`)
- `service restart pluto`
Note: Currently pluto writes to the influx instance at http://ams1-redcomp01.xing.hh:8086
## Architecture Design
This is the structure of the project:

```
├── src
│   ├── index.js
│   ├── storage
│   │   └── storage.js
│   └── workers
│       └── workers.js
```

The entry point of the application is the `index.js` file. This file works as a CLI accepting the arguments `<worker>` and `<application>`.
The `worker` argument represents one of the multiple sources from which we fetch data.
The `application` argument represents the application to fetch the data for.
The `storage` directory holds the logic for the InfluxDB database ([documentation](https://docs.influxdata.com/chronograf/v1.4/introduction/getting-started/)).
The `workers` directory holds all the logic for extracting data from the different sources; each worker file exports one function that accepts `storage` and `application`.
The storage (in this case InfluxDB) expects to receive a key, e.g. `companies_response_codes`, and an object, e.g. `{ fields: { YOUR_DATA_HERE } }`:
```js
storage.write(`jira_yoda`, {
  fields: { total_bugs: parseInt(json.total) }
});
```

- As a side note, remember that Grafana works better with `integer` or `float` values.
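Because of that, a worker can defensively coerce its field values to numbers before writing. A hypothetical helper (not part of the codebase) might look like this:

```javascript
// Hypothetical helper: coerce every field to a number before handing it
// to storage.write, since Grafana charts integers/floats best.
function numericFields(fields) {
  const out = {};
  for (const [key, value] of Object.entries(fields)) {
    const n = typeof value === "number" ? value : parseFloat(value);
    if (!Number.isNaN(n)) out[key] = n; // drop anything non-numeric
  }
  return out;
}

// Usage (illustrative):
// storage.write("jira_yoda", { fields: numericFields({ total_bugs: json.total }) });
```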
## Adding a new worker
Create a new folder inside the workers folder with a name representative of the source you are extracting data from.
Add the worker to the `workersAvailable` constant in `workers.js`.
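As a sketch of the shape a worker takes (the names, the written key, and the computed value here are illustrative; check an existing worker for the exact contract):

```javascript
// Illustrative worker module: exports one function that accepts the
// storage and the application (namespace) it should collect data for.
function exampleWorker(storage, application) {
  const outdatedCount = 3; // in a real worker, computed from the source
  storage.write(`${application}_example`, {
    fields: { outdated: outdatedCount }
  });
}

// workers.js keeps a map of the available workers; a new worker is
// registered by adding it to this constant.
const workersAvailable = {
  example: exampleWorker
};

module.exports = { exampleWorker, workersAvailable };
```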
## Environment config
Install InfluxDB, Node.js, Grafana, and yarn:
```
$ brew install grafana
$ brew install influxdb
```

Remember to start both services in order to store and visualize data. Grafana usually starts on `localhost:3000`.
- Copy `.env.sample` to `.env`
- Change the defaults if needed
- To build the code run: `yarn dev`
The application does not create a database, so if the configured influx database does not exist you have to create it first using the influx client:
```
$ influx
> create database <your database in the .env file>
```

With all of this you can start storing data in your InfluxDB using the command `yarn start <worker> <application>`.
## Installing the package globally
Installing the application globally means that the bin generated by the package will be available in your `$PATH`:
```
$ yarn build
$ npm install -g
```

Then we can run our workers by executing:

```
pluto <worker> <application>
```

## List of workers
### CI
#### buildTime

Scrapes data from the Jenkins UI and populates InfluxDB with the build times of each successful build on the page of the supplied URL.

config example:

```yaml
ci.buildTime:
  schedule: "* * * * *"
  builds:
    - name: "pullRequest"
      url: "https://ci.dc.xing.com/view/xtm/job/xtm-pullrequest/api/json?tree=allBuilds[result,duration]"
    - name: "dockerImage"
      url: "https://ci.dc.xing.com/view/xtm/job/xtm-pullrequest-generate-docker-image/api/json?tree=allBuilds[result,duration]"
  auth:
    username: "foo"
    password: "secret"
```

data point example:

```
table: ci.buildTime
key: pullRequest (Build times of pull request jobs)
key: dockerImage (Build times of docker image builder jobs)
```
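The Jenkins JSON API URLs above return an `allBuilds` array with `result` and `duration` fields; extracting the durations of successful builds could look like this (a sketch, not the worker's actual code):

```javascript
// Sketch: given the JSON returned by Jenkins'
// .../api/json?tree=allBuilds[result,duration] endpoint, keep only the
// durations of successful builds.
function successfulBuildDurations(jenkinsJson) {
  return (jenkinsJson.allBuilds || [])
    .filter((build) => build.result === "SUCCESS")
    .map((build) => build.duration); // Jenkins reports duration in milliseconds
}
```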
### Gems
Fetches the supplied docker image and runs a command inside a container of that image to determine the number of outdated gems.
config example:

```yaml
gems:
  schedule: "* * * * *"
  image: "docker.dc.xing.com/xtm/xtm:latest"
```

data point example:

```
table: gems
key: default (Number of outdated gems)
```

### Packages
Fetches the supplied docker image and runs a command inside a container of that image to determine the number of outdated packages. You can supply a folder; if you don't, the root is assumed.
config example:

```yaml
packages:
  schedule: "* * * * *"
  image: "docker.dc.xing.com/xtm/xtm:latest"
  folder: "foo"
```

data point example:

```
table: packages
key: default (Number of outdated packages)
```

### Code climate
Uses the Code Climate REST API to determine metrics like:
- Maintainability cost (tech debt)
- Code coverage
- Number of security issues
config example:

```yaml
code_climate:
  schedule: "* * * * *"
  url: "https://codeclimate.dc.xing.com/api/v1"
  api_token: "secret"
  app_id: "secret"
```

data point example:

```
table: code_climate
key: techDebt (Calculated by code climate based on number/nature of issues and time cost to fix them)
key: testCoverage (Global percentage code coverage as determined by code climate)
key: securityIssues (Number of issues in security category)
```

### Logjam
#### apdex

Scrapes data off of the Logjam UI to determine the overall apdex for a project.

config example:

```yaml
logjam.apdex:
  schedule: "* * * * *"
```

data point example:

```
table: logjam.apdex
key: apdex (global apdex for project)
```

#### errors

Scrapes data off of the Logjam UI to retrieve fatals, errors, and warnings for a project.

config example:

```yaml
logjam.errors:
  schedule: "* * * * *"
```

data point example:

```
table: logjam.errors
key: fatals (fatal errors for project)
key: errors (errors for project)
key: warnings (warnings for project)
```

#### requests

Scrapes data off of the Logjam UI to retrieve the number of requests by response code (e.g. 2xx, 3xx, 4xx, 5xx) for a project.

config example:

```yaml
logjam.requests:
  schedule: "* * * * *"
  include_last_read: true
```

data point example:

```
table: logjam.requests
key: 2xx (Number of requests with a 2xx response)
key: 3xx (Number of requests with a 3xx response)
key: 4xx (Number of requests with a 4xx response)
key: 5xx (Number of requests with a 5xx response)
```

#### slowControllers

Scrapes data off of the Logjam UI to retrieve the count of slow controllers for a project (e.g. those with an apdex of less than 0.7).

config example:

```yaml
logjam.slowControllers:
  schedule: "* * * * *"
```

data point example:

```
table: logjam.slowControllers
key: slowControllers
```

#### slowestControllers

Scrapes data off of the Logjam UI to retrieve the list of slowest controllers for a project (e.g. those with an apdex of less than 0.7).

config example:

```yaml
logjam.slowestControllers:
  schedule: "* * * * *"
```

data point example:

```
table: logjam.slowestControllers
key: apdex
tag: controllerName
```

#### apdexPerControllerAction

Scrapes data off of the Logjam UI to retrieve the value of the apdex for a particular controller action of a project.

config example:

```yaml
logjam.apdexPerControllerAction:
  schedule: "* * * * *"
  actions:
    - name: "Xtm::Search::IdentitiesController#index"
```

data point example:

```
table: logjam.apdexPerControllerAction
key: apdex
```
### Jira query
Uses the JIRA REST API to perform queries retrieving info about bugs and tasks. It is flexible enough to allow any kind of query.
config example:

```yaml
jira_query:
  schedule: "* * * * *"
  include_last_read: true
  url: "https://jira.xing.hh/rest/api/latest"
  queries:
    - name: "all_bugs"
      query: "issuetype = Bug"
```

data point example:

```
table: jira_query
key: all_bugs (value of name in list of queries)
```

### Pingdom
Worker that checks the status of a given application via Pingdom.
config example:

```yaml
pingdom_is_alive:
  schedule: "1 * * * *"
  url: https://api.pingdom.com/api/2.1/checks
  check_id: 2373562
```

The following sensitive data is also required in `.env`:

```
PINGDOM_API_KEY=<api key from pingdom>
PINGDOM_ACCOUNT_EMAIL=office-management.bcn@xing.com
PINGDOM_USERNAME=<username from pingdom>
PINGDOM_PASSWORD=<password>
```

data point example:

```
table: jobs_pingdom_is_alive
key: alive
```

## Debugging with VSCode
To debug Node projects, you can use VSCode's debugging tool. It is quite powerful but tricky to configure. You can use this snippet as your launch configuration:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug node",
      "program": "${workspaceFolder}/build/main.js",
      "args": ["companies", "gems", "ONLY_OUTPUT=1", "VS_DEBUG"]
    }
  ]
}
```

The only thing you need to tweak is `args`, which are essentially the same as those you use when executing `yarn debug`. The `VS_DEBUG` flag is just for debugging purposes, such as forcing a branch of an if/else statement.
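Checking for such a flag inside the code can be as simple as the following (illustrative, not the project's actual code):

```javascript
// Illustrative: detect the VS_DEBUG flag among the CLI arguments so a
// code path can be forced while stepping through in VSCode.
function isVsDebug(argv) {
  return argv.includes("VS_DEBUG");
}

// e.g. in a worker:
// if (isVsDebug(process.argv)) { /* force the branch under inspection */ }
```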
## Roadmap
- Testing
- Task for setting up the DB