
netrix

Lightweight, pluggable metrics aggregator, loosely modelled on statsd.

NOTE: Metrics are sent as UDP payloads, so metrics can be lost under heavy load.

const netrix = require('netrix');

netrix.createServer(options)

  • options \<Object> Optional.
    • port \<number> UDP port to listen on. Default 49494.
    • flushInterval \<number> Interval in ms at which metrics are aggregated and reported. Default 1000.
  • Returns \<Promise> Resolves with a running instance of NetrixServer.
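A minimal sketch, assuming createServer() resolves with the running server as described above:

const netrix = require('netrix');

netrix.createServer({ port: 49494, flushInterval: 1000 })
  .then((server) => {
    // server is a running NetrixServer instance
    console.log('netrix server listening on UDP port 49494');
  })
  .catch((err) => console.error('failed to start', err));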

class NetrixServer

const {NetrixServer} = require('netrix');

This class implements the metrics accumulator and aggregation server. It runs the UDP/datagram server to which all NetrixClient instances send their metrics. See bin/eg-server.

new NetrixServer(options)

  • options \<Object> Same as netrix.createServer(options).
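A sketch of constructing the server directly rather than via createServer(), assuming it must then be started explicitly with server.start() as documented below:

const { NetrixServer } = require('netrix');

const server = new NetrixServer({ port: 49494, flushInterval: 1000 });
server.start(); // see server.start() below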

NetrixServer is an EventEmitter with the following events.

Event: 'error'

  • error \<Error> The error that occurred.

Emitted when an error occurs on the datagram server.
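For example, logging server errors (using the server instance constructed above):

server.on('error', (err) => {
  console.error('datagram server error:', err);
});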

Event: 'metric'

  • type \<string> Metric type: 'c' for counter or 'g' for gauge.
  • name \<string> Metric name.
  • value \<number> Metric value.

Emitted for every metric arriving from netrix.Client instances.
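A sketch of a listener, assuming the arguments arrive in the order listed above (type, name, value):

server.on('metric', (type, name, value) => {
  console.log('metric ' + name + ' (' + type + ') = ' + value);
});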

Event: 'flush'

  • timestamp \<number> Timestamp of the flush.
  • metrics \<Object> Aggregated metrics (see example below).
  • raw \<Object> Raw metrics accumulated since the previous flush.

Emitted at flushInterval and contains aggregated results from all metrics accumulated since the previous flush.

Counter metrics are scaled up or down to a per-second rate even when the flushInterval is not 1000ms; for example, with a flushInterval of 500ms, 100 increments received in one window are reported as 200.

Counter metrics are reset to zero at each flush boundary so that counting starts afresh in the next window.

The gauge metric values remain unchanged until the client sends an update.

Example metrics object:

{
  counters: {
    'netrix.bytes.received': 5773742,  // builtin, bytes received since last flush
    'netrix.frames.received': 5674,    // frames received since last flush
    'netrix.metrics.received': 481618, // metrics received since last flush
    counter1: 481618 // user counter from client.increment('counter1');
  },
  gauges: {
    'netrix.flush.lag': 3 // builtin, flush timer lag ms
  }
}
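A sketch of a flush listener, assuming the arguments arrive in the order listed above (timestamp, aggregated metrics, raw metrics):

server.on('flush', (timestamp, metrics, raw) => {
  // Report the aggregated counters and gauges once per flushInterval.
  console.log(new Date(timestamp).toISOString());
  console.log('counters:', metrics.counters);
  console.log('gauges:', metrics.gauges);
});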

server.start()

  • Returns \<Promise>

Starts the server.

server.stop()

  • Returns \<Promise>

Stops the server.

server.reset()

Removes all counters and gauges. Bear in mind that under normal operation, once a counter or gauge is created it remains in place and is reported with each flush even if its value has not changed.

class NetrixClient

See bin/eg-client.

new NetrixClient(options)

  • options \<Object> Optional.
    • host \<string> Hostname of the server. Default 'localhost'.
    • port \<number> Server port. Default 49494.
    • flushInterval \<number> Interval in ms for which metrics are efficiently accumulated before being sent. Default 50.
    • maxDatagram \<number> Maximum datagram size in bytes; multiple metrics are packed into each datagram up to this limit. Default 1024.
  • Returns \<netrix.Client>

maxDatagram can theoretically be set as high as 65507 bytes, BUT large datagrams simply vanished in testing on an OSX laptop. If you decide to deviate from the default, do some in-situ benchmarking to determine your sweet spot.
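A minimal sketch of creating and starting a client with explicit defaults, assuming NetrixClient is exported in the same way as NetrixServer:

const { NetrixClient } = require('netrix');

const client = new NetrixClient({
  host: 'localhost',  // server hostname
  port: 49494,        // server UDP port
  flushInterval: 50,  // accumulate metrics for up to 50ms before sending
  maxDatagram: 1024   // pack at most 1024 bytes of metrics per datagram
});

client.start(); // see client.start() below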

client.start()

  • Returns \<Promise>

Starts the client.

client.stop()

  • Returns \<Promise>

Stops the client.

client.increment(metricName, value)

  • metricName \<string> For example 'service1.login.failures'.
  • value \<number> Optional. Defaults to incrementing the counter by 1.

Each call to increment() on the client results in the counter being incremented at the server. When the server reaches the flush boundary, the total is reported in the flush event and the counter is set back to zero at the server.
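For example ('service1.bytes.sent' is a hypothetical metric name):

client.increment('service1.login.failures');  // count by the default of 1
client.increment('service1.bytes.sent', 512); // count by an explicit amount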

client.gauge(metricName, value)

  • metricName \<string> For example 'host1.cpu.percent_idle'.
  • value \<number> The gauge value (in the example, the percentage idle).

Each call to gauge() on the client results in the corresponding gauge data being accumulated at the server. These accumulated values are averaged and reported in the flush event.
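For example, sampling the gauge named above (72.5 is an illustrative value):

client.gauge('host1.cpu.percent_idle', 72.5); // samples are averaged within each flush window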

client.metric(metricName, value, customTypeCode)

  • metricName \<string>
  • value \<number>
  • customTypeCode \<string> Any type code other than gauge ('g') or counter ('c').

This allows for sending metrics other than gauges and counters. The server does not aggregate these, but they do cause the metric event to fire and they can be found in the raw data of each flush event.
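A sketch with a hypothetical metric name and an arbitrary custom type code 't':

// Not aggregated by the server, but it fires the 'metric' event
// and appears in the raw data of each 'flush' event.
client.metric('service1.response.ms', 123, 't');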
