@anpingli/common v1.1.6
Henson Common Utilities Library
Table of Contents
- config - Configuration
- applog - Application Logging
- metrics - Service Metrics Reporting
- ES6 Promise Enhancement
- utils - Small Common Used Functions
- collection - Collection API
- tellog - TEL log API for sending TEL to 'whttp'
- EgressLog - EgressLog Api to save EgressLog to disk
- alarm - alarm Api for sending alarms
- tracer - trace api for call stack trace
- expressTrace - trace middleware for express
- coder - general text encode/decode module
- Exception - custom error object
- MemSessionStore - In-Memory store for express-session
- Cidr - utilities to validate/match IP (v4/v6) address/subnet
- CachingFileReader - A cached version of node.js fs module read-only API implementation
- Application Common
- configFilter - filter Api for common formatted configuration
- FileCache - An in-memory file cache and format conversion API implementation
- FileCacheManager - A manager to manage the FileCache API
- extendedJoi - An extended Joi instance to support more validation schemas
config
- Configuration
Read properties from configuration files. Supports the Henson standard configuration file ConfigData.json and any other JSON files.
It also reads from environment variables, and the environment takes priority.
Loaded files are monitored and will be reloaded when a change is detected.
A good practice for applications is to get values from this API every time they are needed, so that the latest value is always used.
File Path
In production mode, the config path is /opt/miep/etc/config/, otherwise it loads from config/ under the current working directory.
You can override the path by calling from() or by setting the environment variable CONFIG_FILE_PATH.
If a configuration file doesn't exist, the API behaves differently depending on the running mode:
- In production mode, an error is thrown when trying to read configuration from a non-existent file.
- In non-production mode, a missing configuration file is treated as if an empty file was loaded.
Environment Variable
The Config API reads environment variables first, and a property of the same name in a config file will be overridden.
For properties inside the standard configuration file ConfigData.json, the environment variable name is the property name.
For other configuration files, the environment variable should be named in the format [file]_[property], where file is the relative file path and name including the suffix.
For example: sam/Configuration.json_authentication.user.
ⓘ A shell doesn't allow setting an environment variable whose name contains the / or . characters. It is intended to be used with Docker.
Usage
Methods:
get(property: string, options?: Option): any
Get property from Henson standard configuration file or environment variable.
get(file: string, property: string, options?: Option): any
Get property from specific configuration file. File name may contain relative path, extension can be omitted (because it's always .json).
getAll(file: string): any
Get the whole property tree from specific configuration file.
register(file: string, onReloaded: () => void): void
Register a callback which will be called when the file is reloaded.
from(path?: string): void
Override the default path to load configuration files from. Setting a new path will clear all files loaded before. Reload callbacks will no longer work either. Calling it without a parameter will reset the path to the default. It's mainly used in test cases to switch configuration.
exists(file: string): boolean
Check if the configuration file exists in the configuration path.
Option:
default: any - The default value when the configuration item is undefined
undefinable: boolean - Behavior when the configuration item is undefined and has no default value
- true - Return undefined
- false (default) - Throw an error
Example:
import {config} from '@anpingli/common';
// read property from Henson standard config 'ConfigData.json'
const alias = config.get('dmsLdapHomePageAlias');
// read from specify config file 'SppServiceMapConfig.json',
// an object is returned
const sms = config.get('SppServiceMapConfig', 'SMS');
// read a property through the tree
const bearerHeader = config.get('InformationForwarding',
'trusted.parameters.Bearer-type.header');
// read property in array notation
const email = config.get('users["Bill Gate"].email');
// get undefined property will throw Error
// give default value if the property can be omitted
const p = config.get('someProperty', {default: 100});
See source code and test case for more detail.
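A brief sketch of the file-watching and test helpers documented above (the file name reuses 'SppServiceMapConfig' from the example; paths are illustrative):
import {config} from '@anpingli/common';
// only read an optional file when it exists
if (config.exists('SppServiceMapConfig')) {
  // be notified after the file is reloaded due to a change on disk
  config.register('SppServiceMapConfig', () => {
    const sms = config.get('SppServiceMapConfig', 'SMS'); // re-read to pick up new values
  });
}
// in test cases: switch to a test configuration directory, then reset to the default path
config.from('test/config');
config.from();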
applog
- Application Logging
The logging API internally uses the Jigsaw modified version of bunyan, a logging library that outputs in JSON format.
Output Destination
The philosophy of logging is to log directly to stdout and allow the deployment environment to decide where that should go. The application just handles its core business and keeps the flexibility to let auxiliary concerns be handled by others.
During development, simply pipe stdout to the bunyan CLI to render the log in the console.
$ node src/index.js | bunyan
In production, redirect stdout to a file. Log file rotation can be done by a common Linux tool, e.g. logrotate.
$ NODE_ENV=production node lib/index.js >> /var/log/miep/myapp.log 2>&1 &
Usage
Each module should use its name to create a logger.
⚠ init
is not necessary since v10.2.0
import { applog } from '@anpingli/common';
const log = applog.logger('backend'); // e.g. current file is 'backend.ts'
log.debug('Current directory: %s', process.cwd());
Check points are logged when a trace is hit or process.env.TRACER_DEBUG is true.
import {applog} from '@anpingli/common';
// each module should use its name to create the logger
const log = applog.logger('test');
log.check('step1'); // log msg: step1
// ...
log.check('step2'); // log msg: step1 -> step2
// ...
log.check('step3'); // log msg: step1 -> step2 -> step3
// ...
Read the bunyan documentation for details about the log methods.
Output Level
The output levels of loggers are configured in the file applogconfig.json.
Please check the config API documentation for the file path.
{
"myapp" : {
"level" : "debug",
"modules" : {
"index" : "warn",
"backend" : "info"
}
},
"anotherapp" : {
"level" : "info"
}
}
Modifications of the log config file take effect immediately.
If there is no configuration for the app, the default output level is TRACE (which means everything) in non-production mode; in production mode (NODE_ENV === 'production') the default level is INFO.
Suppress Logging Output
During unit tests, logging output mixed with the test report makes the report unreadable.
Using the bunyan CLI filtering feature can suppress most of the normal logging output.
$ gulp test | bunyan -l fatal
metrics
- Service Metrics Reporting
Collect metrics data and report in InfluxDB line protocol.
Metric Types
The following metric types are available:
Gauge
A gauge is a metric that represents a single numerical value that can arbitrarily go up and down. Gauges are typically used for measured values like current memory usage.
Methods:
from(readFn: () => number): void - Use a read function to retrieve the value on read
value(value: number): void - Set the current value. The value is ignored if a read function is given in from()
Fields:
value
Counter
A counter is a cumulative metric that represents a single numerical value that can increase or decrease. A counter is typically used to count requests served, tasks completed, errors occurred, etc.
A special use case of counter is to measure rate of events over time (per second). It should only be increased when it's used in this way.
Methods:
inc(n: number = 1): void
dec(n: number = 1): void
reset(n: number = undefined): void
Options:
autoreset - Whether to automatically reset the counter value to zero when the value is reported. Default is true.
Fields:
count - Counter value
count_s - Average rate per second in a report period. Exists only when autoreset is true.
Histogram
A histogram samples observations (usually things like request durations or response sizes). It calculates configurable quantiles (percentiles) and counts them in configurable bins. It also provides sum, max, min of all observed values.
To limit CPU and memory footprint, records are stored in a fixed-size reservoir. When the reservoir is full, it drops records randomly and only keeps samples. Percentile and bin fields are therefore not absolutely accurate.
Methods:
record(value: number): void
Options:
percentiles: number[] - Percentile fields. Default is [0.9, 0.95, 0.99], which outputs the 90%, 95% and 99% percentiles.
binEdges: number[] - Defines the bins to count records into value ranges. Bins need not be of equal width. The rightmost edge can be Infinity. Example: [0, 100, 1000, Infinity]. By default there is no bin setup.
Fields:
max - max record value
min - min record value
mean - average value of all records
sum - sum of all records
count - number of records
pNN - percentile at NN, e.g. p90, p95 and p99. Configured by percentiles[] in options.
bin_L_R - number of records in a bin, where L and R are the left (inclusive) and right (exclusive) edges of the bin, e.g. bin_0_100, bin_100_1000 and bin_1000_inf. Configured by binEdges[] in options.
Timing
Timing is a helper based on histogram to measure the distribution of durations (usually things like request durations).
Methods:
begin(): Stopwatch - Begins duration timing.
end(stopwatch: Stopwatch) - Usually call Stopwatch.end() rather than this method.
Stopwatch.end() - Ends duration timing and records the duration into the underlying histogram.
Options:
- Same as histogram.
Fields:
- All fields in histogram
- ongoing: number of stopwatches that have begun but not ended.
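A short sketch of recording values into a histogram and a timing (hedged: the collection factory methods histogram() and timing() are assumed to mirror the counter()/gauge() pattern shown in the Example below; check test/metrics/demo-metrics.ts for the exact names):
import {metrics} from '@anpingli/common';
const coll = metrics.newCollection();
// histogram with custom bins (assumed factory method name)
const sizes = coll.histogram('response_size', {}, { binEdges: [0, 100, 1000, Infinity] });
sizes.record(512);
// timing: measure a duration with a stopwatch (assumed factory method name)
const duration = coll.timing('request_duration');
const sw = duration.begin();
// ... do the work being measured ...
sw.end(); // records the elapsed time into the underlying histogram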
Reporter
Currently only InfluxDB is supported.
The InfluxDb reporter writes data in InfluxDB line protocol format. It can also write the points into a given logger, for debugging without a connection to an InfluxDB server.
Options:
host?: string - InfluxDB address. Omitting it disables sending data to the server.
port?: number - InfluxDB port, default is 8086.
db: string - Metrics database to report to. Each application should have its own database.
logPoints?: boolean - Write data into the log, default is false.
logger?: (data: string) => void - The logger function, used when logPoints is true.
defaultTags?: Tags - Default tags are added to all metric points.
Example
See JsDoc for API detail. Example can be found in test/metrics/demo-metrics.ts
.
⚠ Use only alphanumeric and underscore in metrics and tag naming.
import {metrics} from '@anpingli/common';
metrics.reportTo(new metrics.reporter.InfluxDb({
host: '127.0.0.1',
db: 'test',
defaultTags: { // default tags are added to all metric points
zone: 'cgc'
}
}));
// create measurements in collection
let metricsCollection = metrics.newCollection();
// how to specify the type of a metric
let counterC: metrics.Measurement<metrics.metric.Counter>;
// for some metric type, there can be options as third parameter
counterC = metricsCollection.counter('counter_C', {}, { autoreset: true });
metricsCollection.gauge('os_freemem').from(os.freemem);
metricsCollection.gauge('service_gauge').tag({ service: 'z' }).value(100);
metricsCollection.counter('incoming').tag({ bucket: 1 }).inc();
// shorthand style, if no tag to be added
metricsCollection.counter('incoming').dec();
// assign metric to a variable beforehand
counterC.inc();
// default tag `node` will be added to all points
let m = metricsCollection.counter('outgoing', { node: 'local' });
m.tag({ bucket: 1 }).inc();
m.tag({ bucket: 2 }).inc();
// release internal memory when the collection is no longer used
metricsCollection.release();
Be careful about the memory footprint when designing tags. A separate metric is created for each tag combination.
ES6 Promise Enhancement
This module enhances ES6 Promise
by adding methods to it. It's described in the
book ECMAScript 6 入门.
Finally
finally( () => void ): Promise
The callback passed to the finally method will be called whether the promise is resolved or rejected. The original value in the promise is passed through.
import '@anpingli/common';
server.listen()
.then( () => {
// do something
})
.finally( () => {
// always perform clean up
});
utils
- Small Common Used Functions
utils
includes some useful small functions:
uid
Generate a unique identifier by random.
function uid(len: number): string
uid(10); // => "FDaS435D2z"
The probability of duplicates in n
IDs of length L
is:
p(n) ≈ n²/(2*62^L)
An ID of length 21 has a lower probability of duplicates than a type 4 (random) UUID. Generally, length 10 is enough for millions of IDs.
hash
Return the hash of a string, in the range [0, n).
⚠ Set n
to a prime number to minimize collisions (achieve even distribution).
function hash(s: string, n: number): number
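For example (inputs are illustrative):
hash('session-abc123', 97); // => an integer in [0, 97); the same input always maps to the same value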
sleep
Pause the logic flow for a duration. It must be called inside an async function using await.
.
function sleep(millisec:number): Promise<void>
await sleep(3000);
// continue following task
urlSafeBase64Encode
Encode a string or a buffer to url-safe base64 string.
function urlSafeBase64Encode(value: string | Buffer, encoding?: string): string
urlSafeBase64Encode('123122333asdascx');
urlSafeBase64Encode(Buffer.from('12312312adasdv3'));
urlSafeBase64Decode
Decode a url-safe base64 string to a normal string.
function urlSafeBase64Decode(value: string, encoding?: string): string
urlSafeBase64Decode('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-/=');
random
Return a random integer in the range [min, max]. Both min and max are inclusive and must be integers.
It's a simplified version of lodash random.
If you're looking for a floating-point random number in a range, use the lodash one.
random(1, 3); // possible returned value: 1, 2, 3
orderedJsonStringify
Stringify a simple object to JSON with the keys sorted. The result can be used as a key in a Map.
orderedJsonStringify({
type: 1,
value: {
y: 2,
x: 1,
},
name: 'foo',
})
// In output JSON, key is sorted:
// {"name":"foo","type":1,"value":{"x":1,"y":2}}
getModuleName
Get the module name from package.json. ⚠ The application must be started with 'npm start', or the module name will be undefined.
const moduleName = getModuleName();
if (!moduleName) {
logger.error('can not get the module name');
}
convertCase
Convert an object to another case-style object recursively.
Usage
Methods
function convertCase<T, U>(input: T, convertStr: (name: string) => string): U
Generic convert function that requires a specific convert function
function toKebabCase<T, U>(input: T): U
Convert an object to a kebab-case object recursively
function toCamelCase<T, U>(input: T): U
Convert an object to a camel-case object recursively
function toSnakeCase<T, U>(input: T): U
Convert an object to a snake-case object recursively
import * as utils from '@anpingli/common';
import * as _ from 'lodash';
const snakeCase = [
{
a_bb_ccc: {
d_ee_ff: [1, 2, { r_ss_ttt: [4, 5, 6] }],
j_kk_lll: true,
},
},
{
x_yy_zzz: {
o_pp_qqq: undefined,
g_hh_iii: 'whats up',
},
},
{
// tslint:disable:no-null-keyword
m_nn_ooo: null,
},
null,
];
const camelCase = [
{
aBbCcc: {
dEeFf: [1, 2, { rSsTtt: [4, 5, 6] }],
jKkLll: true,
},
},
{
xYyZzz: {
oPpQqq: undefined,
gHhIii: 'whats up',
},
},
{
// tslint:disable:no-null-keyword
mNnOoo: null,
},
null,
];
const kebabCase = [
{
'a-bb-ccc': {
'd-ee-ff': [1, 2, { 'r-ss-ttt': [4, 5, 6] }],
'j-kk-lll': true,
},
},
{
'x-yy-zzz': {
'o-pp-qqq': undefined,
'g-hh-iii': 'whats up',
},
},
{
// tslint:disable:no-null-keyword
'm-nn-ooo': null,
},
null,
];
toCamelCase(snakeCase); // camelCase
toCamelCase(kebabCase); // camelCase
convertCase(snakeCase, _.camelCase); // camelCase
toKebabCase(snakeCase); // kebabCase
toKebabCase(camelCase); // kebabCase
toSnakeCase(camelCase); // snakeCase
toSnakeCase(kebabCase); // snakeCase
getTimeString
Convert a number to a timestamp string.
Usage
Methods
function getTimeString(time: number, options = { inputFormat: TimeFormat.Millisecond, outputFormat: TimeFormat.Millisecond }): string
- time: millisecond or second
- options.inputFormat: Millisecond(1572941171964) or Second(1572941171)
- options.outputFormat: Millisecond(2019-11-05T08:06:11.964Z) or Second(2019-11-05T08:06:11Z)
const timestamp = utils.getTimeString(1572941171, {
inputFormat: utils.TimeFormat.Second,
outputFormat: utils.TimeFormat.Second,
});
// 2019-11-05T08:06:11Z
const timestamp = utils.getTimeString(1572941171, {
inputFormat: utils.TimeFormat.Second,
outputFormat: utils.TimeFormat.Millisecond,
});
// 2019-11-05T08:06:11.000Z
const timestamp = utils.getTimeString(1572941171964, {
inputFormat: utils.TimeFormat.Millisecond,
outputFormat: utils.TimeFormat.Second,
});
// 2019-11-05T08:06:11Z
const timestamp = utils.getTimeString(1572941171964, {
inputFormat: utils.TimeFormat.Millisecond,
outputFormat: utils.TimeFormat.Millisecond,
});
// 2019-11-05T08:06:11.964Z
collection
- Collection API
ExpiringMap
A Map
that has capacity limitation and data expiration. Can be used as cache.
The API is compatible with ES6 Map
.
When a maximum capacity is specified and the map is already full, the oldest entry will be removed (whether expired or not) when adding a new entry.
Whether to return an expired value on get is configurable by the option getExpiredValue.
If you choose to keep returning expired values, expired entries won't be automatically cleared.
In that case, make sure you have either set a maximum capacity or call clearExpired periodically to limit memory usage.
Constructor
ExpiringMap<KEY, VALUE>(ttl: number, options: ExpiringMapOptions)
ttl is the entries' time to live in milliseconds.
options:
- maxCapacity: number - default is unlimited.
- getExpiredValue: boolean - Whether to return the expired value on get(). Default is false.
- updateTimestampOnGet: boolean - Whether to update the timestamp (prolong the expire time) on get(). Default is false.
Methods
size: number
set(key: KEY, value: VALUE): VALUE - Returns the value just being set
get(key: KEY): VALUE | undefined - Returns the value of a key
getOrSet(key: KEY, defaultValue: VALUE): VALUE - Sets the key to the default value if it's undefined
delete(key: KEY): boolean - Returns false when the key doesn't exist
clear(): void - Removes all entries.
clearExpired(time?: number): boolean - Removes all expired entries from the collection. If time isn't specified, it defaults to now. Returns true if any entry is deleted.
keys(): Iterable<KEY> - Expired entries are included.
values(): Iterable<VALUE> - Expired entries are included.
entries(): Iterable<[KEY, VALUE]> - Expired entries are included.
Event
evict
An entry is evicted because of expiration or the map size reaching the maximum limit.
map.on('evict', function(key: KEY, value: VALUE): void {
// item evicted
});
Example
import {collection} from '@anpingli/common';
const testMap = new collection.ExpiringMap<string, number>(1000);
testMap.set('one', 1);
testMap.set('two', 2);
testMap.set('three', 3);
console.log(testMap.size);
// delete key. return true if delete success.
if (testMap.delete('one')) {
console.log('delete success');
}
// iterator for map
for (let [key, value] of testMap) {
// ...
}
// iterator for key-value pairs.
for (let [key, value] of testMap.entries()) {
// ...
}
// iterator for keys.
for (let key of testMap.keys()) {
// ...
}
// iterator for values.
for (let value of testMap.values()) {
// ...
}
// build an array of keys.
[...testMap.keys()]
// ['one', 'two', 'three']
// build an array of values.
[...testMap.values()]
// [1, 2, 3]
// build an array of key-value pairs.
[...testMap.entries()]
// [['one', 1], ['two', 2], ['three', 3]]
// build an array of key-value pairs.
[...testMap]
// [['one', 1], ['two', 2], ['three', 3]]
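A brief sketch of the constructor options, getOrSet, clearExpired and the evict event documented above (values are illustrative):
import {collection} from '@anpingli/common';
// 1 minute TTL, at most 1000 entries, prolong expiry on get()
const cache = new collection.ExpiringMap<string, number>(60 * 1000, {
  maxCapacity: 1000,
  getExpiredValue: false,
  updateTimestampOnGet: true
});
cache.on('evict', (key: string, value: number) => {
  // called when an entry expires or the map reaches maxCapacity
});
const hits = cache.getOrSet('hits', 0); // returns 0 and sets the key
cache.clearExpired(); // manually drop expired entries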
BinaryHeap
BinaryHeap
stores a collection of objects in such a way that the highest priority element is always on top.
Constructor:
BinaryHeap(compareFn?: (a: T, b: T) => number)
compareFn - Function to compare the priority of a and b. Returns positive if a has higher priority than b; returns negative if b has higher priority than a. If this parameter is omitted, T.compare() is used if it exists, otherwise native number/string comparison.
Properties:
- size
Methods:
push(element: T): number
Adds an element into the collection and return the new size of the collection.
Alias: enqueue
pop(): T
Returns the element of the top of the collection and removes it from the collection.
Alias: dequeue
peek(): T
Returns the element of the top of the collection without removing it.
pollLast(): T
Removes the element at the end of the collection and returns it.
⚠ The heap is not sorted, so the element at the end may not have the lowest priority.
clear()
Example:
const heap = new BinaryHeap<number>();
heap.push(1);
heap.push(-1);
heap.push(0);
// Content: 1, -1, 0
console.log(heap.size) // -> 3
heap.pop(); // --> 1
heap.pop(); // --> 0
heap.pop(); // --> -1
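A sketch of supplying a custom compareFn to invert the ordering (smallest value first), following the constructor contract above:
// compareFn returns positive when `a` has higher priority;
// here smaller numbers get higher priority, so pop() returns the smallest first
const minHeap = new BinaryHeap<number>((a, b) => b - a);
minHeap.push(1);
minHeap.push(-1);
minHeap.push(0);
minHeap.pop(); // --> -1
minHeap.pop(); // --> 0
minHeap.pop(); // --> 1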
Queue
A FIFO data structure with optional high and low watermarks.
Property:
- size
Methods:
- constructor(lowWaterMark: number = 0, highWaterMark: number = Number.MAX_SAFE_INTEGER)
- enqueue(data: T): void
- dequeue(): T
- peek(): T
- isEmpty(): boolean
Example:
const queue = new Queue<number>(2, 4);
queue.enqueue(1);
queue.underLowWatermark(); // returns true
queue.enqueue(2);
queue.enqueue(3);
queue.enqueue(4);
queue.enqueue(5);
queue.aboveHighWatermark(); // returns true
queue.peek(); // returns 1
queue.dequeue(); // returns 1
queue.dequeue(); // returns 2
tellog
- TEL log API for sending TEL to 'whttp'.
Steps:
1. Design the TEL.
2. Hardcode the TEL in 'henson-cpp': 'henson-cpp/OMS/OMTEL/OMTELSERV/src/TelAsn1Info.cpp'.
3. Use the tellog API to send the TEL to 'telserver'.
Example
- Hardcode in TelAsn1Info.cpp.
insertObjectInfo("0.77.97.131", "updateApnsTokenStatus", NoneE, true, ConstructedE, 1);
insertObjectInfo("0.77.97.131.21", "xTransactionId", StringE, false);
insertObjectInfo("0.77.97.131.22", "apnsTokens", StringE, false );
insertObjectInfo("0.77.97.131.23", "tokenStatus", StringE, false );
insertObjectInfo("0.77.97.131.25", "errorCode", StringE, true );
insertObjectInfo("0.77.97.131.26", "errorMsg", StringE, true );
- Use 'tellog' API in node.js.
import {tellog} from '@anpingli/common';
// init(name: string, ip: string = '127.0.0.1', port: number = 9999);
// don't forget to init.
tellog.init('test', '127.0.0.1', 9999);
// create a TelLogObject.
let telLogObj: TelLogObject = {
sesProcWtdr: {
wtdrId: 1,
imsi: '460001123313123',
actionList: {
sesAction: {
updateApnsTokenStatus: {
xTransactionId: '123123123132',
apnsTokens: '12312313123',
tokenStatus: 'valid'
}
}
}
}
};
// send log to 'telserver';
// if success, return true.
// if fail, throw an error.
try {
await tellog.log(telLogObj);
} catch(error) {
// ...
}
TelLogObject
{
sesProcWtdr: {
wtdrId: number; // mandatory
xTransactionId?: string;
recordingEntity?: string;
msisdn?: string;
url?: string;
httpMethod?: string;
sessionId?: string;
sourceIPAddress?: string;
sourcePort?: number;
imsi?: string;
returnCode?: number;
failureReason?: string;
uaIdentificationString?: string;
manufacturer?: string;
model?: string;
requestInTime?: string;
responseOutTime?: string;
virtualGWName?: string;
virtualGWExternalIpAddress?: string;
timeStamp?: string;
imei?: string;
deliveryResult?: string;
actionList: { // mandatory
sesAction: any; // contains the specific fields of TEL.
};
};
}
EgressLog
- EgressLog Api to save EgressLog to disk
EgressLog records operations to an external node (e.g. provisioning to AAA). Below is the record field description:
Field | Type | optional | Detail Description |
---|---|---|---|
sequence | number | no | The sequence number of the transaction. |
transactionId | string | no | The transaction identification. |
subscriberId | Object | yes | Key attributes of the request that identify a subscription (i.e. vIMSI). |
subscriberId.msisdn | string | yes | MSISDN as a subscriber ID. |
subscriberId.imsi | string | yes | IMSI as a subscriber ID. |
subscriberId.vImsi | string | yes | vIMSI as a subscriber ID. |
subscriberId.userId | string | yes | User ID as a subscriber ID. |
subscriberId.eid | string | yes | EID as a subscriber ID. |
subscriberId.deviceInfo | Object | yes | DeviceInfo as a subscriber ID. It contains device ID and application category |
subscriberId.deviceInfo.deviceId | string | yes | Refer to subscriberId.deviceInfo . |
subscriberId.deviceInfo.applicationCategory | string | yes | Refer to subscriberId.deviceInfo . |
subscriberId.operatorId | string | yes | The operatorId corresponding to the imsi, msisdn |
apiUser | string | yes | The user name of HTTP authentication used to send the request. |
hostName | string | no | The identity of the Henson instance. |
startTime | number | no | The time when the transaction begins. The format is ISO 8601 "YYYY-MM-DDTHH:mm:ss.sssZ". |
executionTime | number | no | The response duration (ms). |
backendNodeType | string | no | The backend node that the request is sent to. |
sourceIp | string | yes | The source IP of the request. |
sourcePort | number | yes | The source port of the request. |
destinationIp | string | yes | The destination IP of the request. |
destinationPort | number | yes | The destination port of the request. |
action | string | no | Brief action of the request |
errorDescription | string | yes | Error description that indicates error message or exceptions |
request | Object | no | Detail fields of the request, so the fields start with request. below. |
request.method | string | yes | HTTP method of the request. |
request.path | string | yes | HTTP path of the request. |
request.headers | Object | yes | HTTP headers of the request. |
request.query | Object | yes | HTTP query of the request. |
request.body | Object | yes | HTTP body of the request. |
response | object | yes | Detail fields of the response, so the fields start with response. below. |
response.statusCode | Object | yes | HTTP status code of the response. |
response.headers | Object | yes | HTTP headers of the response. |
response.body | Object | yes | HTTP body of the response. |
Usage
Interfaces
interface Request {
method?: string;
path?: string;
headers?: any;
query?: any;
body?: any;
}
interface Response {
statusCode?: number;
headers?: any;
body?: any;
}
interface SubscriberId {
msisdn?: string;
imsi?: string;
vImsi?: string;
userId?: string;
deviceInfo?:
{
deviceId: string;
applicationCategory: string
};
eid?: string;
}
interface EgressLog {
transactionId: string;
subscriberId?: SubscriberId;
apiUser?: string;
startTime: number; // timestamp in ms
executionTime: number;
backendNodeType: string;
sourceIp?: string;
sourcePort?: number;
destinationIp?: string;
destinationPort?: number;
action: string;
errorDescription?: string;
request: Request;
response?: Response;
}
Methods
isEnabled(): boolean
Check whether egress log is enabled or not.
log(egressLog: EgressLog): Promise<void>
Output the egress log to the module's egress log file. ⚠ It may throw an error, so catch is mandatory.
import { egresslog } from '@anpingli/common';
if(egresslog.isEnabled()) {
const request:egresslog.Request = {
body: {
name: '401017519417201@proxy.com',
imsi: '999990000047056',
msisdn: '13475593529',
apn: 'ims.tmus',
userStatus: 'enable',
certificateId: '3792402676649527712',
certificateIssuerName: 'DC=org, DC=gsm1900, CN=T-Mobile USA DES Issuer CA 01',
transactionId: '522361529377014390401017519417201',
}
};
const response:egresslog.Response = {
body: {
message: '1 object(s) created.',
}
};
const startTime = Date.now();
const endTime = Date.now();
const egressLog:egresslog.EgressLog = {
transactionId: '146161521617280893999990027162514',
subscriberId: {
imsi: '999990000047056'
},
action: 'xxx',
apiUser: 'mts',
backendNodeType: 'aaa',
sourceIp: '172.17.22.120',
sourcePort: 2356,
destinationIp: '10.175.147.162',
destinationPort: 22,
startTime,
executionTime: endTime - startTime,
request,
response,
};
egresslog.log(egressLog).catch((error) => {
logger.error(`failed to output egress log`, error);
});
}
alarm
- alarm Api for sending alarms
In Linux, the command-line tool agentxtrap is currently used to send traps to net-snmp. Then net-snmp sends the trap to ESA.
Alarm Config
Each process should have its own config files, for example sam.json and sam_customized.json.
sam.json is the default configuration file.
{
"samDbConnFailure": {
"oid": "1.3.6.1.4.1.193.126.3.65.1.1.1.1.1",
"threshold": 50,
"interval": 60,
"active": true,
"moduleId": "sesSam",
"errorCode": 101,
"severity": 3,
"modelDescription": "SAM failed to connect to SDD.",
"activeDescription": "SAM failed to connect to SDD.",
"eventType": 2,
"probableCause": 506,
"documentation": {
"description": "SAM failed to connect to SDD.",
"alarmingObject": "SAM",
"raisedBy": "SAM received null or failure responses from SDD.",
"clearedBy": "SAM gets successful responses from SDD.",
"proposedRepairAction": "- Check the SDD status.\n - Verify the network between SAM and SDD."
}
},
"crossSiteSamConnFailure": {
"oid": "1.3.6.1.4.1.193.126.3.65.1.2.1.1.1",
"threshold": 50,
"interval": 60,
"active": true,
"moduleId": "sesSam",
"errorCode": 201,
"severity": 3,
"modelDescription": "SAM failed to receive any response or received 5xx error responses from the SAM server on the other Henson site.",
"activeDescription": "SAM failed to receive any response or received 5xx error responses from the SAM server on the other Henson site.",
"eventType": 2,
"probableCause": 506,
"documentation": {
"description": "The SAM server failed to receive any response, or received 5xx error responses from the SAM server on the other Henson site during OAuth authorization.",
"alarmingObject": "SAM",
"raisedBy": "The total number of no response and error responses with status code 5xx that SAM received from the other SAM server reached the alarm raising threshold.",
"clearedBy": "The total number of no response and error responses with status code 5xx that MTS received from other SAM server is lower than the alarm clearing threshold.",
"proposedRepairAction": "Check that the network between the two SAM servers is working."
}
}
}
sam_customized.json is the customized configuration file. Customers can configure the threshold, interval, etc.
ⓘ If the default alarm configuration file name is test.json
then the customized configuration file name must be test_customized.json
.
{
"alarms":{
"samDbConnFailure":{
"description":"This alarm is sent when it is not possible to connect to the database server.",
"threshold": 70, //Alarm will send when alarms > 80%. Alarm will clear when alarms < 80% * 0.7.
"interval":60 //if interval is equal to 0, then alarm will send and clear immediately.
},
"crossSamConnFailure":{
"description":"This alarm is sent when it is not possible to connect to the cross-site SAM",
"threshold": 70,
"interval":60
}
}
}
The alarm module reads sam.json by default and overwrites its properties with those configured in sam_customized.json.
Example
import {applog, alarm} from '@anpingli/common';
applog.init('sam');
let logger = applog.logger('alarm');
let sender = new alarm.AgentxTrap({ logger: logger.warn.bind(logger) });
alarm.init('sam', sender); //parameter 'sam' is the file name of alarm 'sam.json'.
let dbConnAlarm = alarm.getAlarm('samDbConnFailure');
dbConnAlarm.ttl = 300 * 1000; // optional: ttl is the alarm life cycle in milliseconds. Default is 0. The alarm is automatically cleared after ttl. If the alarm interval or ttl is 0, there is no automatic clearing.
function monitor(): void {
let myDate = new Date();
logger.debug('Start monitor!, time = ' + myDate.getTime() / 1000);
let query = 'SELECT 1';
pgPool.query(query).then((result) => {
if (result.rows.length >= 0) {
logger.debug('monitor db ok');
dbConnAlarm.clearAlarm();
} else {
logger.error('monitor db fail, query result is null');
dbConnAlarm.raiseAlarm();
}
}).catch((e) => {
logger.error('monitor db fail, error is %s', e);
dbConnAlarm.raiseAlarm();
});
}
tracer
- trace api for call stack trace
Use tracer to trace the call stack in node.js. Promises are supported.
Usage
Enumerations:
enum TraceTypeId {
ip = 0,
imsi,
deviceId,
ownerId,
xffIp,
msisdn,
email,
uri,
sipUsername,
serviceInstanceToken,
eid,
iccid
}
Interfaces:
interface TraceIdentity {
traceTypeId: TraceTypeId;
userIdentity: string;
}
Methods:
trace(traceId: string | TraceIdentity[], transactionId: string, callback: TraceFunction): TraceFunction
Trace the function callback. All functions in the call stack of callback will be traced. transactionId should not be undefined. If traceId is an array of TraceIdentity, traceId is checked against the configurations in TraceLogConfig.json to determine whether the trace is hit. If traceId is a string, the trace is hit. If traceId is undefined, the trace is not hit. If the environment variable TRACER_DEBUG is set, tracer ignores the validation of traceId.
isHit(): boolean
Check whether the trace is hit.
getTraceId(): string
Get the traceId of the current trace context if the trace is hit.
getTransactionId(): string
Get the transactionId of the current trace context if the trace is hit.
set(key: string, value: any): boolean
Set a value on the current trace context if the trace is hit.
get(key: string): any
Get a value from the current trace context if the trace is hit.
Example
- Normal Function.
import {tracer} from '@anpingli/common';
function normalFunction(test: string): string {
return test;
}
function testFunction1(test: string): string {
tracer.set('test', test);
return testFunction2();
}
function testFunction2(): string {
return tracer.get('test');
}
function testFunction3(): string {
return tracer.get('test');
}
// normal call
normalFunction('test'); // return 'test';
// trace normalFunction
tracer.trace('traceId', 'transactionId', normalFunction)('test'); // return 'test';
// normal call
testFunction1('test'); // return undefined
// trace testFunction1. testFunction2 is traced too because it is in the call stack of testFunction1.
tracer.trace('traceId', 'transactionId', testFunction1)('test'); // return 'test'
// normal call
testFunction3(); // return undefined;
// trace by ids.
const traceIds: tracer.TraceIdentity[] = [];
traceIds.push({ traceTypeId: tracer.TraceTypeId.msisdn, userIdentity: '+8613988889999' });
traceIds.push({ traceTypeId: tracer.TraceTypeId.imsi, userIdentity: '4300000000000001' });
tracer.trace(traceIds, 'transactionId', testFunction1)('test'); // return 'test'
- Promise.
import {tracer} from '@anpingli/common';
function testFunction1(test: string): Promise<string> {
return new Promise((resolve, reject) => {
return resolve(test);
});
}
// normal call
let normalResult = await testFunction1('test');
// trace testFunction1. Functions in its call stack are traced too.
let traceResult = await tracer.trace('traceId', 'transactionId', testFunction1)('test'); // traceResult === normalResult
expressTrace
- trace middleware for express
Usage
trace(options?: TraceOptions): express.RequestHandler
TraceOptions:
serviceName: string. Service name used to generate a transaction-id when one cannot be obtained from the request; mandatory.
traceIdName: string. Trace id name in the HTTP request header, case-insensitive, default is 'trace-id'.
transactionIdName: string. Transaction id name in the HTTP request header, case-insensitive, default is 'x-transaction-id'.
setHeader: boolean. Whether to set the transaction-id and trace-id headers in the response. Default is false.
import * as express from 'express';
import {expressTrace} from '@anpingli/common';
let app = express();
//all middleware used in express will be traced;
app.use(expressTrace.trace({
serviceName: 'test',
traceIdName: 'x-trace-id',
transactionIdName: 'x-transaction-id',
setHeader: true
}));
//only the 'index' router will be traced;
app.use('/index', expressTrace.trace({
serviceName: 'test',
traceIdName: 'x-trace-id',
transactionIdName: 'x-transaction-id'
}));
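Once the middleware is installed, the tracer API documented above can be used inside handlers to read the trace context; a hedged sketch (the route and response fields are illustrative):
import {tracer} from '@anpingli/common';
app.get('/status', (_req, res) => {
  if (tracer.isHit()) {
    // the middleware established the trace context for this request
    res.json({ traceId: tracer.getTraceId(), transactionId: tracer.getTransactionId() });
  } else {
    res.json({});
  }
});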
coder
- general text encode/decode module
coder
provides unified interfaces to encode/decode text. The format of input text must be utf8
. The general encode format is:
| Byte | Encode Format |
| ---: | :----------------- |
| 0 | 0x09 |
| 1-n | Base64 Encode Text |
The Base64 Encode Text
format is:
Byte | Encode Format |
---|---|
0 | Version |
1-n | Encode Text |
The available Version
and Encode Text
Algorithm types are:
Version | Encode Algorithm |
---|---|
0 | AES-256-CBC with static IV |
Methods:
- encode(version: number, text: string, password?: string): string;
encode text
- decode(text: string, password?: string): string;
decode text. If text doesn't start with 0x09, return original text.
Usage
import {coder} from '@anpingli/common';
const password = 'I am password';
const text = 'Test Successfully.';
const encodeResult = coder.encode(0, text, password);
const decodeResult = coder.decode(encodeResult, password);
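As noted above, decode returns the original text when the input is not encoded (i.e. does not start with 0x09); a brief illustration:
const passthrough = coder.decode('plain text', password);
// passthrough === 'plain text' because the input doesn't start with 0x09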
Exception
- custom error object
It is considered good practice to only throw the Error object itself, or an object that uses the Error object as the base for user-defined exceptions. The fundamental benefit of Error objects is that they automatically keep track of where they were built and originated. Another benefit is that error handling can be organized with instanceof.
Creating custom errors is often necessary because the built-in Error class only accepts a message parameter on construction, while an application usually needs more information about an exception, such as an error code.
@anpingli/common
provides a custom error class Exception
. It extends built-in Error
with extra properties:
code: string - Application defined error code
valueObject: any - An arbitrary object to store additional information about the error
Applications can directly throw Exception, or extend it to create specific error types. Inside a catch block, it's convenient to handle different error types separately by testing the error type with instanceof.
Read the example below on how to use it:
import { Exception } from '@anpingli/common';
throw new Exception(message, code, valueObject);
throw new Exception(message, code);
throw new Exception(message, valueObject);
throw new Exception(message); // = Error(message)
class CommunicationError extends Exception {
constructor(message: string, reason: {url: string} ) {
super(message, 'ERROR-001', reason);
}
}
class DatabaseError extends Exception {
constructor(message: string) {
super(message, 'ERROR-002');
}
}
try {
throw new CommunicationError('error message...', {url});
} catch (e) {
if (e instanceof CommunicationError) {
// communication error
log.error(`Failed to connect to ${e.valueObject.url}`, e);
} else if (e instanceof DatabaseError) {
// database error
} else {
// other error
}
}
MemSessionStore
- In-Memory store for express-session
A session store implementation for express-session.
The default MemoryStore for express-session leads to a memory leak because it has no suitable way to make sessions expire.
The sessions are still stored in memory, so they're not shared with other processes or services.
Usage
Constructor(ttl: number, maxCapacity?: number)
ttl is the time to live of a session in milliseconds.
maxCapacity is the maximal capacity of the store, default is unlimited.
import * as express from 'express';
import * as session from 'express-session';
import { MemSessionStore } from '@anpingli/common';
class Portal {
private app: express.Express;
constructor() {
this.app = express();
const sessionTimeOut = 30 * 1000 * 60; // 30 minutes
// Use session with MemSessionStore
this.app.use(session({
secret: 'your secret',
resave: false,
saveUninitialized: false,
cookie: {
httpOnly: false,
secure: true
},
store: new MemSessionStore(sessionTimeOut, 100000)
}));
}
}
Cidr
- utilities to validate/match IP (v4/v6) address/subnet
Cidr provides utilities to validate/match IP addresses:
- Class Cidr: a utility to validate an IP address/subnet and to match an address/subnet against another address/subnet
- Function isMatched: a utility to match an address/subnet against another address/subnet
Usage
Class Cidr
The Cidr class is a type for validating and matching CIDR (Classless Inter-Domain Routing) addresses, supporting both IPv4 and IPv6. A host IP address is treated as a CIDR address too. For example, 10.10.10.10 is equivalent to 10.10.10.10/32.
Class Methods:
new Cidr(ipstr: string)
Create a new Cidr object from a string (ipstr must be trimmed).
Cidr.from(ipstr: string)
Allocate a Cidr from a string. Cidr.from is recommended rather than new Cidr.
ipVersion(): number
Get the IP version:
- 0 for invalid address.
- 4 for IPv4.
- 6 for IPv6.
isValid(): boolean
Check if the Cidr object is valid. As new Cidr(...) and Cidr.from(...) never throw errors, use this function to validate.
netmask(): number
Get the netmask length (an IPv4 address is treated as an IPv4-compatible IPv6 address).
For example, if the IP is 10.10.10.0/24, it will be treated as ::FFFF:10.10.10.0/120.
toString(): string
Get the original IP string.
isMatched(cidr: Cidr | string): boolean
Match this object against another CIDR (whether it is a host/subnet of the target cidr).
Cidr.isMatched(matcher: Cidr | string, matchee: Cidr | string): boolean
Match Cidr objects or CIDR strings (whether matcher is a host/subnet of the target matchee).
import { Cidr, isMatched } from '@anpingli/common';
const CidrEntity1 = new Cidr('192.168.1.0/22');
CidrEntity1.isValid(); // true, valid host/subnet
CidrEntity1.ipVersion(); // 4, IPv4
CidrEntity1.netmask(); // 118 (22 + 96), IPv4 is treated as a IPv6 `::FFFF:192.168.1.0/118`
const ip4Host1 = '192.168.1.1';
const ip4Host2 = '192.168.1.2';
const ip4Host5 = '192.168.1.5';
const ip4Cidr12 = '192.168.1.0/30';
const ip4Cidr125 = '192.168.1.0/29';
const cidrEntity1 = new Cidr(ip4Host1);
const cidrEntity2 = Cidr.from(ip4Host2);
const cidrEntity5 = Cidr.from(ip4Host5);
const cidrEntityCidr12 = new Cidr(ip4Cidr12);
const cidrEntityCidr125 = new Cidr(ip4Cidr125);
cidrEntity1.isMatched(cidrEntityCidr12); // true, `192.168.1.1` is in `192.168.1.0/30`
Cidr.isMatched(ip4Host1, ip4Cidr12); // true, `192.168.1.1` is in `192.168.1.0/30`
Cidr.isMatched(ip4Cidr12, ip4Host1); // false, notice the order of the 2 parameters
cidrEntity5.isMatched(cidrEntityCidr12); // false, `192.168.1.5` is NOT in `192.168.1.0/30`
cidrEntity5.isMatched(cidrEntityCidr125); // true, `192.168.1.5` is in `192.168.1.0/29`
isMatched(ip4Host5, ip4Cidr125); // true
const ip6Host1 = '2001:DB8::88F1';
const ip6Host2 = '2001:DB8::88F2';
const ip6HostE = '2001:DB8::88FE';
const ip6Cidr12 = '2001:DB8::88F0/126';
const ip6Cidr12E = '2001:DB8::88F0/124';
const ip6Check1 = new Cidr(ip6Host1);
const ip6Check2 = new Cidr(ip6Host2);
const ip6CheckE = new Cidr(ip6HostE);
const ip6CheckCidr12 = new Cidr(ip6Cidr12);
const ip6CheckCidr12E = new Cidr(ip6Cidr12E);
ip6Check1.isMatched(ip6CheckCidr12); // true
Cidr.isMatched(ip6Host1, ip6Cidr12); // true
ip6CheckE.isMatched(ip6CheckCidr12); // false
Cidr.isMatched(ip6HostE, ip6Cidr12); // false
CachingFileReader
A cached version of node.js fs module read-only API implementation
Constructor(ttl: number = 1000 * 60 * 15)
ttl is the maximum number of milliseconds a cached file lives in memory, default is 15 minutes.
There is only one API exposed by this module; it has the same parameters as the corresponding node.js fs API.
CachingFileReader.readFile(path[,options]): Promise<string|Buffer>
NOTE If no encoding is specified, a promise with a raw buffer is returned.
NOTE When the path is an instance of URL, the cache mechanism will not take effect.
import { CachingFileReader } from '@anpingli/common';
// default to 15 minutes expire time
const fs = new CachingFileReader();
fs.readFile('foo.txt').then((content) => {
// Normal speed this time.
fs.readFile('foo.txt').then((content) => {
// Much faster this time.
});
});
const content = fs.readFileSync('foo.txt');
Application Common
internal-service
- utility to manage internal services
internal-service
supports functions to manage internal services (services deployed on internal-data-network).
Types:
type InternalService = http.Server | https.Server
The type of the return value when initInternalService or startInternalService is successful.
Methods:
initInternalService(app: express.Express): InternalService
Initialize a Henson service that is deployed on internal-data-network. Whether HTTP or HTTPS is applied depends on the parameter palInternalServiceProtocol.
The function might throw exceptions:
- Exception thrown by @anpingli/common's config.get.
- Exception thrown by fs.readFileSync.
So it must be wrapped in try-catch.
⚠ The returned server is only initialized, without listening.
Methods:
startInternalService(app: express.Express, port: number, errorHandler: (error: Error) => void): InternalService
Initialize and start a Henson service that is deployed on internal-data-network. Whether HTTP or HTTPS is applied depends on the parameter palInternalServiceProtocol.
The function might throw exceptions:
- Exception thrown by fs.readFileSync.
- Exception thrown by express's Application.listen.
So it must be wrapped in try-catch.
errorHandler is used to handle the asynchronous error events of the server listening.
import { initInternalService, startInternalService, InternalService } from '@anpingli/common';
const internalDataNetwork: string = <string>process.env.INTERNAL_DATA_IP;
function initErrorHandler(error: Error): void {
logger.error('init service failed with error', error);
process.exit(1);
};
function startErrorHandler(error: Error): void {
logger.error('start service failed with error', error);
process.exit(1);
};
function successCallback(): void {
logger.info('start service successfully');
}
const app = express();
let service: InternalService;
/**
* 1st usage: init and then start
*/
// init service
try {
service = initInternalService(app);
} catch (error) {
initErrorHandler(error);
}
// start service
service.listen(8080, internalDataNetwork, successCallback).on('error', startErrorHandler);
/**
* 2nd usage: start in one step
*/
try {
service = startInternalService(app, 8080, startErrorHandler);
} catch (error) {
initErrorHandler(error);
}
logger.info('start service successfully');
/**
* stop service if needed, for both usages
*/
service.stop();
cluster
- enable cluster mode for your application
start(workerFile: string, workerNumber: number): void
Start a cluster for your application.
- workerFile: File name of the worker file, default is main.
- workerNumber: Number of worker threads, default is 3.
Your application should contain two files. One is the worker file, which you may name main.ts; it looks like:
import * as http from 'http';
http.createServer((_req: http.IncomingMessage, res: http.ServerResponse) => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('okay');
}).listen(8080, '127.0.0.1', () => {
console.log('listen successfully!');
});
The other one is the index file named index.ts
, which looks like:
import { cluster } from '@anpingli/common';
cluster.start('main', 3);
Then after building and starting your application with node build/src/index.js
or NODE_ENV=production node lib/index.js
, your application is started in cluster mode and 3 worker threads are created.
Process Title
When multiple node.js applications are running, all processes look the same in ps output, which makes them hard to distinguish.
$ ps -x
2491 pts/6 Sl+ 0:00 node lib/index.js --param xxx
2492 pts/6 Sl+ 0:00 node lib/index.js --param xxx
When the application is run by npm start, this library enhances the process title by replacing the script name with the package name from package.json.
If the package name has a scope (e.g. @anpingli/myservice), the scope will be stripped from the name.
No API is exposed for this function; just importing the library does the trick.
$ ps -x
2491 pts/6 Sl+ 0:00 node myservice --param xxx
2491 pts/6 Sl+ 0:00 node anotherservice --param xxx
⚠ Because the process title string is pre-allocated by the underlying runtime, the replacement package name cannot be longer than the original script name. If the package name is too long, it will be truncated.
$ ps -x
2491 pts/6 Sl+ 0:00 node my-long-serv --param xxx
^^^^^^^^^^^^ truncated
catch-unhandled
It's a common issue that a promise is returned but the rejection is never caught.
Node only prints UnhandledPromiseRejectionWarning to stderr without a stack trace,
which makes it difficult to find the place where the catch is missing.
This module is implicitly imported and writes an error log to the main logger
on unhandled promise rejection.
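A minimal illustration, assuming the library is imported somewhere in the application:
import '@anpingli/common';
// No .catch() here: without this module Node only prints UnhandledPromiseRejectionWarning;
// with it, the rejection reason is written as an error entry to the 'main' logger.
Promise.reject(new Error('rejection that nobody catches'));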
configFilter
- filter Api for common formatted configuration
Get the first matched value from a collection of filter/value objects for a target object.
Usage
configFilter(objCollection: Array<{ [index: string]: any }>, targetObj: { [index: string]: any }, options: Options = {}): any
objCollection - An array of objects, each of which contains a set of filter/value pairs.
targetObj - The target object to match.
options - Options defining the filter/value keys, the match policy and the default value when targetObj matches nothing in objCollection.
options.filterKey - The key of the filter (default: "filter").
options.valueKey - The key of the value to return.
- string: the valueKey-indexed part of the matched object will be returned.
- undefined: the whole matched object will be returned.
options.matchPolicies - The match policy for some filter properties.
- MatchPolicy.Equal: equal value (default)
- MatchPolicy.RegExp: regular expression (only applicable for string)
options.defaultValue - The return value when no candidate matches.
import { applog, MatchPolicy, FilterOptions, configFilter } from '@anpingli/common';
const logger = applog.logger('configFilter');
const configuration = {
"clients": {
"specific": [{
"filter": {
"requestorName": "client-A"
},
"rules": {
"authentication": {
"basic": [{
"username": "root-A",
"password": "wapwap-A"
}],
"oauth": {
"channel": "External"
}
},
"applicationCategory": "app-cat-ott-A",
"supportedServiceNames": [
"all"
],
"scopes": [
"batch-deletion"
]
}
},
{
"filter": {
"requestorName": "client-B",
"otherField": [1, 2, 3]
},
"rules": {
"authentication": {
"basic": [{
"username": "root-B",
"password": "wapwap-B"
}],
"oauth": {
"channel": "External"
}
},
"applicationCategory": "app-cat-ott-B",
"supportedServiceNames": [
"all"
],
"scopes": [
"batch-deletion"
]
}
},
{
"filter": {
"requestorName": "client-C",
"otherField": [1, 2, 3],
"otherField2": "other2"
},
"rules": {
"authentication": {
"basic": [{
"username": "root-C",
"password": "wapwap-C"
}],
"oauth": {
"channel": "External"
}
},
"applicationCategory": "app-cat-ott-C",
"supportedServiceNames": [
"all"
],
"scopes": [
"batch-deletion"
]
}
},
{
"filter": {
"requestorName": "client-D",
"otherField2": "^[A-Za-z0-9]+$"
},
"rules": {
"authentication": {
"basic": [{
"username": "root-D",
"password": "wapwap-D"
}],
"oauth": {
"channel": "External"
}
},
"applicationCategory": "app-cat-ott-D",
"supportedServiceNames": [
"all"
],
"scopes": [
"batch-deletion"
]
}
}
],
"default": {
"authentication": {
"basic": [{
"username": "root-default",
"password": "wapwap-default"
}]
},
"applicationCategory": "app-cat-ott-default",
"supportedServiceNames": [
"all"
],
"scopes": []
}
},
"serverCertificate": {
"key": "/opt/miep/certs/origin_server_ssl/server_cert/dummy.key",
"certificate": "/opt/miep/certs/origin_server_ssl/server_cert/dummy.cert"
},
"servicePort": {
"http": 9198,
"https": 8091
}
}
// target only has one field, `client-A` will be selected.
const objCollection = configuration.clients.specific;
const objTarget = { requestorName: 'client-A' };
const objOptions: FilterOptions = {
filterKey: 'filter',
valueKey: 'rules'
};
const ret = configFilter(objCollection, objTarget, objOptions);
if (ret.applicationCategory === 'app-cat-ott-A') {
logger.info('rules in client-A will be selected');
}
// target has two fields, the `otherField` is included in the array of `client-B`, `client-B` will be selected.
const objCollection = configuration.clients.specific;
const objTarget = { requestorName: 'client-B', otherField: 3 };
const objOptions: FilterOptions = {
filterKey: 'filter',
valueKey: 'rules'
};
const ret = configFilter(objCollection, objTarget, objOptions);
if (ret.applicationCategory === 'app-cat-ott-B') {
logger.info('rules in client-B will be selected');
}
// target has two fields, the `otherField` is not included in the array of `client-B`, `default` will be selected.
const objCollection = configuration.clients.specific;
const objTarget = { requestorName: 'client-B', otherField: 9 };
const objOptions: FilterOptions = {
filterKey: 'filter',
valueKey: 'rules',
defaultValue: configuration.clients.default
};
const ret = configFilter(objCollection, objTarget, objOptions);
if (ret.applicationCategory === 'app-cat-ott-default') {
logger.info('rules in default will be selected');
}
// target has more fields than the filter, `client-C` will be selected.
const objCollection = configuration.clients.specific;
const objTarget = { requestorName: 'client-C', otherField: 2, otherField2: 'other2', otherField3: 'other3' };
const objOptions: FilterOptions = {
filterKey: 'filter',
valueKey: 'rules'
};
const ret = configFilter(objCollection, objTarget, objOptions);
if (ret.applicationCategory === 'app-cat-ott-C') {
logger.info('rules in client-C will be selected');
}
// one field in target using regular expression
const objCollection = configuration.clients.specific;
const objTarget = { requestorName: 'client-D', otherField2: 'abc123xyz' };
const objOptions: FilterOptions = {
filterKey: 'filter',
valueKey: 'rules',
defaultValue: configuration.clients.default,
matchPolicies: {
requestorName: MatchPolicy.Equal,
otherField2: MatchPolicy.RegExp
}
};
const ret = configFilter(objCollection, objTarget, objOptions);
if (ret.applicationCategory === 'app-cat-ott-D') {
logger.info('rules in client-D will be selected');
}
// options without valueKey, return all { key:value } object
const objCollection = configuration.clients.specific;
const objTarget = { requestorName: 'client-A' };
const objOptions: FilterOptions = {
filterKey: 'filter'
};
const ret = configFilter(objCollection, objTarget, objOptions);
if (ret.rules.applicationCategory === 'app-cat-ott-A') {
logger.info('whole client-A object will be selected');
}
FileCache
An in-memory file cache and format conversion API implementation
This API is to cache the local file into the memory, in addition, prov