@rootfor/monkeypen-ts v1.1.0
Introduction
A utility library for root backend services. It exposes common infrastructural components that facilitate inter-service communication between microservices.
Supported features:
- Cache: A generic caching implementation for object storage in Redis, plus a wrapper for API controllers that can cache the result of a code segment
- Queues: Helpers for building event infrastructure to queue jobs. This is a wrapper over Redis + BullMQ.
- Pub/Sub: Helpers for building event infrastructure so services can communicate via pub/sub. This is a wrapper over ioredis pub/sub.
- Logging: Unified logging across all backend services. This is a wrapper over the winston logging library.
- Utility: Simple util functions such as parsers, converters, etc.
init
Before using any service in the library, the user is expected to call init:
interface MonkeyPen {
  init: (config: IMonkeyPenConfig) => void;
}
interface IMonkeyPenConfig {
  // The environment name (alpha, prod, dev, stage, etc.)
  environment: string;
  // The Redis connection URL used for storage and other purposes
  redisUrl: string;
  // A Redis instance that overrides the redisUrl connection above;
  // useful for mocking Redis in tests
  redisOverride?: any;
}
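As a sketch of the call site (the stand-in `MonkeyPen` object below only mirrors the interfaces above so the snippet is self-contained; real code would import it from `@rootfor/monkeypen-ts`):

```typescript
// Stand-in config type mirroring IMonkeyPenConfig above.
interface IMonkeyPenConfig {
  environment: string;
  redisUrl: string;
  redisOverride?: any;
}

// Hypothetical stand-in for the library's entry point, used here so
// the snippet runs on its own; in real code, import MonkeyPen from
// '@rootfor/monkeypen-ts' instead.
const MonkeyPen = {
  initialized: false,
  init(config: IMonkeyPenConfig): void {
    if (!config.environment || !config.redisUrl) {
      throw new Error('environment and redisUrl are required');
    }
    this.initialized = true;
  },
};

// Call init once at service startup, before using cache, queues,
// pub/sub, or logging.
MonkeyPen.init({
  environment: 'dev',
  redisUrl: 'redis://localhost:6379',
});
```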
Cache
Can be used to store JS objects, and also to cache Mongo documents and API responses.
Store
A simple API to store and retrieve records:
export const store = async (
  key: string,
  value: string,
  expiryInSecs = 60 * 60 * 6
): Promise<void> => {};
export const getValue = async (key: string): Promise<string | null> => {};
To store and retrieve JSON values:
export const storeJson = async (
  key: string,
  value: unknown,
  expiryInSecs = 60 * 60 * 6
): Promise<void> => {};
export const getJson = async (key: string): Promise<any | null> => {};
An API to delete an existing value from the cache:
export const deleteByKey = async (key: string): Promise<void> => {};
Note: These APIs are wrappers over the ioredis npm library.
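To illustrate the intended semantics, here is a minimal in-memory sketch of these APIs, with a `Map` standing in for Redis (illustrative only; the real implementations are backed by ioredis):

```typescript
// In-memory stand-in for Redis: each entry keeps its value and an
// absolute expiry timestamp.
const cache = new Map<string, { value: string; expiresAt: number }>();

const store = async (
  key: string,
  value: string,
  expiryInSecs = 60 * 60 * 6
): Promise<void> => {
  cache.set(key, { value, expiresAt: Date.now() + expiryInSecs * 1000 });
};

const getValue = async (key: string): Promise<string | null> => {
  const entry = cache.get(key);
  // Missing or expired entries read back as null.
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry.value;
};

// The JSON variants serialize on write and parse on read.
const storeJson = async (
  key: string,
  value: unknown,
  expiryInSecs = 60 * 60 * 6
): Promise<void> => store(key, JSON.stringify(value), expiryInSecs);

const getJson = async (key: string): Promise<any | null> => {
  const raw = await getValue(key);
  return raw === null ? null : JSON.parse(raw);
};

(async () => {
  await storeJson('user:42', { name: 'Ada' }, 60);
  const user = await getJson('user:42');
  console.log(user?.name); // prints "Ada"
})();
```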
Wrapper
This is a wrapper method that can wrap a function or snippet returning a value, and create a cache entry for the result:
export const rootCacheWrapper = async (
  // A namespace, e.g. an API name, collection name, etc.
  namespace: string,
  // A finer-grained query inside that namespace
  query: any,
  // The code block whose result is to be cached
  block: () => any,
  // Force a fetch from the source, ignoring the cache
  forceFetch: boolean = false,
  // The validity of the cached object, in seconds
  validity: number = 60 * 15,
  // Whether nulls, empty arrays, or empty objects may be cached
  allowEmpty: boolean = false,
  // Decides whether the newly fetched value should be cached,
  // based on the fetched value itself
  cacheCondition: (response: CacheResponse) => boolean = () => {
    return true;
  }
): Promise<CacheResponse> => {};
The cache response indicates whether the value was fetched from the cache:
export interface CacheResponse {
  // The object requested
  data: any | null;
  // Whether the value was fetched from the cache or from the source
  isCache: boolean;
}
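The behaviour described above can be sketched in memory (a hypothetical stand-in, not the library's implementation; the `validity` expiry is omitted here for brevity):

```typescript
interface CacheResponse {
  data: any | null;
  isCache: boolean;
}

// In-memory stand-in for the cache backing store.
const entries = new Map<string, any>();

// Sketch of the wrapper semantics: on a miss (or forceFetch) the
// block runs and its result may be cached; on a hit the cached
// value is returned with isCache = true.
const rootCacheWrapper = async (
  namespace: string,
  query: any,
  block: () => any,
  forceFetch = false,
  validity = 60 * 15, // expiry not modelled in this sketch
  allowEmpty = false,
  cacheCondition: (response: CacheResponse) => boolean = () => true
): Promise<CacheResponse> => {
  const key = `${namespace}:${JSON.stringify(query)}`;
  if (!forceFetch && entries.has(key)) {
    return { data: entries.get(key), isCache: true };
  }
  const data = await block();
  const response: CacheResponse = { data, isCache: false };
  const empty =
    data == null ||
    (Array.isArray(data) && data.length === 0) ||
    (typeof data === 'object' && Object.keys(data).length === 0);
  if ((allowEmpty || !empty) && cacheCondition(response)) {
    entries.set(key, data);
  }
  return response;
};

(async () => {
  const fetchUser = () => ({ id: 42, name: 'Ada' });
  const first = await rootCacheWrapper('users', { id: 42 }, fetchUser);
  const second = await rootCacheWrapper('users', { id: 42 }, fetchUser);
  console.log(first.isCache, second.isCache); // false true
})();
```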
Queue
We use Redis and BullMQ for running queues. The library takes care of deduplicating events across nodes, and of retrying on failure.
Create
Every service can create a queue using the following code:
this.deetoQ = MonkeyPen.fetchQueue(QueueName.DEETO_MAIN_QUEUE_V1)
this.chewbaccaQ = MonkeyPen.fetchQueue(QueueName.CHEBACCA_MAIN_QUEUE_V1)
Note: Though the library exposes a few queue names, you can pass any string as the queue name.
We have the major queue names already defined in the library:
declare namespace QueueName {
  const CHEBACCA_MAIN_QUEUE_V1 = 'q_chewbacca_main_v1';
  const DEETO_MAIN_QUEUE_V1 = 'q_deeto_main_v1';
  const ARTOO_MAIN_QUEUE_V1 = 'q_artoo_main_v1';
  const AESY_API_MAIN_QUEUE_V1 = 'q_aesy_api_main_v1';
}
Produce
Once you have a queue instance, you can produce events to the queue:
produce: (eventName: string, body: any) => void;
Eg:
chewbaccaQ.produce(CHEWBACCA_SYNC_EVENT, { accountAddress, chains, syncId });
Where CHEWBACCA_SYNC_EVENT is the event name (a business-logic-specific event name).
Consume
To consume from a queue, start the consumer:
consume: (genericEventDelegator: (payload: GenericEventBody) => void) => void;
This will pass the event data into the genericEventDelegator callback if the event is valid.
Note: All events have the same structure; this is handled internally by the library:
export interface EventMeta {
  id: string;
  ts: number;
  checksum: number;
  name: string;
}
export interface GenericEventBody {
  meta: EventMeta;
  data: any;
}
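The produce/consume flow and the envelope above can be sketched in-process (no Redis/BullMQ; `SketchQueue` is a hypothetical stand-in, and the checksum/dedup logic is omitted):

```typescript
interface EventMeta {
  id: string;
  ts: number;
  checksum: number;
  name: string;
}
interface GenericEventBody {
  meta: EventMeta;
  data: any;
}

// Minimal in-process stand-in showing how the library wraps every
// produced payload in the GenericEventBody envelope before it
// reaches the consumer's delegator.
class SketchQueue {
  private handler?: (payload: GenericEventBody) => void;

  consume(genericEventDelegator: (payload: GenericEventBody) => void): void {
    this.handler = genericEventDelegator;
  }

  produce(eventName: string, body: any): void {
    const event: GenericEventBody = {
      meta: {
        id: Math.random().toString(36).slice(2),
        ts: Date.now(),
        checksum: 0, // the real library computes a checksum for dedup
        name: eventName,
      },
      data: body,
    };
    this.handler?.(event);
  }
}

// Consumers typically branch on meta.name to dispatch events.
const q = new SketchQueue();
q.consume((payload) => {
  if (payload.meta.name === 'CHEWBACCA_SYNC_EVENT') {
    console.log('sync requested for', payload.data.accountAddress);
  }
});
q.produce('CHEWBACCA_SYNC_EVENT', { accountAddress: '0xabc', chains: [1], syncId: 's1' });
```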
Pub/Sub
This framework helps create a channel between all publishers and subscribers, which in our case are the services. The use case is to notify everyone of events such as sync complete or minting complete.
The easiest option is the common pub/sub manager, which lets you publish and subscribe to the same shared channel:
export const getCommonPubSubManager = (): RootPubSub => {};
To consume:
consume = (genericEventDelegator: (payload: GenericEventBody) => void) => {};
And to produce:
produce = (eventName: string, body: any) => {};
Apart from this, if you really need a new pub/sub manager that is not the common one, you can create one using the following method:
export const createPubSubManager = (channelName: string): RootPubSub => {};
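The key difference from a queue is fan-out: every subscriber on a channel receives each published event. A minimal in-process sketch (hypothetical stand-in; the real implementation is backed by ioredis pub/sub):

```typescript
type Delegator = (payload: { meta: { name: string }; data: any }) => void;

// In-process stand-in: every registered consumer receives every
// produced event, unlike a queue where one consumer handles it.
class SketchPubSub {
  private subscribers: Delegator[] = [];

  consume(genericEventDelegator: Delegator): void {
    this.subscribers.push(genericEventDelegator);
  }

  produce(eventName: string, body: any): void {
    const payload = { meta: { name: eventName }, data: body };
    for (const sub of this.subscribers) sub(payload);
  }
}

// Two services subscribed to the same channel both see the event.
const channel = new SketchPubSub();
channel.consume((p) => console.log('service A saw', p.meta.name));
channel.consume((p) => console.log('service B saw', p.meta.name));
channel.produce('SYNC_COMPLETE', { syncId: 's1' });
```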
Logging
It's simple: use the RootLogger namespace. The log level is controlled by the LOG_LEVEL environment variable.
Eg:
RootLogger.debug('Let there be logs');