data-bits v0.15.2
DataBits
install:
npm install --save data-bits
What's New
0.16.x
The loader instance is now added to the bitCtx when a bit is executed by a loader, along with the loader key used to trigger that execution.
0.15.x
A Bit now internally calculates all of its arguments' needs by searching multiple data points. This improves the handling of cross-argument circular references and the dynamic ordering of arguments for proper execution. These needs can be accessed directly but are mostly used for internal optimization.
All parallel executions are now allowed to resolve before DataBits gives up and throws an UNRESOLVED_BITS error. This does not hold if a bit's resolver throws an error; in that case all executions are exited and the error is thrown.
Improved argument handling and caching within a single execution to reduce unneeded resolver calls.
Arguments are now collapsed and cached in collapsed and loader executions, increasing performance and providing more accurate context state between collapsed parallel executions and cache-loaded bits when using the loader.
0.14.x
An argument's arg can now define the location to pull data from: args, params, or needs. By default DataBits searches the args and needs of a bit and assigns a location to that argument based on the field name. If there is a collision, meaning there is an arg and a need with the same name, an error will be thrown. To have an arg reference a param, the location must be defined explicitly. When an arg references a need, that need must exist in the bit's needs list, otherwise an error will be thrown. This feature adds a much more robust pattern for defining args, makes it clear what data will be available and when, improves error handling, and gives the DataBits runner even more data on when to execute a resolver to further optimize performance.
The reason an argument's arg failed to resolve is now tracked in the info node of the unresolved error thrown by DataBits. The details of all unresolved bits can be found in this single node. For those using TypeScript, the UnresolvedErr interface can be imported for handling these types of errors.
Introduction
DataBits is a typed orchestration pattern designed to optimize parallel executions across tightly coupled infrastructures where many dependencies build on each other. It eliminates the overhead of orchestrating between dependencies and dynamically optimizes, at runtime, when each dependency can be run.
A bit is the smallest unit of data, and in DataBits a bit represents a resource. A resource can be anything: a file system call, a database request, a request over HTTP, etc. DataBits provides the ability to organize all of these resources into bits, and these bits are able to define dependencies. We call a dependency a need.
For example, perhaps you need to make a call to a database to fetch a user's friends, but first you need to resolve a JWT. In this case you would have two bits: USER_JWT and GET_FRIENDS. USER_JWT must be called before GET_FRIENDS, so USER_JWT is a need of GET_FRIENDS, because the request would fail without resolving the JWT.
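As a sketch, these two bits could be defined like so, using plain object literals (DataBits also accepts Bit instances; in real code the types would be Types.Object / Types.Array from data-bits). verifyJwt and findFriends are hypothetical stand-ins for your own auth and database helpers:

```javascript
// Sketch only: in real code, const { Bit, Types } = require('data-bits')
// and type: Types.Object / Types.Array. verifyJwt and findFriends are
// hypothetical stand-ins for your own auth and database helpers.
const verifyJwt = async (token) => ({ userId: 'u42', token })   // stub
const findFriends = async (userId) => [`friend-of-${userId}`]   // stub

const USER_JWT = {
  id: 'USER_JWT',
  type: 'Object', // Types.Object in real code
  args: [{
    field: 'token',
    type: 'String', // Types.String in real code
    required: true,
    resolver: async ({ ctx }) => ctx.req.headers.authorization
  }],
  resolver: async ({ args }) => verifyJwt(args.token)
}

const GET_FRIENDS = {
  id: 'GET_FRIENDS',
  type: 'Array',
  needs: ['USER_JWT'], // USER_JWT must resolve before this bit runs
  resolver: async ({ resolved }) => findFriends(resolved.USER_JWT.userId)
}
```

Because GET_FRIENDS lists USER_JWT in its needs, its resolver can read the verified JWT off `resolved.USER_JWT`.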
The needs of a bit act similar to a tree structure, and execution generally begins at the base of the tree. All bits which have the required data available in their bitCtx are resolved in parallel, and once any bit is resolved the tree is searched again; the process repeats until we bubble up to the parent bit. We don't wait for all bits to be resolved in case some operations take longer than others; we don't want the worst performing operation to determine the response time for the others.
We can optimize this flow further by providing an option to DataBits to check if the bitCtx already has the data a bit needs, letting it resolve early and skip resolving its dependencies first. Doing this skips bits in the tree that aren't needed. We can increase performance even more by executing multiple bits in parallel; by sharing context between parallel executions we can collapse requests to improve performance.
Along with the ability to execute multiple bits in parallel, DataBits also provides a dataloader, which we call a BitLoader, to solve the common N+1 problem seen in many applications today. This is usually found in naive GraphQL implementations but can also be seen in other types of applications, such as REST.
Bit
As mentioned above, a bit is the smallest unit of data, and in DataBits it represents a resource.
fields | description | type | required | default |
---|---|---|---|---|
id | a unique string which will represent a bit | string | yes | N/A |
type | the data type which is returned from the bit's resolver | DataBitsType | yes | N/A |
needs | an array of bit ids which are required to resolve the bit | array | no | [] |
neededBy | an array of bit ids which require this bit to resolve | frozen array | no | [] |
args | array of arguments required for the bit to successfully resolve | array | no | [] |
loader | a config used for a bit during a BitLoader execution (not used during normal execution) | object | no | {} |
config | function which returns any value which will be injected into the resolver | function | no | null |
resolver | function used to fetch data from a data source | function | yes | N/A |
methods | description |
---|---|
getArg | returns a specific arg config based on field name |
buildKey | will build loader key based off a key source object config |
parseKey | will parse a loader key and return a key source object |
id
a unique string which will represent a bit
type
Type defines the data type which will be returned from the bit's resolver. DataBits ships with a set of data types (Any, String, Number, Boolean, Array, Object). A custom data type can be created by creating a new instance of the DataBitsType class.
When creating a custom type you need to provide the type name and a validator function which validates the value returned from the resolver. If the function returns false, an error will be thrown by DataBits.
const { Readable } = require('stream')
const { DataBitsType } = require('data-bits')
const ReadableType = new DataBitsType({
type: 'Readable',
validator: (value) => (value instanceof Readable)
})
needs
Needs is an array of bit ids. The bits defined in the needs array are the bits which are required by the bit itself to resolve.
args
A bit's args are used to determine when it is ready to be resolved. The executor assumes that if all the required args can't be resolved, the resolver will not resolve successfully, and moves on. If no args are listed in a bit, the executor will execute the bit's resolver immediately.
By default a resolver is not required for an arg. When that is the case, DataBits will try to resolve the field by looking at the params object in the bitCtx. Params are provided to the execute method, or default to an empty object. DataBits adds all resolved args to the params object for other bits to use later. DataBits will only try to resolve args which don't already exist in the params list shared between all bits, and will not override the params list. It is recommended to use unique field names for different values, or you run the risk of a resolver failing because of bad data in the args.
The arg's resolver is provided the bitCtx, but the context does not contain the args list because those obviously have yet to be resolved. If the arg already exists in the params list, the arg resolver will not be called. This is done to increase performance and prevent overriding param data. Args that have been resolved are cached per bit so they are not resolved multiple times.
There is an option which can override this behavior and force all arg resolvers to be executed. Passing forceArgResolver: true in the options of the execute method forces all arg resolvers to be executed regardless of the arg existing in params. This will also override any existing value in the params list.
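Conceptually, the resolution order described above looks like the following sketch. This mirrors the documented behavior only; it is not DataBits' internal code:

```javascript
// Conceptual sketch of the documented arg resolution rules; not the
// actual DataBits internals.
async function resolveArg (arg, params, argCache, bitCtx, forceArgResolver = false) {
  // Cached per bit: never resolve the same arg twice.
  if (arg.field in argCache) return argCache[arg.field]

  let value
  if (!forceArgResolver && arg.field in params) {
    // An existing param wins; the arg resolver is skipped entirely.
    value = params[arg.field]
  } else if (arg.resolver) {
    value = await arg.resolver(bitCtx)
    // forceArgResolver overrides any existing param value.
    if (forceArgResolver || !(arg.field in params)) params[arg.field] = value
  }
  argCache[arg.field] = value
  return value
}
```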
fields | description | type | required | default |
---|---|---|---|---|
field | will be the field name in the args object injected into the resolver's context | string | yes | N/A |
type | the data type of the argument | DataBitsType | yes | N/A |
args | very similar to a bit's args, but a way to reference the bit's other args within the arg itself | array | no | N/A |
needs | a Bit will calculate its argument's true needs internally to describe all data needed by the argument | array | no | [] |
addToParams | should the value returned from the resolver be added to the shared params | boolean | no | false |
required | if the field is required by the bit's resolver | boolean | no | false |
forceResolver | will force the arg to always be resolved through the resolver regardless of params | boolean | no | false |
resolver | function used to resolve the argument | function | yes | N/A |
arguments args
An argument can also have args of its own. These args are a little different: they do not have a resolver, but rather reference other args in the same bit the parent arg is defined in. This feature is useful because params are not always a guaranteed location for pulling data, as they are shared across bits and can be mutated. Data for an argument's arg is pulled from an internal cache specific to the bit's args and cannot be mutated by other bits in the same execution.
When you define an arg for an arg, you provide a field which references another arg in that bit, whether it is required, and the type it should return. The type must match the referenced arg, and if the arg's arg is required, the referenced arg must also be required to ensure the integrity of the bit. The arg's arg may be required while the arg it references is not, as long as the parent arg itself is not required.
An argument's resolver will not be executed until all the required args in its args list are resolved. If required args cannot be resolved, the error message will include the list of required args that were unable to resolve. If the args do resolve, the resolved args will be available in the bitCtx provided to the argument's resolver. Note that the args provided in this bitCtx are not the same as those provided to the bit's resolver!
When a bit is created, DataBits sorts the bit's arguments based on their args and the dependencies between them. This is done to ensure no argument is resolved before its dependencies. After the sorting process the arguments array is frozen to ensure nothing is changed. DataBits also checks for circular dependencies between arguments and will throw an error if one is found. This is all done to ensure a bit's arguments can resolve regardless of the order the developer put them in, because that order does have an effect and could create a race condition if not handled correctly.
An argument's arg can also define the location to pull data from: args, params, or needs. By default DataBits searches the args and needs of a bit and assigns a location to that argument based on the field name. If there is a collision, meaning there is an arg and a need with the same name, an error will be thrown. To have an arg reference a param, the location must be defined explicitly. When an arg references a need, that need must exist in the bit's needs list, otherwise an error will be thrown. This feature adds a much more robust pattern for defining args, makes it clear what data will be available and when, improves error handling, and gives the DataBits runner even more data on when to execute a resolver to further optimize performance.
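The dependency sort and circular check described above can be illustrated with a small topological sort over the args. This is a conceptual sketch, not the DataBits implementation:

```javascript
// Conceptual sketch: order a bit's args so every arg comes after the
// args it references, throwing on circular references. Not the actual
// DataBits implementation.
function sortArgs (args) {
  const byField = new Map(args.map((a) => [a.field, a]))
  const sorted = []
  const state = new Map() // field -> 'visiting' | 'done'

  function visit (arg) {
    if (state.get(arg.field) === 'done') return
    if (state.get(arg.field) === 'visiting') {
      throw new Error(`Circular argument dependency at "${arg.field}"`)
    }
    state.set(arg.field, 'visiting')
    for (const dep of arg.args || []) {
      const ref = byField.get(dep.field)
      if (ref) visit(ref) // resolve referenced args first
    }
    state.set(arg.field, 'done')
    sorted.push(arg)
  }

  args.forEach(visit)
  return Object.freeze(sorted) // frozen, like the real arguments array
}
```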
fields | description | type | required | default |
---|---|---|---|---|
field | will be the field name in the args object injected into the arg's bitCtx; must match an existing bit arg | string | yes | N/A |
type | the data type of the argument; must match the referenced bit arg | DataBitsType | yes | N/A |
location | where the data should be pulled from: args, the bit's needs list (resolved), or params | string | no | args or needs |
required | if the field is required by the arg resolver. If required, the referenced arg must also be required | boolean | no | false |
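As a sketch, an arg whose own args reference a sibling arg might look like this (object-literal form; in real code the types would come from Types, and getSession is a hypothetical session lookup helper):

```javascript
// Sketch only: types shown as strings; in real code use Types.* from
// data-bits. getSession is a hypothetical session lookup helper.
const getSession = async (token) => ({ userID: 'u42' }) // stub

const FRIENDS = {
  id: 'FRIENDS',
  type: 'Array',
  args: [{
    field: 'token',
    type: 'String',
    required: true,
    resolver: async ({ ctx }) => ctx.token
  }, {
    field: 'userID',
    type: 'String',
    required: true,
    // References the sibling "token" arg; both are required, so the
    // integrity rule described above is satisfied.
    args: [{ field: 'token', type: 'String', required: true, location: 'args' }],
    resolver: async ({ args }) => (await getSession(args.token)).userID
  }],
  resolver: async ({ args }) => [] // fetch friends for args.userID
}
```

Because "userID" lists "token" in its args, DataBits will resolve "token" first regardless of the order they are written in.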
config
Config is a function provided to a bit which is executed before the resolver and the arg resolvers, to pull together configuration properties needed by the resolvers. The resolved value is injected into the resolver's context. Config can be a function, an async function, or a function that returns a promise.
The config is provided the bitCtx as its param, but that context does not contain the resolved args or the resolved config value.
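For example, a config might assemble request-scoped settings for the resolvers. This is a sketch; the ctx fields and resulting shape are assumptions:

```javascript
// Sketch: a config callback assembling properties the resolver needs.
// The ctx shape (ctx.req) and the resulting fields are assumptions.
const config = async ({ ctx }) => ({
  baseUrl: process.env.API_URL || 'http://localhost:3000', // hypothetical
  headers: { cookie: (ctx.req && ctx.req.headers.cookie) || '' }
})
```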
loader
With DataBits comes a dataloader called BitLoader. You are able to configure each bit to behave the way needed to fit a use case, instead of each behaving the same.
You can define whether the bit's response should be cached, not cached, cached only if the execution resolved, or cached only if it rejected. However, a bit can't be cached if the BitLoader is not configured for caching.
You can choose whether an individual bit should be cloned, or provide a custom clone function for that bit to meet a specific use case.
A bit's key is generated when a loader key source is provided. Keys can be made custom to any bit by defining the specific fields of the source to be included in the key. A field of the key can be required, which causes an error to be thrown if it is not found in the key source. A type can also be asserted to ensure the key value is correct; again, if the field is required an error will be thrown. If it is not required, the field will simply not be added to the final built key.
fields | description | type | required | default |
---|---|---|---|---|
key | array of key part configurations | array | no | null |
cache | should the bit response be cached | boolean | no | loader config |
clone | should the cached response be cloned using the internal or custom cloner | boolean / function | no | loader config |
shouldCacheRejections | should rejections be cached | boolean | no | loader config |
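Putting the fields together, a bit's loader config might look like the following sketch; the key fields and the custom clone function are illustrative assumptions:

```javascript
// Sketch of a per-bit loader config; the key fields and the custom
// clone function are illustrative assumptions.
const loaderConfig = {
  key: [
    { field: 'userID', type: 'String', required: true }, // Types.String in real code
    { field: 'locale', type: 'String', required: false }
  ],
  cache: true,
  shouldCacheRejections: false,
  // Custom clone instead of the default JSON-based cloner.
  clone: (value) => structuredClone(value)
}
```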
key
By filtering the key source and creating a custom key per bit, you can reduce or eliminate the chance of creating a key with data irrelevant to a bit, which could cause additional requests to be made when they are not needed.
fields | description | type | required | default |
---|---|---|---|---|
field | will be the name used to look up the value in the key source | string | yes | N/A |
type | the data type of the key field | DataBitsType | yes | N/A |
required | should DataBits throw an error if something goes wrong, such as an incorrect type or the field not being found in the key source | boolean | no | false |
resolver
The data returned by the resolver will be available to all of its parent bits. This is the value used to build up the required data to eventually bubble up to the parent bit and finally resolve. The resolver can be a function, an async function, or a function that returns a promise. The resolver is provided the bitCtx as its param.
Bit Example
A bit can be created using an object literal or by creating an instance of the Bit class provided by DataBits.
const { Bit, Types } = require('data-bits')
const USER = new Bit({
id: 'USER',
type: Types.Object,
needs: [...],
args: [{
field: 'userID',
type: Types.String,
required: true,
resolver: async (bitCtx) => {...}
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
})
DataBits
Bits are great, but without something to execute and orchestrate them they are not useful. Create an instance of DataBits and pass your bits to it. The DataBits instance provides a way to execute your bits by providing a parent bit id, or multiple ids for parallel execution.
fields | description | type |
---|---|---|
ids | list of bit ids maintained by the DataBits instance | array |
bits | bit id to bit object | object |
API
Creating a DataBits instance
When creating a DataBits instance a list of bits can be provided, or the bits can be added later.
const { Bit, Types, DataBits } = require('data-bits')
const USER = new Bit({
id: 'USER',
type: Types.Object,
needs: [...],
args: [{
field: 'userID',
type: Types.String,
required: true,
resolver: async (bitCtx) => {...}
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
})
const bits = new DataBits([ USER ])
// or
const bits = new DataBits()
bits.addBit(USER)
// or
const bits = new DataBits()
bits.addBits([ USER ])
methods | description |
---|---|
execute | method which orchestrates the execution of the bits based on the parent bit |
executeAll | orchestrates the execution of multiple parent bits in parallel |
executeProps | similar to executeAll but returns an object of key value pairs |
executeAllSettled | orchestrates the execution of multiple parent bits in parallel and always resolves returning the status of execution similar to Promise.allSettled |
executePropsSettled | similar to executeAllSettled but returns an object of key value pairs |
get | returns bit based on id |
getBit | alias for get |
getBits | returns an array of all bits |
addBit | adds a bit to the DataBits instance |
addBits | adds a list of bits to the DataBits instance |
shake | returns a list of bits needed by the parent bit |
smartShaker | returns a list of bit bitCtxs which are ready to be resolved |
createLoader | returns a new instance of the BitLoader class |
createRunner | returns the internal mechanism which is used for execution. Can be leveraged to create additional functionality beyond this module |
execute
The execute method builds the dependency tree based on the needs of the parent and recursively resolves bits until all the bits are resolved, a bit fails to resolve and throws an error, or no more bits can be resolved and an error is thrown.
execute signature (id: string, ctx?: {}, params?: {}, options?: {})
id
id is the id of the parent bit where all needed bits are resolved from.
ctx
ctx is user provided and is passed between all bits during the execution process. An example of a ctx may be one from your server framework, such as express or koa. Defaults to an empty object literal.
params
Params should be an object literal which args are pulled from and set to. As args are resolved by DataBits they will be added to the params object. Params are passed between all bits in the execution and available through the bitCtx.
options
Options can be used to affect the default behaviors of the execution method.
fields | description | type | default |
---|---|---|---|
inOrder | will execute all bits in order based off the parent bit and its needs. Will not optimize based on what bits are resolvable | boolean | false |
optimize | will do a smart shake of the dependencies and determine which ones need to be executed based on the resolvable args | boolean | false |
parentOnly | forces execute to only return the parents resolved data not all resolved data | boolean | false |
forceArgResolver | will force each arg resolver to be called regardless of value existing in the params list and will override params | boolean | false |
const { Bit, Types, DataBits } = require('data-bits')
const bits = [
new Bit({
id: 'USER',
type: Types.Object,
args: [{
field: 'userID',
type: Types.String,
required: true
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
}),
new Bit({...}),
new Bit({...})
]
const dataBits = new DataBits(bits)
app.use(async (req, res) => {
  const userID = req.get('userID')
  const ctx = { req, res }
  const params = { userID }
  const { USER: data } = await dataBits.execute('USER', ctx, params)
  res.json(data)
})
executeAll
If there is a requirement to execute multiple bits in parallel, executeAll accepts a list of bit ids. When using executeAll, the ctx and params are shared between all parallel executions. This is done on purpose because DataBits tries to increase performance by sharing context between parallel executions. If the use case calls for a different context per execution, it is suggested to wrap multiple executions in a Promise.all. ExecuteAll returns a list of resolved bits in the same order as provided to the method.
fields | description | type | default |
---|---|---|---|
inOrder | will execute all bits in order based off the parent bit and its needs. Will not optimize based on what bits are resolvable | boolean | false |
optimize | will do a smart shake of the dependencies and determine which ones need to be executed based on the resolvable args | boolean | false |
collapse | will collapse all resolvers and not duplicate request if bit has been resolved | boolean | false |
parentOnly | forces execute to only return the parents resolved data not all resolved data | boolean | false |
forceArgResolver | will force each arg resolver to be called regardless of value existing in the params list and will override params | boolean | false |
executeProps
ExecuteProps uses executeAll under the hood and provides a more descriptive interface to work with, using custom key/value pairs instead of an array.
fields | description | type | default |
---|---|---|---|
inOrder | will execute all bits in order based off the parent bit and its needs. Will not optimize based on what bits are resolvable | boolean | false |
optimize | will do a smart shake of the dependencies and determine which ones need to be executed based on the resolvable args | boolean | false |
collapse | will collapse all resolvers and not duplicate request if bit has been resolved | boolean | false |
parentOnly | forces execute to only return the parents resolved data not all resolved data | boolean | false |
forceArgResolver | will force each arg resolver to be called regardless of value existing in the params list and will override params | boolean | false |
executeAllSettled
Works just like executeAll but returns a promise which always resolves. If you have used Promise.allSettled before, this method works very much like that. The status will provide details on whether the execution was a success (fulfilled) or failed (rejected). If fulfilled, the result will be in the value node; if rejected, the error will be in the reason node.
fields | description | type | default |
---|---|---|---|
inOrder | will execute all bits in order based off the parent bit and its needs. Will not optimize based on what bits are resolvable | boolean | false |
optimize | will do a smart shake of the dependencies and determine which ones need to be executed based on the resolvable args | boolean | false |
collapse | will collapse all resolvers and not duplicate request if bit has been resolved | boolean | false |
parentOnly | forces execute to only return the parents resolved data not all resolved data | boolean | false |
forceArgResolver | will force each arg resolver to be called regardless of value existing in the params list and will override params | boolean | false |
executePropsSettled
ExecutePropsSettled uses executeAllSettled under the hood and provides a more descriptive interface to work with, using custom key/value pairs instead of an array.
fields | description | type | default |
---|---|---|---|
inOrder | will execute all bits in order based off the parent bit and its needs. Will not optimize based on what bits are resolvable | boolean | false |
optimize | will do a smart shake of the dependencies and determine which ones need to be executed based on the resolvable args | boolean | false |
collapse | will collapse all resolvers and not duplicate request if bit has been resolved | boolean | false |
parentOnly | forces execute to only return the parents resolved data not all resolved data | boolean | false |
forceArgResolver | will force each arg resolver to be called regardless of value existing in the params list and will override params | boolean | false |
const { Bit, Types, DataBits } = require('data-bits')
const bits = [
new Bit({
id: 'USER',
type: Types.Object,
args: [{
field: 'cookie',
type: Types.String,
required: true
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
}),
new Bit({
id: 'ADD_FRIEND',
type: Types.Object,
needs: ['USER'],
args: [{
field: 'userID',
type: Types.String,
required: true
},{
field: 'friendID',
type: Types.String,
required: true
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
}),
new Bit({
id: 'UPDATE_FRIENDS_LIST',
type: Types.Object,
needs: ['USER'],
args: [{
field: 'userID',
type: Types.String,
required: true
}, {
field: 'friendID',
type: Types.String,
required: true
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
})
]
const dataBits = new DataBits(bits)
// executeAll
const [ added, newFriendList ] = await dataBits.executeAll(
  [ 'ADD_FRIEND', 'UPDATE_FRIENDS_LIST' ],
  null,
  { cookie: 'auth=8nc83nd02nd', friendID: 2 },
  { collapse: true }
)
// executeProps
const { added, newFriendList } = await dataBits.executeProps(
  { added: 'ADD_FRIEND', newFriendList: 'UPDATE_FRIENDS_LIST' },
  null,
  { cookie: 'auth=8nc83nd02nd', friendID: 2 },
  { collapse: true }
)
// executeAllSettled
const [ added, newFriendList ] = await dataBits.executeAllSettled(
  [ 'ADD_FRIEND', 'UPDATE_FRIENDS_LIST' ],
  null,
  { cookie: 'auth=8nc83nd02nd', friendID: 2 },
  { collapse: true }
)
if (newFriendList.status === 'rejected') {
  // handle error
  throw newFriendList.reason
}
if (added.status === 'rejected') {
  // handle error
  throw added.reason
}
// executePropsSettled
const { added, newFriendList } = await dataBits.executePropsSettled(
  { added: 'ADD_FRIEND', newFriendList: 'UPDATE_FRIENDS_LIST' },
  null,
  { cookie: 'auth=8nc83nd02nd', friendID: 2 },
  { collapse: true }
)
if (newFriendList.status === 'rejected') {
  // handle error
  throw newFriendList.reason
}
if (added.status === 'rejected') {
  // handle error
  throw added.reason
}
Types
Types are DataBits' way to enforce standards for your bits' args and resolvers, ensuring the correct data is being resolved. DataBits forces args and resolvers to have types. If a bit is defined without a type for one of its args or its resolver, an error will be thrown. DataBits may force types, but it provides the flexibility to create your own. If your arg or resolver doesn't resolve into one of the default types (Any, String, Number, Boolean, Array, and Object), the DataBitsType class can be used to create a custom type.
Types | description |
---|---|
Any | Any value |
String | (typeof value === string) |
Number | (typeof value === number) |
Boolean | (typeof value === boolean) |
Array | (Array.isArray) |
Object | Plain Javascript object |
NotNull | Any value not null or undefined |
DataBitsType | Create custom type :) |
const { Readable } = require('stream')
const { DataBitsType } = require('data-bits')
const ReadableType = new DataBitsType({
type: 'Readable',
validator: (value) => (value instanceof Readable)
})
Errors
DataBits throws errors when one of two things happens. The first is when a bit fails execution and an error is thrown. This halts the execution of all bits and should be caught by a .catch or an async catch block; this type of error is not handled by DataBits. The second is when no more bits are resolvable. This is determined when DataBits has exhausted all resources to resolve bit args. Remember, bits are considered in a state of readiness only when all their required args can be resolved.
When a bit or bits fail to be resolved, a detailed error will be thrown providing detail into what bits were unable to resolve, the args which failed to resolve, and the errors which caused them not to resolve.
The error will have a code of UNRESOLVED_BITS and a message naming the bit or bits which could not be resolved. A metadata object attached to error.info provides granular detail into what happened.
{
"err": {
"info": [{
"id": "USER_FRIENDS",
"needs": ["USER"],
"args": ["userId", "friendId"],
"resolvedArgs": ["userId"],
"unresolvedArgs": [{
"field": "friendId",
"type": "string",
"required": true,
"error": "Unable to resolve all required arguments for argument friendId, Arguments: userId",
"unresolvedArgs": [{
"field": "userId",
"type": "string",
"required": true,
"location": "args",
"error": "The arguments arg \"userId\" cant be resolved because the argument \"userId\" has not been resolved"
}]
}]
}]
}
}
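When catching the error, the info node can be flattened into readable messages. The following is a sketch written against the structure shown above; summarizeUnresolved is a hypothetical helper, not part of DataBits:

```javascript
// Sketch: summarize an UNRESOLVED_BITS error's info node (structure as
// shown above) into one message per unresolved arg.
function summarizeUnresolved (info) {
  const lines = []
  for (const bit of info) {
    for (const arg of bit.unresolvedArgs || []) {
      lines.push(`${bit.id}.${arg.field}: ${arg.error}`)
    }
  }
  return lines
}

// assumed usage inside a catch block:
// try { await dataBits.execute('USER_FRIENDS') } catch (err) {
//   if (err.code === 'UNRESOLVED_BITS') console.error(summarizeUnresolved(err.info))
//   else throw err
// }
```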
bitCtx
The bitCtx is passed to resolvers (bit and arg) and config callbacks. The context provides the current bit, its id, the bit's args, the current params, the resolved config value, the provided ctx, the dataBits instance, and an object of resolved values from the resolved bits organized by bit id.
fields | description | type |
---|---|---|
id | the id of the current bit | string |
bit | the current Bit instance | Bit |
args | list of the args defined in the bit definition with the values resolved | object |
params | list of args from all resolved bits | object |
config | the resolved value from the config | any |
ctx | a user provided ctx or empty object | object |
dataBits | the dataBits instance | DataBits |
resolved | a list of values that have been resolved from previous bits | object |
pending | a list of pending bits waiting to be resolved. These are used to collapse duplicate requests between contexts | object |
runner | the instance of the internal runner which orchestrates the execution of the bits | Runner |
loaderKey | the key used for caching and collapsing by the loader for this execution. A reference to options.key provided to the executing method | object or string |
loader | the instance of the loader which is orchestrating the execution and caching of the current bit. Only available when the bit is executed by a loader | BitLoader |
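A resolver typically destructures just the fields it needs from the bitCtx. A sketch, where fetchUser is a hypothetical data-access helper:

```javascript
// Sketch: a resolver picking what it needs off the bitCtx. fetchUser
// is a hypothetical data-access helper, stubbed here.
const fetchUser = async (id, opts) => ({ id, name: 'Ada' }) // stub

const resolver = async ({ args, params, config, resolved, ctx }) => {
  // args: this bit's resolved args; resolved: values from needed bits
  const userID = args.userID || params.userID
  return fetchUser(userID, config)
}
```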
BitLoader
The BitLoader uses the bits which are part of your DataBits instance; a BitLoader is created through the DataBits instance by calling the createLoader method. It is not recommended to have loaders persist between multiple requests; doing so risks cached data incorrectly appearing in each request.
The BitLoader uses a combination of bit ids and a user provided key to create an in-memory cache which exists for the duration of a request, or for as long as the specific loader is in use. This cache does not replace tools like Redis and other cross-application/instance caching tools. It is simply designed to prevent making the same request twice during the same client request.
By leveraging BitLoader over other popular dataloaders, you are able to take advantage of the built-in optimization focused on parallel requests without having to build it yourself and increase the complexity of your application.
Each need is individually cached based on the key provided in the request, along with the full execution itself. The key generated for the full execution includes options found in the load call (optimize and parentOnly), as some options affect the overall result, so they need to be stored under different keys in the cache so nothing unexpected happens.
Batching
Batching is BitLoader's way of waiting between ticks of the event loop to trigger the execution of load requests.
Batch Scheduler
By default each batch is scheduled on the next tick of the event loop. If this behavior is not desirable, you can provide a custom batch scheduler which will schedule each batched request.
const bits = new DataBits()
const loader = bits.createLoader({
batchScheduler: (cb) => setTimeout(cb, 100)
})
Caching
BitLoader provides a cache for all loads: after the first load, the resulting value is cached to eliminate redundant loads. The needs of each parent bit are cached at run time, while the execution is taking place. This cache is shared between executions, and they feed off each other, sharing resolved, pending, and rejected bits. This further eliminates redundant calls during execution.
Caching Per-Request
As mentioned above BitLoader caching does not replace redis and other which are cross application/instance caching tools. BitLoader is a data loading tool, and its cache is meant to prevent the unnecessary loading of the same data in the same context of a single request to your application. To do this BitLoader uses a in memory cache and the load functions to prime that cache.
It is not recommended to have loaders persist between multiple request doing so you risk cached data incorrectly appearing in each request. Ideally your, BitLoader instances is created when a request begins, and is garbage collected when the request ends.
Because data is cached in memory, any value returned from the loader is vulnerable to mutation. Any change made to an object, array, or other reference type will be reflected across your application for the lifetime of the loader. By default BitLoader provides the ability to clone plain JavaScript objects and arrays; other types are ignored. The default handler simply uses JSON methods to clone, does not run any prechecks for possible errors, and does not swallow an error if one occurs. A callback can be provided to the clone option when creating a loader to handle custom use cases. The callback is provided the value from the in-memory cache.
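As a sketch, a custom clone callback might look like the following. This assumes, per the description above, that the callback receives the raw cached value and returns the copy handed back to the caller:

```javascript
// Hypothetical custom clone callback for the `clone` option:
// it receives the value from the in-memory cache and returns a copy.
const cloneCached = (value) => {
  // structuredClone (Node 17+) deep-copies Dates, Maps, and nested
  // objects, which the default JSON-based clone would mangle or drop.
  return structuredClone(value)
}

// const loader = bits.createLoader({ clone: cloneCached })
```

`structuredClone` is used here only as one reasonable deep-copy strategy; any function that returns a safe copy of the cached value would fit.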
Express example:
const app = express()
const bits = new DataBits()
app.use(function(req, res, next) {
req.loader = bits.createLoader()
next()
})
...
Keys
DataBits supports a key source (a plain JS object) which will internally be sorted and built into a key. Using a bit's loader options these keys can be made even more unique per bit through filtering, meeting specific use cases where a full key may include data not relevant to some bits, causing additional requests to be made when not needed.
A string can also be provided as a key, but then you will not be able to use the key configurations that come from the loader config in each bit. Not all use cases require such granular control, so DataBits allows multiple options.
While using a key source it is not recommended to provide reference types, only primitive types. DataBits will try to parse the types correctly, but it is much safer and more reliable to only provide primitive types.
DataBits will not parse a key string unless one is provided and the bit is configured to use a custom key config. This is not prevented, but the key must be in the correct key format; if not, a key format error will be thrown. If DataBits is able to parse the key it will also try to parse each value to match the correct type, preventing errors caused by values being stringified.
const bit = new Bit({
id: 'USER',
type: Types.String,
loader: {
key: [{
field: 'userId',
required: true,
type: Types.String
}, {
field: 'name',
required: true,
type: Types.String
}, {
field: 'age',
required: false,
type: Types.Number
}]
},
})
// internally used by the BitLoader but can be used if needed by users
bit.buildKey({ userId: '123', name: 'Gob' }) // => bit::USER__name::Gob__userId::123
bit.parseKey('bit::USER__age::24__name::Gob__userId::123') // => { age: 24, userId: '123', name: 'Gob' }
Disabling Cache
If using BitLoader without cache is required, the option can be provided when creating the BitLoader instance.
const bits = new DataBits()
const loader = bits.createLoader({ cache: false })
API
BitLoader
BitLoader is an instance which is created through your DataBits instance and leverages the bits you have created.
fields | description | type | default |
---|---|---|---|
shouldBatch | Set to false to disable batching. This is equivalent to setting maxBatchSize to 1 | boolean | true |
maxBatchSize | Limits the number of items that get passed in to the batchScheduler. May be set to 1 to disable batching | number | Infinity |
cache | Disable cache by setting to false | boolean | true |
clone | Will attempt to clone the cached response to help prevent mutation of reference values. A callback may be provided for custom use cases; it takes the cached value as its param | boolean/function | false |
batchScheduler | A function to schedule the later execution of a batch. This function will call a callback in the future to start the batch | function | event loop tick |
shouldCacheRejections | If the execution of a bit fails the rejected value (err) will be cached. By default this feature is toggled off | boolean | false |
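Putting the options above together, a loader configuration might look like the following sketch. The values are purely illustrative, not recommendations:

```javascript
// Illustrative createLoader options; every field is optional.
const loaderOptions = {
  maxBatchSize: 25,                         // cap how many items reach the batchScheduler
  clone: true,                              // clone plain objects/arrays pulled from cache
  shouldCacheRejections: false,             // do not cache rejected executions (the default)
  batchScheduler: (cb) => setImmediate(cb), // run each batch on the next check phase
}

// const loader = bits.createLoader(loaderOptions)
```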
const { Bit, Types, DataBits } = require('data-bits')
const USER = new Bit({
id: 'USER',
type: Types.Object,
needs: [...],
args: [{
field: 'userID',
type: Types.String,
required: true,
resolver: async (bitCtx) => {...}
}],
config: async (bitCtx) => {...},
resolver: async (bitCtx) => {...}
})
const bits = new DataBits([ USER ])
const loader = bits.createLoader()
methods | description |
---|---|
load | Load works just like execute but caches the response which will be returned based on the key provided in the request |
loadAll | LoadAll works just like executeAll, uses that method under the hood, supports all the same options (like collapsing), and adds the benefits of caching |
loadProps | loadProps uses loadAll under the hood and provides a more descriptive interface to work with using a custom key value pair instead of an array |
loadMany | LoadMany provides the ability to execute multiple loads in parallel |
prime | Primes the cache with the provided value |
clear | Clears the specific stored value in cache based on bit id and key |
clearAll | Clears all the stored values in cache |
load
Load works just like execute but caches the response, which will be returned based on the key provided in the request. For all subsequent executions of that bit, if a matching key is generated, the cached response will be returned. The key source or string is provided as one of the fields in options, and the bit id is appended to it to generate a unique key for each bit, so the same key can be used between different bits (bit::${id}__key::${key}). It is recommended to provide a key source, as DataBits will sort, filter, and build the key for you, and custom per-bit logic can be applied for special keys.
const loader = bits.createLoader()
// request made to source
const user = await loader.load('USER', null, null, { key: { userID: 123 } })
// data pulled from the in-memory cache
const cached = await loader.load('USER', null, null, { key: { userID: 123 } })
loadAll
LoadAll works just like executeAll, uses that method under the hood, supports all the same options (like collapsing), and adds the benefits of caching. The response from each bit in the array will be separately cached using the key provided. For any bits which have already been loaded, their data will be pulled from cache and added to the execution context to speed up resolve time.
const loader = bits.createLoader()
const [ user, friends ] = await loader.loadAll(['USER', 'USER_FRIENDS'], null, null, { key: { userID: 123 } })
loadProps
loadProps uses loadAll under the hood and provides a more descriptive interface to work with using a custom key value pair instead of an array.
const loader = bits.createLoader()
const { user, friends } = await loader.loadProps({ user: 'USER', friends: 'USER_FRIENDS'}, null, null, { key: { userID: 123 } })
loadAllSettled
Works just like loadAll but returns a promise which always resolves, much like Promise.allSettled. The status will provide details on whether the execution was a success (fulfilled) or a failure (rejected). If fulfilled the result will be in the value node; if rejected the error will be in the reason node. Only bits which are fulfilled will be cached; rejected bits will not be.
const loader = bits.createLoader()
const [ user, friends ] = await loader.loadAllSettled(['USER', 'USER_FRIENDS'], null, null, { key: { userID: 123 } })
// always resolves handle errors individually
loadPropsSettled
loadPropsSettled uses loadAllSettled under the hood and provides a more descriptive interface to work with using a custom key value pair instead of an array.
const loader = bits.createLoader()
const { user, friends } = await loader.loadPropsSettled({ user: 'USER', friends: 'USER_FRIENDS'}, null, null, { key: { userID: 123 } })
// always resolves handle errors individually
loadMany
LoadMany provides the ability to execute multiple loads in parallel. It is no different than using something like Promise.all. LoadMany supports load, loadAll, and loadProps mixed into the same request.
const loader = bits.createLoader()
const [ user, friends ] = await loader.loadMany([
['USER', null, null, { key: { userID: 123 } }],
[[ 'USER_FRIENDS' ], null, null, { key: { userID: 123 } }],
])
// same as
const [ user, friends ] = await Promise.all([
loader.load('USER', null, null, { key: { userID: 123 } }),
loader.loadAll(['USER_FRIENDS'], null, null, { key: { userID: 123 } }),
])
clear
Clears the specific stored value in cache based on bit id and key. Will also clear any current runtime values stored in maps used for collapsing between parallel executions.
loader.clear('USER', { userID: 123 })
clearAll
Clears all the stored values in cache
loader.clearAll()
prime
Primes the cache with the provided value. Prime caches the value as is and does not edit it in any way. Providing different options such as parentOnly and optimize will cause a different cache key to be generated. Priming will also clear any current runtime values stored in maps used for collapsing between parallel executions.
loader.prime('USER', { key: { userID: 123 } }, { id: 123, name: 'skywalker' })
const cache = await loader.load('USER', { key: { userID: 123 } })
// { id: 123, name: 'skywalker' }