@rainbow-o23/n3 v1.0.59
o23/n3
o23/n3 provides the most basic pipeline steps.
Snippet Supporting
Almost all pipeline steps use dynamic snippets to some extent. In order to facilitate various operations on objects in the snippet, o23/n3
injects rich function support in advance when executing these scripts. All the function support provided by the pipeline steps can be
directly obtained and used in scripts using the $helpers handle. For example:
const currentTime = $helpers.$date.now();

When using scripts, pay attention to the usage of variables. Typically:
- $factor represents the incoming data and can be used in most snippet definitions,
- $result represents the processed data and only appears in the toResponse snippet of Fragmentary,
- $request represents the original request data and can be used in almost all snippets, but its use is not recommended,
- $helpers represents the function support and can be used in all snippets,
- $options represents a set of data, usually in error handles.
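As a rough illustration (not the library's actual implementation), the snippet variables can be pictured like this; here fromRequest and toResponse snippets are simulated as plain functions, and the request shape ({order: ...}) is made up for this example:

```javascript
// Illustrative sketch only: o23/n3 compiles snippet bodies itself; we
// simulate fromRequest/toResponse snippets as plain functions here.
const $helpers = {$date: {now: () => new Date()}}; // simplified stand-in

// "fromRequest" snippet: pick the fragment out of the request.
const fromRequest = ($request) => $request.order;
// "toResponse" snippet: $result is the processed fragment.
const toResponse = ($result) => ({orderId: $result.id, processedAt: $helpers.$date.now()});

const request = {order: {id: 42}};
const $factor = fromRequest(request);          // incoming data for the step
const response = toResponse({id: $factor.id}); // write processed data back
console.log(response.orderId); // 42
```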
Typescript support
In dynamic snippet, TypeScript syntax can also be used. Currently, o23/n3 is compiled using ES2022 syntax. It is important to note that
dynamic script fragments are function bodies, so import/export syntax is not supported. Moreover, they are compiled in loose mode, and
the compilation process does not report any errors. Additionally, for script security reasons, the following keywords or classes are also
not supported.
- process
- global
- eval
- Function
Basic Steps
Fragmentary
Usually, when processing logic, we do not need the entire memory context, but only need to extract certain fragments for processing and return
the processing results to the context for subsequent logic to continue with. Therefore, o23/n3 provides a relevant implementation,
allowing pipeline steps to flexibly access the relevant memory data and write the processed result data back to the context in the required
format. All pipeline steps should inherit from this implementation to obtain the same capabilities.
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| fromRequest | ScriptFuncOrBody | | Get data from request. |
| toResponse | ScriptFuncOrBody | | Write data to response. |
| mergeRequest | boolean or string | false | Shortcut to merge data to response. |
| errorHandles.catchable | ScriptFuncOrBody<HandleCatchableError<In, InFragment, OutFragment>> or Array | | Catchable error handler. |
| errorHandles.uncatchable | ScriptFuncOrBody<HandleUncatchableError<In, InFragment, OutFragment>> or Array | | Uncatchable error handler. |
| errorHandles.exposed | ScriptFuncOrBody<HandleExposedUncatchableError<In, InFragment, OutFragment>> or Array | | Exposed uncatchable error handler. |
| errorHandles.any | ScriptFuncOrBody<HandleAnyError<In, InFragment, OutFragment>> or Array | | Any error handler. |
Merge Request
| Prerequisite | Behavior |
|---|---|
| No toResponse defined, mergeRequest is true | Unbox result and merge into original request content. Make sure request content and result can be unboxed. |
| No toResponse defined, mergeRequest is string | Use value of mergeRequest as key, merge result into original request content. Make sure request content can be unboxed. |
| No toResponse defined, no mergeRequest defined | Return result directly. |
| With toResponse defined, mergeRequest is true | Execute toResponse first, unbox result and merge into original request content. Make sure request content and result can be unboxed. |
| With toResponse defined, mergeRequest is string | Execute toResponse first, use value of mergeRequest as key, merge result into original request content. Make sure request content can be unboxed. |
| With toResponse defined, no mergeRequest defined | Execute toResponse first, return result directly. |
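The merge behavior above can be sketched as follows; this is our own reading of the table, not the library's actual code:

```javascript
// Sketch of the mergeRequest behavior (assumes request content and result
// are plain objects where unboxing is required).
function mergeToResponse(request, result, mergeRequest) {
  if (mergeRequest === true) {
    // Unbox both and merge result into the original request content.
    return {...request, ...result};
  }
  if (typeof mergeRequest === 'string') {
    // Use the value of mergeRequest as the key for the result.
    return {...request, [mergeRequest]: result};
  }
  // No mergeRequest defined: return the result directly.
  return result;
}

console.log(mergeToResponse({a: 1}, {b: 2}, true));   // { a: 1, b: 2 }
console.log(mergeToResponse({a: 1}, {b: 2}, 'data')); // { a: 1, data: { b: 2 } }
console.log(mergeToResponse({a: 1}, {b: 2}));         // { b: 2 }
```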
Error Handles
Each type of error handle can be either a snippet or a set of pipeline steps. If it is defined as pipeline steps, the first step will receive the request data in the following format:
const RequestContent = {$code: errorCode, $error: error, $factor: fragment, $request: request};

Except for the first step, each subsequent step will receive request data that depends on the processing result of the previous step.
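A hypothetical catchable error handler written as a snippet body might look like this; the $options shape follows the RequestContent definition above, while the error code 'ORDER-404' and the recovery value are made up for illustration:

```javascript
// Hypothetical error handler snippet: recover from a known error code,
// re-throw anything else.
const handleCatchable = ($options) => {
  const {$code, $error, $factor} = $options;
  if ($code === 'ORDER-404') {
    // Recover: return a fallback fragment instead of re-throwing.
    return {recovered: true, from: $factor};
  }
  throw $error; // unknown errors keep propagating
};

const result = handleCatchable({
  $code: 'ORDER-404',
  $error: new Error('order not found'),
  $factor: {orderId: 1},
  $request: {}
});
console.log(result.recovered); // true
```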
Get Property
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| propertyName | string | | Supports multi-level property names, connected by ".". |

The data source of the given property name is the data after the fromRequest transformation.
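Multi-level property resolution can be sketched like this (our own simplified reading, not the library's implementation):

```javascript
// Resolve a multi-level property name such as "person.name" against an
// object, returning undefined if any level is missing.
function getProperty(data, propertyName) {
  return propertyName.split('.').reduce(
    (value, key) => (value == null ? undefined : value[key]),
    data
  );
}

console.log(getProperty({person: {name: 'John'}}, 'person.name')); // John
console.log(getProperty({person: {}}, 'person.name'));             // undefined
```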
Delete Property
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| propertyNames | string or Array | | |

Multi-level property names are not supported. Make sure the data source of the given property names is the data after the fromRequest transformation.
Snippet
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| snippet | ScriptFuncOrBody | | |
Step Sets
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| steps | Array | | |
Execute the given pipeline steps in order.
Conditional
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| check | ScriptFuncOrBody | | |
| steps | Array | | |
| otherwiseSteps | Array | | |
First, execute the check snippet. If it returns true, execute the given pipeline steps in order; otherwise, execute the given otherwise
pipeline steps in order.
Routes (Switch)
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| conditionalSteps | Array | | |
| otherwiseSteps | Array | | |
The conditional step is defined as follows:
export interface RoutesConditionalStepOptions {
check: ScriptFuncOrBody<ConditionCheckFunc>,
steps?: Array<PipelineStepBuilder>;
}

Execute each given condition check in order. If a condition is true, execute the corresponding pipeline steps in order. If none of the
conditions are true, execute the given otherwise pipeline steps in order.
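The routing semantics above can be sketched like this; the checks and steps here are plain functions standing in for real snippets and pipeline steps:

```javascript
// Sketch of routes (switch) evaluation: run the steps of the first branch
// whose check passes, otherwise fall through to the otherwise steps.
function routes(content, conditionalSteps, otherwiseSteps) {
  for (const {check, steps} of conditionalSteps) {
    if (check(content)) return steps(content);
  }
  return otherwiseSteps(content);
}

const label = routes(5,
  [
    {check: (n) => n < 0, steps: () => 'negative'},
    {check: (n) => n > 0, steps: () => 'positive'}
  ],
  () => 'zero');
console.log(label); // positive
```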
For Each
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| originalContentName | string | $content | |
| itemName | string | $item | |
| steps | Array | | |
The specified set of pipeline steps will be applied to each element of the array, and all the execution results will be gathered into an
array and returned. It is important to note that the error handles of the For Each pipeline steps will be used for each iteration of
executing individual elements, rather than being applied to the execution of the array as a whole. If the array is empty, the given pipeline
steps will not be executed. The request data obtained in the first step of each element's execution will appear in the following format:
const RequestContent = {$content: content, $item: element, $semaphore};

If you need to terminate the loop prematurely, simply return the $semaphore signal provided in the request data. The loop will
automatically end, and the collected processing results will be gathered and returned as an array.
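The loop and semaphore semantics can be sketched as follows; the Symbol here is a stand-in for the library's actual $semaphore signal:

```javascript
// Sketch of For-Each semantics: apply the step to each element, stop early
// when the step returns the $semaphore signal, and gather the results.
const $semaphore = Symbol('semaphore'); // stand-in for the library's signal

function forEach(items, step) {
  const results = [];
  for (const $item of items) {
    const result = step($item);
    if (result === $semaphore) break; // terminate the loop prematurely
    results.push(result);
  }
  return results;
}

console.log(forEach([1, 2, 3, 4], (n) => (n > 2 ? $semaphore : n * 10))); // [ 10, 20 ]
```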
Parallel
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| cloneData | ScriptFuncOrBody<CloneDataFunc<In, InFragment, EachInFragment>> | | Clone request data for each step. |
| race | boolean | false | Returns first settled result if race is true. |
The specified set of pipeline steps will be executed in parallel, and
- all the execution results will be gathered into an array and returned when race is false,
- the first settled result will be returned when race is true.

It is important to note that:
- Each sub pipeline step will use the same request data; they share the same memory address. So be very careful NOT to modify the request data in the sub pipeline steps. Alternatively, you can use cloneData to create a copy of the request data for each sub pipeline step, so that the request data can be modified in certain sub pipeline steps without affecting the others.
- The error handles of the Parallel pipeline step will be applied to each individual sub pipeline step execution, rather than to the parallel execution as a whole.
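The race flag behaves like Promise.all versus Promise.race; a minimal sketch of our reading of the semantics (not the library's code):

```javascript
// Sketch of parallel execution: gather all results, or only the first
// settled result when race is true.
async function parallel(steps, content, race = false) {
  const executions = steps.map((step) => step(content));
  return race ? Promise.race(executions) : Promise.all(executions);
}

parallel([async (n) => n + 1, async (n) => n * 2], 10)
  .then((results) => console.log(results)); // [ 11, 20 ]
```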
Trigger Pipeline
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| code | string | | Pipeline code. |
Execute the specified pipeline.
Trigger Pipeline Step
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| code | string | | Pipeline step code. |
Execute the specified pipeline step.
Async
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| steps | Array | | |

Execute the given pipeline steps in order, but asynchronously. Therefore, no result is returned.
Database (TypeOrm) Steps
Environment Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.type | string | | Database type: mysql, pgsql, mssql, oracle, better-sqlite3. |
| typeorm.DB.kept.on.global | boolean | | Keep database instance in memory or not. |

DB represents the database name.
- For MySQL, kept on global defaults to true,
- For Better-SQLite3, kept on global defaults to false.
MySQL
When typeorm.DB.type=mysql:
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.host | string | localhost | MySQL host. |
| typeorm.DB.port | number | 3306 | MySQL port. |
| typeorm.DB.username | string | | MySQL username. |
| typeorm.DB.password | string | | MySQL password. |
| typeorm.DB.database | string | | MySQL database name. |
| typeorm.DB.charset | string | | MySQL database charset. |
| typeorm.DB.timezone | string | | MySQL database timezone. |
| typeorm.DB.pool.size | number | | MySQL connection pool size. |
| typeorm.DB.synchronize | boolean | false | |
| typeorm.DB.logging | boolean | false | |
| typeorm.DB.connect.timeout | number | | |
| typeorm.DB.acquire.timeout | number | | |
| typeorm.DB.insecure.auth | boolean | | |
| typeorm.DB.support.big.numbers | boolean | true | |
| typeorm.DB.big.number.strings | boolean | false | |
| typeorm.DB.date.strings | boolean | | |
| typeorm.DB.debug | boolean | | |
| typeorm.DB.trace | boolean | | |
| typeorm.DB.multiple.statements | boolean | false | |
| typeorm.DB.legacy.spatial.support | boolean | | |
| typeorm.DB.timestamp.format.write | string | %Y-%m-%d %H:%k:%s | MySQL timestamp write format, should be compatible with format.datetime, whose default value is YYYY-MM-DD HH:mm:ss. |
| typeorm.DB.timestamp.format.read | string | YYYY-MM-DD HH:mm:ss | MySQL timestamp read format. |
The MySQL driver reads DateTime columns as JavaScript string.
PostgreSQL
When typeorm.DB.type=pgsql:
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.host | string | localhost | PgSQL host. |
| typeorm.DB.port | number | 5432 | PgSQL port. |
| typeorm.DB.username | string | | PgSQL username. |
| typeorm.DB.password | string | | PgSQL password. |
| typeorm.DB.database | string | | PgSQL database name. |
| typeorm.DB.schema | string | | PgSQL schema name. |
| typeorm.DB.pool.size | number | | PgSQL connection pool size. |
| typeorm.DB.synchronize | boolean | false | |
| typeorm.DB.logging | boolean | false | |
| typeorm.DB.connect.timeout | number | | |
| typeorm.DB.timestamp.format.write | string | YYYY-MM-DD HH24:MI:SS | PgSQL timestamp write format, should be compatible with format.datetime, whose default value is YYYY-MM-DD HH:mm:ss. |
| typeorm.DB.timestamp.format.read | string | YYYY-MM-DDTHH:mm:ss.SSSZ | PgSQL timestamp read format. |
The PgSQL driver reads DateTime columns as JavaScript Date.
MSSQL
When typeorm.DB.type=mssql:
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.host | string | localhost | MSSQL host. |
| typeorm.DB.port | number | 1433 | MSSQL port. |
| typeorm.DB.username | string | | MSSQL username. |
| typeorm.DB.password | string | | MSSQL password. |
| typeorm.DB.database | string | | MSSQL database name. |
| typeorm.DB.schema | string | | MSSQL schema name. |
| typeorm.DB.pool.size | number | | MSSQL connection pool size. |
| typeorm.DB.synchronize | boolean | false | |
| typeorm.DB.logging | boolean | false | |
| typeorm.DB.authentication.type | string | | MSSQL authentication type. |
| typeorm.DB.domain | string | | MSSQL domain. |
| typeorm.DB.azure.ad.access.token | string | | MSSQL Azure AD access token. |
| typeorm.DB.azure.ad.msi.app.service.client.id | string | | MSSQL Azure AD MSI app service client id. |
| typeorm.DB.azure.ad.msi.app.service.endpoint | string | | MSSQL Azure AD MSI app service endpoint. |
| typeorm.DB.azure.ad.msi.app.service.secret | string | | MSSQL Azure AD MSI app service secret. |
| typeorm.DB.azure.ad.msi.vm.client.id | string | | MSSQL Azure AD MSI vm client id. |
| typeorm.DB.azure.ad.msi.vm.endpoint | string | | MSSQL Azure AD MSI vm endpoint. |
| typeorm.DB.azure.ad.msi.vm.client.secret | string | | MSSQL Azure AD MSI vm client secret. |
| typeorm.DB.azure.ad.msi.vm.tenant.id | string | | MSSQL Azure AD MSI vm tenant id. |
| typeorm.DB.connect.timeout | number | | |
| typeorm.DB.request.timeout | number | | |
| typeorm.DB.pool.max | number | 5 | |
| typeorm.DB.pool.min | number | 1 | |
| typeorm.DB.pool.acquire.timeout | number | | |
| typeorm.DB.pool.idle.timeout | number | | |
| typeorm.DB.instance | string | | |
| typeorm.DB.ansi.null.enabled | boolean | | |
| typeorm.DB.cancel.timeout | number | | |
| typeorm.DB.use.utc | boolean | | |
| typeorm.DB.encrypt | boolean | | |
| typeorm.DB.crypto.credentials | string | | |
| typeorm.DB.tds.version | string | | |
| typeorm.DB.arithmetic.abort | boolean | | |
| typeorm.DB.trust.server.certificate | boolean | | |
| typeorm.DB.timestamp.format.write | string | yyyy-MM-dd hh:mm:ss | MSSQL timestamp write format, should be compatible with format.datetime, whose default value is YYYY-MM-DD HH:mm:ss. |
| typeorm.DB.timestamp.format.read | string | YYYY-MM-DD HH:mm:ss | MSSQL timestamp read format. |
See the MSSQL options of TypeORM for more details.
The MSSQL driver reads DateTime columns as JavaScript Date.
Oracle
When typeorm.DB.type=oracle:
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.host | string | localhost | Oracle host. |
| typeorm.DB.port | number | 1521 | Oracle port. |
| typeorm.DB.username | string | | Oracle username. |
| typeorm.DB.password | string | | Oracle password. |
| typeorm.DB.database | string | | Oracle database name. |
| typeorm.DB.sid | string | | Oracle database sid. |
| typeorm.DB.service.name | string | | Oracle database service name. |
| typeorm.DB.connect.string | string | | Oracle connect string. |
| typeorm.DB.schema | string | | Oracle schema name. |
| typeorm.DB.pool.size | number | | Oracle connection pool size. |
| typeorm.DB.synchronize | boolean | false | |
| typeorm.DB.logging | boolean | false | |
| typeorm.DB.timestamp.format.write | string | YYYY-MM-DD HH24:MI:SS | Oracle timestamp write format, should be compatible with format.datetime, whose default value is YYYY-MM-DD HH:mm:ss. |
| typeorm.DB.timestamp.format.read | string | YYYY-MM-DD HH:mm:ss | Oracle timestamp read format. |
The Oracle driver reads DateTime columns as JavaScript Date.
Better SQLite3
When typeorm.DB.type=better-sqlite3:
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.database | string | :memory: | |
| typeorm.DB.synchronize | boolean | false | |
| typeorm.DB.logging | boolean | false | |
SQLite saves DateTime columns as JavaScript string. NEVER use it in production.
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| dataSourceName | string | | Database name, DB. |
| transactionName | string | | Transaction name. |
| autonomous | boolean | false | Autonomous transaction or not. |
Autonomous transactions take precedence over the transaction name, meaning that if an autonomous transaction is enabled, the transaction specified by the transaction name will be ignored. If you need to use the transaction name, you must nest the pipeline steps within transactional step sets, and ensure that the datasource name and transaction name remain the same.
By SQL
Environment Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.sql.cache.enabled | boolean | true | Cache parsed sql or not. |

Parsed SQL will not be cached when the one-of syntax is present.
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| sql | string | | The SQL statement. |
Native SQL Support & Enhancement
SQL supports native database syntax. At the same time, o23/n3 enhances SQL syntax, allowing the use of the $property syntax to retrieve
corresponding data from data objects; multi-level property names are also supported, connected by ".". For example, $person.name represents
that person is an object and name is a property under person. The following are the supported syntax features:
- IN ($...names): one-of, names should be an array,
- LIKE $name%: starts-with,
- LIKE $%name: ends-with,
- LIKE $%name%: contains.

Name mapping is case-sensitive. LIKE is case-sensitive.
Since different databases have varying degrees of support for dialects, o23/n3 also provides appropriate enhanced support:
- For pagination, $.limit($offset, $limit) will be translated and executed in the appropriate dialect. For example,
  - MySQL uses LIMIT $offset, $limit,
  - PostgreSQL uses OFFSET $offset LIMIT $limit,
  - MSSQL and Oracle use OFFSET $offset ROWS FETCH NEXT $limit ROWS ONLY,
  - MSSQL requires an ORDER BY clause for pagination SQL. If there is no ORDER BY clause, ORDER BY 1 OFFSET $offset ROWS FETCH NEXT $limit ROWS ONLY will be used.
- For JSON columns, because some databases (such as MSSQL) do not have a JSON column type and cannot automatically replace strings in the result set with JSON objects,
  - use config as "config.@json" to explicitly indicate that the config column is of JSON data type,
  - use $config.@json to explicitly indicate that the config property of the given parameters is of JSON data type.
- For boolean columns which use numeric (int/smallint/tinyint) as the storage type, because some databases (such as PostgreSQL) cannot automatically convert boolean values in memory to numeric 0 or 1 in the database,
  - use enabled as "enabled.@bool" to explicitly indicate that the enabled column is boolean in memory and numeric in the database,
  - use $enabled.@bool to explicitly indicate that the enabled property of the given parameters is boolean in memory and numeric in the database.
- For datetime (MySQL, MSSQL) / timestamp (Oracle, PostgreSQL) columns,
  - use created_at as "createdAt.@ts" to explicitly indicate that the createdAt column is string in memory and timestamp in the database,
  - use $createdAt.@ts to explicitly indicate that the createdAt property of the given parameters is string in memory and timestamp in the database.
We recommend that if you need to consider support for multiple database dialects, using enhanced syntax will make it easier to write SQL. If you only need to support a specific database, then using its standard syntax is sufficient.
It is important to note that some databases (such as PostgreSQL) do not differentiate column names by case. This can affect the property names of the returned objects in the result set (camel case is usually recommended). Therefore, even though it is not a syntax enhancement, it is strongly recommended to use aliases to standardize the column names in the returned result set, for example, PERSON_NAME AS "personName"; please pay attention to the use of quotation marks to correctly preserve the case.
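The $property placeholder resolution can be pictured with a simplified sketch like this; it is our own reading of the multi-level syntax, not the library's actual parser, and the table name is made up:

```javascript
// Simplified sketch: resolve "$a.b" placeholders against a params object.
// The real parser also handles one-of, like-syntax and @json/@bool/@ts hints.
function resolvePlaceholders(sql, params) {
  return sql.replace(/\$([\w.]+)/g, (_, path) =>
    JSON.stringify(path.split('.').reduce((v, k) => (v == null ? undefined : v[k]), params)));
}

console.log(resolvePlaceholders(
  'SELECT * FROM T_PERSON WHERE NAME = $person.name',
  {person: {name: 'John'}}
)); // SELECT * FROM T_PERSON WHERE NAME = "John"
```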
Load One by SQL
Request and Response
// request
export interface TypeOrmLoadBasis extends TypeOrmBasis {
params?: Array<TypeOrmEntityValue> | TypeOrmEntityToSave;
}
// response
export type TypeOrmEntityToLoad = Undefinable<DeepPartial<ObjectLiteral>>;

Load Many by SQL
Request and Response
// request
export interface TypeOrmLoadBasis extends TypeOrmBasis {
params?: Array<TypeOrmEntityValue> | TypeOrmEntityToSave;
}
// response
Array<TypeOrmEntityToLoad>;

Load Many by SQL, Use Cursor
Environment Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| typeorm.DB.fetch.size | number | 20 | Fetch size. |
| typeorm.DB.stream.pause.enabled | boolean | false | Pause and resume result stream or not. |
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| pauseStreamEnabled | boolean | | Pause and resume result stream or not. |
Request and Response
// request
export interface TypeOrmLoadBasis extends TypeOrmBasis {
params?: Array<TypeOrmEntityValue> | TypeOrmEntityToSave;
}
// response
Array<any>;

By specifying fetchSize, each batch of data retrieved will execute the sub-steps. Before executing the sub-steps, the data to be passed to
them will be calculated using the streamTo function. If streamTo is not specified, the retrieved batch of data itself will be passed to the
sub-steps. If the sub-steps are not specified, all retrieved data will be merged and returned.
Therefore, the number of times the sub-step is executed is related to the quantity of data and the fetchSize. Meanwhile, each time the
sub-step is invoked, the context will include a $$typeOrmCursorRound variable indicating the current batch (starting from 0), and a
$typeOrmCursorEnd variable indicating whether it is the last batch.
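The batching behavior can be sketched as follows; this is our reading of the description above (the context variable names are taken from it, everything else is made up for illustration):

```javascript
// Sketch of cursor-based loading: invoke the sub-steps once per fetched
// batch, exposing the round index (from 0) and an end-of-stream flag.
function loadByCursor(rows, fetchSize, subStep) {
  const results = [];
  for (let round = 0; round * fetchSize < rows.length; round++) {
    const batch = rows.slice(round * fetchSize, (round + 1) * fetchSize);
    const context = {
      $$typeOrmCursorRound: round,                               // current batch index
      $typeOrmCursorEnd: (round + 1) * fetchSize >= rows.length  // last batch or not
    };
    results.push(subStep(batch, context));
  }
  return results;
}

const rounds = loadByCursor([1, 2, 3, 4, 5], 2,
  (batch, ctx) => `round ${ctx.$$typeOrmCursorRound}: ${batch.length} rows`);
console.log(rounds.length); // 3
```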
Save by SQL
Request and Response
// request
export interface TypeOrmSaveBasis extends TypeOrmBasis {
values?: Array<TypeOrmEntityValue> | TypeOrmEntityToSave;
}
// response
export type TypeOrmIdOfInserted = TypeOrmIdType;
export type TypeOrmCountOfAffected = number;
export type TypeOrmWrittenResult = TypeOrmIdOfInserted | TypeOrmCountOfAffected;

Bulk Save by SQL
Request and Response
// request
export interface TypeOrmBulkSaveBasis extends TypeOrmBasis {
items?: Array<Array<TypeOrmEntityValue> | TypeOrmEntityToSave>;
}
// response
export type TypeOrmIdsOfInserted = Array<TypeOrmIdOfInserted>;
export type TypeOrmCountsOfAffected = Array<TypeOrmCountOfAffected>;
export type TypeOrmBulkWrittenResult = TypeOrmIdsOfInserted | TypeOrmCountsOfAffected;

By Snippet
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| snippet | ScriptFuncOrBody | | |
A TypeORM QueryRunner instance, $runner, will be passed to the snippet, and the snippet can use this instance to perform any operation on
the database.
You do not need to manually start a transaction, whether you are using an autonomous transaction or the step is nested within transactional step sets. The $runner instance passed to the snippet will have a transaction started automatically.
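A hypothetical "by snippet" body might look like this; since $runner is a TypeORM QueryRunner, standard calls such as query(sql, parameters) are available. The table and column names are made up, and a fake runner stands in for a real one here:

```javascript
// Hypothetical snippet body: run a parameterized query via $runner.
const snippet = async ($runner, $factor) => {
  return await $runner.query('SELECT * FROM T_PERSON WHERE PERSON_ID = ?', [$factor.id]);
};

// Fake runner for illustration; a real QueryRunner talks to the database.
const fakeRunner = {query: async (sql, params) => [{sql, params}]};
snippet(fakeRunner, {id: 7}).then((rows) => console.log(rows[0].params)); // [ 7 ]
```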
Transactional
Transactional step sets are a wrapper for a set of pipeline steps that require the same transaction. It means that all the sub-steps inside a transactional step set can be executed within a single transaction. However, not all sub-steps within the set necessarily have to be transactional. Only the ones that need to be executed within the same transaction need to define the same transaction name as the step set. Additionally, nested transactions are also supported, which means Transactional Step Sets can be nested as well.
Steps with the same datasource name and transaction name should be within the same transaction.
Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| dataSourceName | string | | Datasource name, DB. |
| transactionName | string | $default-transaction | Transaction name. |
| steps | Array | | Sub steps. |
Http Steps
Fetch
Environment Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| endpoints.SYSTEM.ENDPOINT.url | string | | Endpoint URL. |
| endpoints.SYSTEM.ENDPOINT.headers | string | | Endpoint request headers; use global headers if this parameter is not present. Format follows name=value[;name=value[...]]. |
| endpoints.SYSTEM.global.headers | string | | Endpoint system global request headers. Format follows name=value[;name=value[...]]. |
| endpoints.SYSTEM.ENDPOINT.headers.transparent | string | | Endpoint request transparently-passed header names; use global headers if this parameter is not present. Format follows name1[;name2[...]]. |
| endpoints.SYSTEM.global.headers.transparent | string | | Endpoint system global request transparently-passed header names. Format follows name1[;name2[...]]. |
| endpoints.SYSTEM.ENDPOINT.headers.transparent.omitted | string | | Endpoint request header names omitted from transparently-passed headers; use global headers if this parameter is not present. Format follows name1[;name2[...]]. |
| endpoints.SYSTEM.global.headers.transparent.omitted | string | | Endpoint system global request header names omitted from transparently-passed headers. Format follows name1[;name2[...]]. |
| endpoints.SYSTEM.ENDPOINT.trace.header.name | string | | Endpoint trace header name. |
| endpoints.SYSTEM.global.trace.header.name | string | | Endpoint system global trace header name. |
| endpoints.SYSTEM.ENDPOINT.timeout | number | | Endpoint request timeout, in seconds; use global timeout if this parameter is not present. |
| endpoints.SYSTEM.global.timeout | number | -1 | Endpoint system global timeout, in seconds; -1 represents no timeout. |
SYSTEM represents endpoint system, ENDPOINT represents endpoint url. For example:
CFG_ENDPOINTS_ORDER_PURCHASE_URL=https://order.com/purchase
CFG_ENDPOINTS_ORDER_PAYMENT_URL=https://order.com/payment

Constructor Parameters
| Name | Type | Default Value | Comments |
|---|---|---|---|
| endpointSystemCode | string | | Endpoint system code. |
| endpointName | string | | Endpoint name. |
| urlGenerate | ScriptFuncOrBody | | Endpoint url generator, $endpointUrl. |
| method | string | post | Http method. |
| timeout | number | | Endpoint timeout, in seconds. |
| transparentHeaderNames | Array | | Transparently-passed request header names. |
| omittedTransparentHeaderNames | Array | | Omitted names of transparently-passed request headers. |
| headersGenerate | ScriptFuncOrBody | | Endpoint request headers generator. |
| bodyUsed | boolean | | Send request with body or not; when not specified, the body is automatically disregarded for a get request. |
| bodyGenerate | ScriptFuncOrBody | | Endpoint request body generator. |
| responseGenerate | ScriptFuncOrBody | | Endpoint response body generator, $response. |
| responseErrorHandles | ScriptFuncOrBody or {[key: HttpErrorCode]: ScriptFuncOrBody} | | Endpoint response error handlers. |
transparentHeaderNames and omittedTransparentHeaderNames: use transparentHeaderNames to specify the names of request headers whose values need to be transparently passed from the input parameters to the upstream service. Separate the names with ;. The names support using . for connection so that values from multi-level objects can be retrieved directly. For example, account.name will retrieve the value of the name property from the account property of the input object. When writing the values into the header values, the following rules apply:
- If the value is an array, use , to connect the elements. null and empty strings will be filtered out.
- If the value is an object, use the object's keys to generate multiple headers. null and empty strings will be filtered out.
- For other values, convert them to strings. null and empty strings will be filtered out.
- Note that an empty string does not include blank strings, and no automatic trimming will be performed.

If transparentHeaderNames at the step level is not defined, the definition in endpoints.SYSTEM.ENDPOINT.headers.transparent is used. If it is also not defined at the endpoint level, the definition in endpoints.SYSTEM.global.headers.transparent is used.

After obtaining the transparently passed request headers, the definition of omittedTransparentHeaderNames is checked. If it is defined, the corresponding headers are removed. omittedTransparentHeaderNames is case-insensitive. Similarly, if omittedTransparentHeaderNames at the step level is not defined, the definition in endpoints.SYSTEM.ENDPOINT.headers.transparent.omitted is used. If it is also not defined at the endpoint level, the definition in endpoints.SYSTEM.global.headers.transparent.omitted is used.

For example: if the input data contains {account: {name: 'John', token: '******'}} and transparentHeaderNames is defined as account, then two transparently passed headers will be obtained: name=John and token=******. At this time, if omittedTransparentHeaderNames is defined as name, the header finally passed to the upstream service is token=******, and name will be ignored.
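The header-value rules above can be sketched like this; it is our own simplified reading (arrays join with ",", objects expand to multiple headers, null and empty strings are dropped), not the library's implementation:

```javascript
// Sketch of converting one input value into transparently-passed headers.
function toHeaders(name, value) {
  const keep = (v) => v !== null && v !== '';
  if (Array.isArray(value)) {
    // Arrays: join kept elements with ",".
    const joined = value.filter(keep).join(',');
    return keep(joined) ? {[name]: joined} : {};
  }
  if (typeof value === 'object' && value !== null) {
    // Objects: each key becomes its own header.
    return Object.fromEntries(
      Object.entries(value).filter(([, v]) => keep(v)).map(([k, v]) => [k, String(v)]));
  }
  // Other values: stringify, dropping null and empty strings.
  return keep(value) ? {[name]: String(value)} : {};
}

console.log(toHeaders('account', {name: 'John', token: '******'}));
// { name: 'John', token: '******' }
```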
Request headers
There are three ways to transmit request headers to upstream services. In order of priority from high to low, they are headersGenerate,
headers.transparent, and headers. If a header appears in a higher-priority method, the header with the same name generated by a
lower-priority method will be ignored. Note that the matching is case-sensitive.
Normally, if you need to transparently pass the request headers from the client to the upstream service, you should use headers: true in
the pipeline definition. Then you can directly use transparentHeaderNames: headers to obtain all the request headers, and then use
omittedTransparentHeaderNames for necessary filtering.
It should be noted that since the fetch step initiates a new request to the upstream service, its request structure and data will be modified or reset according to requirements. Therefore, even if you need to transparently pass the request headers, some of the headers are still not applicable. So by default, the two headers content-encoding and content-length will be filtered out. No matter how the request headers are generated in the above process, these two headers are always automatically generated by node-fetch.
Tracing in upstream service groups
Use endpoints.SYSTEM.ENDPOINT.trace.header.name or endpoints.SYSTEM.global.trace.header.name to define the trace request header name for
the upstream system. The value will be attempted to be retrieved from the response and stored in the execution context to be passed to the
upstream system within the same SYSTEM.ENDPOINT or SYSTEM.
Installation
Note: since NestJS only supports CommonJS modules, the node-fetch 3 series cannot be imported because it is built in ESM mode.