@fgiova/undici-rest-client

Simple REST client using undici

Description

This is a simple REST client that uses undici as its HTTP client. It supports a simple retry mechanism using exponential backoff or a delay based on the retry-after HTTP header, and it implements a simple LRU cache for idempotent HTTP methods.

Note: for Node.js 16 use version 1.x; version 2.x supports only Node.js >= 18.

Installation

npm install @fgiova/undici-rest-client

Usage

import { LRUCache } from "lru-cache"; // cache implementation assumed to be the lru-cache package
import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({
    baseUrl: "https://foo.bar.org",
    retry: {
        httpCodes: [503, 429],
        baseTimeout: 1000,
        maxTimeout: 10000,
        maxRetry: 5,
        backoff: (retryCount) => 2 ** retryCount * 1000,
    },
    cache: new LRUCache<string, any>({ max: 10 }),
});

const getResponse = await client.get("/foo/bar", {
    headers: {
        "x-foo": "bar",
    },
    ttl: 1000,
    requestKey: "foo-bar",
});

const postResponse = await client.post("/foo/bar", {
    headers: {
        "x-foo": "bar",
    },
    ttl: 1000,
    requestKey: "foo-bar",
    body: {
        foo: "bar",
    },
});
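
As a rough sketch of consuming the result, assuming the resolved Response<T> exposes the HTTP status code and the parsed body (the property names statusCode and body are assumptions here; check the package's type definitions for the actual shape):

import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({ baseUrl: "https://foo.bar.org" });

// Hypothetical payload type, for illustration only.
interface FooBar {
    foo: string;
}

const res = await client.get<FooBar>("/foo/bar");
// statusCode and body are assumed property names on Response<T>.
console.log(res.statusCode);
console.log(res.body.foo);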

Client Options

| Option  | Type                   | Default | Description                            |
|---------|------------------------|---------|----------------------------------------|
| baseUrl | string                 |         | The base URL to be used for the client |
| retry   | Retry Options          |         | The retry options                      |
| cache   | LRUCache<string, any>  |         | The LRU cache instance                 |
| undici  | Undici Options         |         | The undici options                     |

Retry Options

| Option      | Type                           | Default                      | Description                  |
|-------------|--------------------------------|------------------------------|------------------------------|
| httpCodes   | number[]                       | 502, 503, 429, 408, 504, 599 | The HTTP codes to be retried |
| baseTimeout | number                         | 300                          | The base timeout in ms       |
| maxTimeout  | number                         | 30000                        | The max timeout in ms        |
| maxRetry    | number                         | 3                            | The max number of retries    |
| backoff     | (retryCount: number) => number | exponential backoff          | The backoff function         |
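
For example, a client that retries only on 429 and 503 with a linear backoff could look like this (a sketch; the backoff return value is assumed to be a delay in ms, as in the usage example above, and whether retryCount starts at 0 or 1 is not documented here):

import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({
    baseUrl: "https://foo.bar.org",
    retry: {
        httpCodes: [429, 503],  // retry only on these statuses
        baseTimeout: 500,       // ms
        maxTimeout: 5000,       // ms, upper bound for a single delay
        maxRetry: 4,
        // Linear backoff: 500ms, 1000ms, 1500ms, ... (assumes retryCount starts at 0)
        backoff: (retryCount) => 500 * (retryCount + 1),
    },
});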

Undici Options

| Option       | Type         | Default | Description                      |
|--------------|--------------|---------|----------------------------------|
| clientOption | Pool.Options |         | The undici Pool options          |
| pipelining   | number       |         | The number of pipelined requests |
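
A sketch of passing undici options; nesting clientOption and pipelining under the undici key is inferred from the tables above, and the connections field comes from undici's Pool.Options:

import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({
    baseUrl: "https://foo.bar.org",
    undici: {
        // undici Pool.Options; `connections` limits concurrent sockets.
        clientOption: {
            connections: 10,
        },
        pipelining: 1,
    },
});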

RequestOptions

| Option     | Type                   | Default | Description                 |
|------------|------------------------|---------|-----------------------------|
| headers    | Record<string, string> |         | The HTTP headers            |
| body       | any                    |         | The HTTP body               |
| ttl        | number                 |         | The TTL for the cache entry |
| requestKey | string                 |         | The key for the cache entry |
| path       | string                 |         | The path for the request    |
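
Using the generic request method directly, a call would carry the path and, presumably, the HTTP method; the method field is an assumption inferred from the Omit<RequestOptions, "method"> signatures of the convenience methods below:

import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({ baseUrl: "https://foo.bar.org" });

const res = await client.request({
    // `method` is assumed to be part of RequestOptions, based on the
    // Omit<RequestOptions, "method"> signatures of the convenience methods.
    method: "POST",
    path: "/foo/bar",
    headers: { "x-foo": "bar" },
    body: { foo: "bar" }, // plain object → content-type: application/json is added
});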

Notes:

- The cache is a simple LRU cache with a max size of 1000 items and a default TTL of 30 seconds. The TTL can be overridden per request using the ttl option.
- The cache key is generated from the request method, path and body; it can be overridden using the requestKey option.
- The cache is disabled for non-idempotent requests.
- When the body is a plain object, the content-type: application/json header is added to the request.
- When the response is not compressible (typically a binary response), an ArrayBuffer is returned.
- Parallel idempotent requests to the same resource are deduplicated.
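
A sketch of how the cache options above combine in practice (cache hits and deduplication happen inside the client, so this is only illustrative; the cache instance is assumed to come from the lru-cache package, as the LRUCache type suggests):

import { LRUCache } from "lru-cache";
import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({
    baseUrl: "https://foo.bar.org",
    cache: new LRUCache<string, any>({ max: 100 }),
});

// Same idempotent method and requestKey → the second call can be served
// from the cache (within the 5s TTL) instead of hitting the network.
const first = await client.get("/foo/bar", { requestKey: "foo-bar", ttl: 5000 });
const second = await client.get("/foo/bar", { requestKey: "foo-bar", ttl: 5000 });

// Parallel idempotent requests to the same resource are deduplicated:
// both promises resolve from a single underlying HTTP request.
const [a, b] = await Promise.all([
    client.get("/foo/bar", { ttl: 5000 }),
    client.get("/foo/bar", { ttl: 5000 }),
]);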

Methods

request

request<T = any>(options: RequestOptions): Promise<Response<T>>;

get

get<T = any>(path: string, options?: Omit<RequestOptions, "path" | "method" | "body" >): Promise<Response<T>>;

post

post<T = any>(path: string, options?: Omit<RequestOptions, "path" | "method">): Promise<Response<T>>;

put

put<T = any>(path: string, options?: Omit<RequestOptions, "path" | "method">): Promise<Response<T>>;

patch

patch<T = any>(path: string, options?: Omit<RequestOptions, "path" | "method">): Promise<Response<T>>;

delete

delete<T = any>(path: string, options?: Omit<RequestOptions, "path" | "method" | "body" | "ttl">): Promise<Response<T>>;
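
A minimal end-to-end sketch of the convenience methods (the Widget type and the /widgets paths are hypothetical; note that delete accepts neither body nor ttl, per its signature):

import { RestClient } from "@fgiova/undici-rest-client";

const client = new RestClient({ baseUrl: "https://foo.bar.org" });

// Hypothetical resource type, for illustration only.
interface Widget {
    id: string;
    name: string;
}

const created = await client.post<Widget>("/widgets", { body: { name: "foo" } });
const updated = await client.put<Widget>("/widgets/1", { body: { name: "bar" } });
await client.delete("/widgets/1"); // no body or ttl options on delete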

License

Licensed under MIT.
