@mvmdev/nestjs-rate-limit v0.0.12

License: MIT
Repository: github
Last release: 5 months ago

Description

nestjs-rate-limit is a module that adds configurable rate limiting to Nest applications.

Under the hood it uses rate-limiter-flexible.

Installation

npm i --save @mvmdev/nestjs-rate-limit

Or if you use Yarn:

yarn add @mvmdev/nestjs-rate-limit

Requirements

NestJS v10

Basic Usage

Include Module

First you need to import this module into your main application module:

app.module.ts

import { RateLimitModule } from '@mvmdev/nestjs-rate-limit';

@Module({
    imports: [RateLimitModule],
})
export class ApplicationModule {}

Alternatively, you can register the module globally by calling forRoot with a global configuration object and true as the second argument:

import { RateLimitModule } from '@mvmdev/nestjs-rate-limit';

@Module({
    imports: [RateLimitModule.forRoot({}, true)],
})
export class ApplicationModule {}

Using Global Guard

You can choose to register the guard globally:

app.module.ts

import { APP_GUARD } from '@nestjs/core'
import { RateLimitModule } from '@mvmdev/nestjs-rate-limit';
import { RateLimitGuard } from '@mvmdev/nestjs-rate-limit';

@Module({
    imports: [RateLimitModule],
    providers: [
        {
            provide: APP_GUARD,
            useClass: RateLimitGuard,
        },
    ],
})
export class ApplicationModule {}

With Decorator

You can use the @RateLimit decorator to specify the points and duration for rate limiting on a per-route basis:

app.controller.ts

import { Get, UseGuards } from '@nestjs/common';
import { RateLimit, RateLimitGuard, minutes } from '@mvmdev/nestjs-rate-limit';

@UseGuards(RateLimitGuard)
@RateLimit({
  keyPrefix: 'sign-up',
  points: 1,
  duration: 60,
  blockDuration: minutes(5),
  errorMessage: 'Accounts cannot be created more than once per minute'
})
@Get('/signup')
async signUp() {
  console.log('hello')
}
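The minutes() helper used for blockDuration above presumably converts minutes into the seconds that blockDuration expects; a minimal sketch of that assumption (this re-implementation is illustrative, not the library's source):

```typescript
// Hypothetical re-implementation of the minutes() helper, assuming it
// converts minutes into the seconds used by blockDuration.
const minutes = (n: number): number => n * 60;

console.log(minutes(5)); // blockDuration of 5 minutes -> 300 seconds
```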

Dynamic keyPrefix

import { Get, UseGuards } from '@nestjs/common';
import { RateLimit, RateLimitGuard } from '@mvmdev/nestjs-rate-limit';

@UseGuards(RateLimitGuard)
@RateLimit({
  keyPrefix: () => {
    return 'example'
  },
  points: 1,
  duration: 60,
  customResponseSchema: (rateLimiterResponse) => {
    return { 
      timestamp: '1611479696', 
      message: 'Request has been blocked' 
    }
  }
})
@Get('/example')
async example() {
  console.log('hello')
}

Redis

import { Get, UseGuards } from '@nestjs/common';
import Redis from 'ioredis';
import { RateLimit, RateLimitGuard, minutes } from '@mvmdev/nestjs-rate-limit';

// Replace the placeholder host and credentials with your own.
@UseGuards(RateLimitGuard)
@RateLimit({
  type: 'Redis',
  storeClient: new Redis({
    host: 'your-redis-host',
    port: 6379,
    db: 0,
    password: 'your-redis-password',
    username: 'default'
  }),
  keyPrefix: 'redis',
  points: 3,
  pointsConsumed: 1,
  duration: 5,
  blockDuration: minutes(1)
})
@Get('hello')
getWorld(): string {
  return 'World Hello'
}

Proxies

If your application runs behind a proxy server, check the specific HTTP adapter options (express and fastify) for the trust proxy option and enable it. Doing so will allow you to get the original IP address from the X-Forwarded-For header, and you can override the getTracker() method to pull the value from the header rather than from req.ip. The following example works with both express and fastify:

import { RateLimitGuard } from '@mvmdev/nestjs-rate-limit';

export class RateLimitProxyGuard extends RateLimitGuard {
  protected getTracker(request: Request): string {
    return request.ips.length > 0 ? request.ips[0] : request.ip;
  }
}

SkipRateLimit

Use the @SkipRateLimit() decorator on a method to skip rate limiting for that route.

import { SkipRateLimit } from '@mvmdev/nestjs-rate-limit';

@SkipRateLimit()
async getHello(){
  return 'Hello world';
}

With All Options

@Module({
    imports: [
        // All the values here are defaults.
        RateLimitModule.register({
            for: 'Express',
            type: 'Memory',
            keyPrefix: 'global',
            points: 4,
            pointsConsumed: 1,
            inmemoryBlockOnConsumed: 0,
            duration: 1,
            blockDuration: 0,
            inmemoryBlockDuration: 0,
            queueEnabled: false,
            whiteList: [],
            blackList: [],
            storeClient: undefined,
            insuranceLimiter: undefined,
            storeType: undefined,
            dbName: undefined,
            tableName: undefined,
            tableCreated: undefined,
            clearExpiredByTimeout: undefined,
            execEvenly: false,
            execEvenlyMinDelayMs: undefined,
            indexKeyPrefix: {},
            maxQueueSize: 100,
            omitResponseHeaders: false,
            errorMessage: 'Rate limit exceeded',
            logger: true,
            customResponseSchema: undefined
        }),
    ],
    providers: [
        {
            provide: APP_GUARD,
            useClass: RateLimitGuard,
        },
    ],
})
export class ApplicationModule {}

Fastify based Graphql

If you want to use this library on a Fastify-based GraphQL server, you need to override the GraphQL context in app.module.ts as shown below.

GraphQLModule.forRoot({
    context: ({ request, reply }) => {
        return { req: request, res: reply }
    },
}),

Options

● for

Default: 'Express' Type: 'Express' | 'Fastify' | 'Microservice' | 'ExpressGraphql' | 'FastifyGraphql'

This option specifies the underlying technology your Nest application runs on. A wrong value will prevent the limiter from working.

● type

Default: 'Memory' Type: 'Memory' | 'Redis'

Here you define where the limiter data will be stored. Each option affects limiter performance differently; see the benchmarks below.

● keyPrefix

Default: 'global' Type: string | () => string

Use this to create several limiters with different options for different modules or endpoints.

Set it to an empty string '' if keys should be stored without a prefix.

Note: for some limiters it must satisfy the storage's naming requirements for tables or collections, since keyPrefix may be used as that name.

● points

Default: 4 Type: number

Maximum number of points that can be consumed over duration.

● pointsConsumed

Default: 1 Type: number

You can consume more than 1 point per invocation of the rate limiter.

For instance, if you have a limit of 100 points per 60 seconds and pointsConsumed is set to 10, the user will effectively be able to make 10 requests per 60 seconds.
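That arithmetic can be checked directly; effectiveRequests below is an illustrative name, not part of the library's API:

```typescript
// With a budget of `points` per window and each request deducting
// `pointsConsumed`, the number of allowed requests per window is:
function effectiveRequests(points: number, pointsConsumed: number): number {
  return Math.floor(points / pointsConsumed);
}

console.log(effectiveRequests(100, 10)); // 10 requests per 60-second window
```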

● inMemoryBlockOnConsumed

Default: 0 Type: number

For Redis, Memcached, MongoDB, MySQL, PostgreSQL, etc.

Can be used to protect against DDoS attacks. In-memory blocking works in the current process's memory and applies to the consume method only.

It blocks a key in memory for msBeforeNext milliseconds from the last consume result, if inMemoryBlockDuration is not set. This helps to avoid extra store requests: there is no need to increment the counter on the store if all points are already consumed.

● duration

Default: 1 Type: number

Number of seconds before consumed points are reset.

Keys never expire if duration is 0.

● blockDuration

Default: 0 Type: number

If set to a positive number and more than points are consumed in the current duration, the key is blocked for blockDuration seconds.
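The points/duration/blockDuration semantics can be sketched with a minimal in-memory limiter. This is purely illustrative (SketchLimiter is a made-up name), not the library's implementation, which is backed by rate-limiter-flexible:

```typescript
// Minimal in-memory sketch of points/duration/blockDuration semantics.
type Entry = { consumed: number; resetAt: number; blockedUntil: number };

class SketchLimiter {
  private store = new Map<string, Entry>();
  constructor(
    private points: number,        // max points per window
    private duration: number,      // window length, in seconds
    private blockDuration: number, // extra block once exceeded, in seconds
  ) {}

  // `now` is the current time in seconds; `cost` mirrors pointsConsumed.
  consume(key: string, now: number, cost = 1): boolean {
    const e = this.store.get(key) ??
      { consumed: 0, resetAt: now + this.duration, blockedUntil: 0 };
    if (now < e.blockedUntil) return false;  // still blocked
    if (now >= e.resetAt) {                  // window expired: reset points
      e.consumed = 0;
      e.resetAt = now + this.duration;
    }
    e.consumed += cost;
    this.store.set(key, e);
    if (e.consumed > this.points) {          // over budget: block the key
      if (this.blockDuration > 0) e.blockedUntil = now + this.blockDuration;
      return false;
    }
    return true;
  }
}

const limiter = new SketchLimiter(2, 1, 60); // 2 points per second, block 60 s
console.log(limiter.consume('ip', 0)); // true
console.log(limiter.consume('ip', 0)); // true
console.log(limiter.consume('ip', 0)); // false: over 2 points, blocked 60 s
```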

● inMemoryBlockDuration

Default: 0 Type: number

For Redis, Memcached, MongoDB, MySQL, PostgreSQL, etc.

Blocks a key for inMemoryBlockDuration seconds if inMemoryBlockOnConsumed or more points are consumed. In distributed applications, set it to the same value as blockDuration to get consistent results across all processes.

● whiteList

Default: [] Type: string[]

If an IP is whitelisted, consume always resolves, no matter how many points have been consumed.

● blackList

Default: [] Type: string[]

If an IP is blacklisted, consume is always rejected. Blacklisted IPs are blocked at the code level, not in the store/memory; think of it as a request filter.

● storeClient

Default: undefined

Type: any

Required for Redis, Memcached, MongoDB, MySQL, PostgreSQL, etc.

Must be a redis, ioredis, memcached, mongodb, pg, mysql2, or mysql client, or any other related pool or connection.

● insuranceLimiter

Default: undefined Type: any

For Redis, Memcached, MongoDB, MySQL, PostgreSQL.

Instance of a RateLimiterAbstract extended object, used to store limits when the database responds with an error.

Data from insuranceLimiter is NOT copied back to the parent limiter once the error is gone.

Note: insuranceLimiter automatically sets blockDuration and execEvenly to the same values as the parent to avoid unexpected behaviour.

● execEvenly

Default: false Type: boolean

Delays actions so they are executed evenly over duration. The first action in a duration is executed without delay. All subsequent allowed actions in the current duration are delayed by the formula msBeforeDurationEnd / (remainingPoints + 2), with a minimum delay of duration * 1000 / points. This cuts off load peaks in a way similar to the Leaky Bucket algorithm.

Note: it isn't recommended for a long duration with few points, as it may delay actions for too long with the default execEvenlyMinDelayMs.
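The delay formula above can be sketched directly (evenDelay is an illustrative name, and the minimum delay here assumes the default execEvenlyMinDelayMs of duration * 1000 / points):

```typescript
// Delay applied to each allowed action under execEvenly:
// msBeforeDurationEnd / (remainingPoints + 2), floored at the
// minimum delay of duration * 1000 / points.
function evenDelay(
  msBeforeDurationEnd: number,
  remainingPoints: number,
  durationSec: number,
  points: number,
): number {
  const minDelayMs = (durationSec * 1000) / points;
  return Math.max(msBeforeDurationEnd / (remainingPoints + 2), minDelayMs);
}

// e.g. 900 ms left in a 1-second window, 2 of 4 points remaining:
console.log(evenDelay(900, 2, 1, 4)); // 250 ms (900 / 4 = 225 < 250 minimum)
```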

● execEvenlyMinDelayMs

Default: duration * 1000 / points Type: number

Sets the minimum delay in milliseconds when an action is delayed with execEvenly.

● indexKeyPrefix

Default: {} Type: {}

Object which is used to create combined index by {...indexKeyPrefix, key: 1} attributes.

● omitResponseHeaders

Default: false Type: boolean

Whether the rate limit headers (X-Retry-After, X-RateLimit-Limit, X-Retry-Remaining, X-Retry-Reset) should be omitted from the response.

● errorMessage

Default: 'Rate limit exceeded' Type: string

The errorMessage option changes the error message of the rate limiter exception.

● customResponseSchema

Default: undefined Type: (rateLimiterResponse) => object

The customResponseSchema option lets you provide a custom response body for blocked requests (see the Dynamic keyPrefix example above).

Benchmarks

1000 concurrent clients with maximum 2000 requests per sec during 30 seconds.

1. Memory     0.34 ms
2. Redis      2.45 ms