@blocklet/benchmark v0.0.44 • License: ISC • Last release: 5 months ago

@blocklet/benchmark

Chinese documentation (中文文档)

A powerful, flexible HTTP API benchmarking tool tailored for Blocklet and general Node.js services. Supports multiple modes (RPS, concurrency), ramp-up testing, AI-powered analysis, and outputs performance charts and logs.

📦 Installation

```bash
npm install -g @blocklet/benchmark
```

Or use it directly via npx:

```bash
npx @blocklet/benchmark
```

🚀 Quick Start

Step 1: Initialize Config File

```bash
npx @blocklet/benchmark init --type server
```

Other available types:

  • discuss-kit
  • tool
  • You can also combine them: --type server,tool

This will generate a benchmark.yml file in your current directory.

Step 2: Run the Benchmark

```bash
npx @blocklet/benchmark run
```

Options:

| Option | Description | Default |
| --- | --- | --- |
| `--config` | Path to config file | `benchmark.yml` |
| `--format` | Output format: `row`, `json`, or `table` | `table` |
| `--mode` | Benchmark mode: `rps`, `concurrent`, or `all` | `all` |

🧩 Configuration

Here's a sample benchmark.yml and explanation of the fields:

```yaml
origin: https://example.blocklet.dev
concurrency: 100
timelimit: 20
ramp: 20
data:
  loginToken: your-login-token
  teamDid: your-team-did
  userDid: your-user-did
body: '{"example": true}'
logError: true
logResponse: false
aiAnalysis:
  enable: true
  language: en
  techStack: node.js
  model: gpt-4o
apis:
  - name: Get User Info
    api: /api/user/info
    method: GET
    assert:
      id: not-null
  - name: Update Status
    api: /api/status
    method: POST
    body: '{"status": "ok"}'
    assert:
      success: true
```

Top-Level Fields

| Field | Description |
| --- | --- |
| `origin` | Base URL of the API server |
| `concurrency` | Number of concurrent users |
| `timelimit` | Duration of the test per mode (in seconds) |
| `ramp` | (Optional) Ramp step to gradually increase concurrency |
| `data` | Dynamic values injected into API paths or headers |
| `body` | Default request body |
| `logError` | Print error logs to the console |
| `logResponse` | Print full API responses |
| `aiAnalysis` | Enable GPT-powered result interpretation (requires `OPENAI_CLIENT` in `.env`) |
| `sitemap` | (Optional) Load API definitions from a remote sitemap endpoint (see the sitemap section below) |

API List (apis)

Each item defines one endpoint to test:

| Field | Description |
| --- | --- |
| `name` | Human-readable name of the test case |
| `api` | API path (joined with `origin`) |
| `method` | HTTP method (GET, POST, etc.) |
| `body` | Request body (for POST/PUT) |
| `assert` | Assertions on the response (supports `not-null`, `null`, or fixed values) |
| `only` | If `true`, run only this endpoint |
| `skip` | If `true`, skip this endpoint |
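
The assertion rules above can be sketched as follows. This is a minimal illustration of the documented semantics, not the tool's actual source; `checkAssert` is a hypothetical helper name.

```javascript
// Sketch (assumed, not the actual implementation) of `assert` semantics:
// `not-null` and `null` are keywords; any other value is compared for
// equality against the corresponding response field.
function checkAssert(response, asserts) {
  return Object.entries(asserts).every(([field, expected]) => {
    const actual = response[field];
    if (expected === 'not-null') return actual !== null && actual !== undefined;
    if (expected === 'null') return actual === null || actual === undefined;
    return actual === expected;
  });
}

console.log(checkAssert({ id: 42 }, { id: 'not-null' }));       // → true
console.log(checkAssert({ success: true }, { success: true })); // → true
```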

🌐 Using sitemap to Auto-Load API Definitions

To simplify and centralize API configuration, @blocklet/benchmark supports loading APIs dynamically from a remote sitemap. This allows you to avoid manually writing all your API definitions in the benchmark.yml file, and instead retrieve them from a maintained endpoint.

🧩 Configuration

You can enable and configure the sitemap in your benchmark.yml like this:

```yaml
sitemap:
  enable: true
  url: 'https://your-server-url.com/sitemap'
```

  • enable: Set to true to activate the feature.
  • url: URL of the remote endpoint that returns the sitemap JSON.

📌 If enable is set to false, or the request to the sitemap fails, it will fall back to using the apis defined in your benchmark.yml file.
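
The fallback behavior can be sketched like this. This is an assumption about the loading logic, not the actual source; `loadApis` is a hypothetical helper, and the global `fetch` requires Node.js 18+.

```javascript
// Sketch (assumed): load APIs from the sitemap endpoint when enabled,
// falling back to the `apis` from benchmark.yml if the feature is
// disabled or the request fails.
async function loadApis(config) {
  const { sitemap, apis = [] } = config;
  if (!sitemap || !sitemap.enable) return apis;
  try {
    const res = await fetch(sitemap.url); // global fetch, Node.js 18+
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = await res.json();
    return Array.isArray(body.apis) ? body.apis : apis;
  } catch {
    return apis; // fall back to the local definitions
  }
}

loadApis({ apis: [{ name: 'local' }], sitemap: { enable: false } })
  .then((apis) => console.log(apis[0].name)); // → local
```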


📝 Expected Sitemap Response Format

The remote endpoint should return a JSON response with the following structure:

```json
{
  "apis": [
    {
      "name": "/api/example",
      "api": "/api/example"
    },
    {
      "name": "/api/full",
      "api": "/api/full",
      "method": "GET",
      "cookie": "login_token=$$loginToken",
      "format": "json",
      "headers": {
        "Content-Type": "application/json; charset=utf-8"
      },
      "skip": false,
      "only": false,
      "body": {},
      "assert": {}
    }
  ],
  "data": {
    "key": "optional shared data"
  }
}
```
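
The `cookie` field above uses a `$$loginToken` placeholder, which matches the `loginToken` key in the `data` section. A minimal sketch of how such placeholders could be substituted (assumed behavior, not the actual source; `injectData` is a hypothetical name):

```javascript
// Sketch (assumed): replace `$$key` placeholders with values from the
// `data` section, leaving unknown placeholders untouched.
function injectData(template, data) {
  return template.replace(/\$\$(\w+)/g, (match, key) =>
    key in data ? String(data[key]) : match
  );
}

console.log(injectData('login_token=$$loginToken', { loginToken: 'abc123' }));
// → login_token=abc123
```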

📊 Output

All results are saved to the benchmark-output folder:

  • benchmark.log: All logs
  • 0-benchmark-raw.yml: Raw result file
  • *.png: Chart images (RPS, latency percentiles)
  • console output: A summary table of all benchmark results

If aiAnalysis is enabled and OPENAI_CLIENT is set in .env, a GPT-powered summary of the test will be provided in the console.

📘 License

MIT License
