# http-replay

`@tjq/http-replay` v0.1.3
## Installation

```sh
npm i -D @tjq/http-replay
```
## How It Works

http-replay intercepts the XHR and Fetch requests made by a puppeteer page. In record mode it writes each request and response to a JSON file; in replay mode it answers matching requests from that file, so tests can run without the live API.
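The record/replay idea can be sketched as a store keyed by request URL. This is an assumed design for illustration only; `ReplayStore`, `record`, and `lookup` are hypothetical names, not part of http-replay's API:

```javascript
// A minimal record/replay store sketch (assumed design, not http-replay's
// actual implementation). In record mode, live responses are captured keyed
// by URL; in replay mode, the same key returns the stored response.
class ReplayStore {
  constructor(replay, entries = {}) {
    this.replay = replay;
    this.entries = entries;
  }

  // Record mode: capture the live response for later replay.
  record(url, response) {
    if (!this.replay) this.entries[url] = response;
  }

  // Replay mode: answer from the recording instead of the network;
  // returns null when the URL was never recorded.
  lookup(url) {
    return this.replay ? this.entries[url] ?? null : null;
  }
}
```

In the real library the equivalent lookup would be wired into puppeteer's request interception; the sketch only illustrates the mode switch driven by the `replay` flag.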
## Usage

```js
const setup = require("@tjq/http-replay");
const puppeteer = require("puppeteer");

describe("Browser test", () => {
  it("Does something", async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await setup({
      page,
      dir: "/path/to/mock",
      urls: ["http://localhost:8000/"],
      replay: true,
      id: "mocks",
    });

    await page.goto("http://example.com");
    await browser.close();
  });
});
```
## Configuration
### `page` (puppeteer.Page), Required

The puppeteer page instance that will be running the test and whose requests will be intercepted.
### `dir` (string), Required

The directory in which the generated JSON file will be stored. The `dir` should not have a trailing slash.
### `urls` (string[]), Required

The host names and/or any applicable slugs that should be intercepted and written to the generated JSON file.
### `id` (string), Default: `mocks`

Name of the JSON file that will be written to the `dir`.
### `replay` (boolean), Default: `false`

If `false`, http-replay will record all XHR and Fetch requests made by the browser page instance. The request headers and data, response headers and data, as well as the URL components are written to a JSON file whose path and name are determined by the `dir` and `id` props, respectively.

If `true`, requests made by the browser page instance will be matched against the JSON file identified by the `id` property and responded to accordingly.
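For reference, a recorded file might look roughly like the following. This is an illustrative sketch only; the actual field names and structure that http-replay writes may differ:

```json
{
  "http://localhost:8000/api/": {
    "request": {
      "headers": { "accept": "application/json" },
      "postData": null
    },
    "response": {
      "status": 200,
      "headers": { "content-type": "application/json" },
      "body": "[{\"id\": 1}]"
    }
  }
}
```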
## Motivating Example

In this example, we have a frontend project and a decoupled backend API project, each served over a different port on localhost. While developing and running tests locally, no problems should arise as long as the projects are configured to communicate with each other.

This often breaks down in a CI environment, where the API will not necessarily be available. It is further complicated when performing visual regression tests in CI, which may rely on asynchronous data affecting the render.
To solve this, http-replay can be run locally in record mode, where the developer has ensured access to the API, saving intercepted requests to a local JSON file that can later be used as responses when the API is unavailable.
### File structure

Initial files required include a Jest configuration as well as a test to run:

- jest.record.config.js
- jest.replay.config.js
- package.json
- replay.test.js
### Installing dependencies

```sh
npm i -D jest puppeteer
```
### Modifying the test runner

Jest does not currently allow custom command-line arguments to be passed in the form of flags. To work around this and reuse the same tests for both recording and replaying, create two separate configuration files that supply a global variable of the same name.
```js
// jest.record.config.js
module.exports = {
  testRegex: "./*\\.test\\.js$",
  globals: {
    __REPLAY__: false,
  },
};
```

```js
// jest.replay.config.js
module.exports = {
  testRegex: "./*\\.test\\.js$",
  globals: {
    __REPLAY__: true,
  },
};
```
Create npm scripts in the package.json to target the two configuration files based on the intent of the command.

```json
{
  "scripts": {
    "test:record": "jest --config jest.record.config.js",
    "test:replay": "jest --config jest.replay.config.js"
  }
}
```
The above scripts can be run with `npm run test:record` and `npm run test:replay`, respectively.
### Writing a test

Note that `setup` takes a `urls` array, as described in the Configuration section.

```js
// replay.test.js
const puppeteer = require("puppeteer");
const setup = require("@tjq/http-replay");

describe("Example", function () {
  it("renders with or without a server", async () => {
    const browser = await puppeteer.launch({
      headless: true,
      args: ["--no-sandbox"],
    });
    const page = await browser.newPage();

    await setup({
      page,
      replay: __REPLAY__,
      dir: __dirname,
      id: "example",
      urls: ["localhost:8000/api/"],
    });

    await page.goto("http://localhost:9000/", {
      waitUntil: "networkidle0",
    });

    await browser.close();
  });
});
```
### Running the initial test

To generate responses for the requests made during the previous example, run the following:

```sh
npm run test:record
```

This will create a file named example.json in the test's directory (since `dir` was set to `__dirname`) containing the requests and responses made by the browser to the specified `urls`, as well as their headers and data.
### Running in CI

To ensure that all requests will have a proper response in CI, commit the generated JSON files to your source control. The following GitHub Actions YAML file shows the use of the aforementioned npm scripts to run the tests in replay mode.
```yml
# run_tests.yml
name: Browser Tests
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Build application
        run: npm run build
      - name: Start and serve application
        run: npm run serve &
      - name: Run tests with recorded responses
        run: npm run test:replay
```