# es-micro v1.0.0

Micro — Asynchronous HTTP microservices

## Features
- Easy: Designed for usage with `async` and `await`
- Fast: Ultra-high performance (even JSON parsing is opt-in)
- Micro: The whole project is ~260 lines of code
- Agile: Super easy deployment and containerization
- Simple: Oriented for single purpose modules (function)
- Standard: Just HTTP!
- Explicit: No middleware - modules declare all dependencies
- Lightweight: With all dependencies, the package weighs less than a megabyte
## Installation
Important: Micro is only meant to be used in production. In development, you should use micro-dev, which provides you with a tool belt specifically tailored for developing microservices.

To prepare your microservice for running in the production environment, first install micro:
```
npm install --save micro
```

## Usage
Create an `index.js` file and export a function that accepts the standard `http.IncomingMessage` and `http.ServerResponse` objects:
```js
module.exports = (req, res) => {
  res.end('Welcome to Micro')
}
```

Micro provides useful helpers but also handles return values, so you can write it even shorter!
```js
module.exports = () => 'Welcome to Micro'
```

Next, ensure that the `main` property inside `package.json` points to your microservice (which is inside `index.js` in this example case) and add a `start` script:
```json
{
  "main": "index.js",
  "scripts": {
    "start": "micro"
  }
}
```

Once all of that is done, the server can be started like this:
```
npm start
```

And go to this URL: http://localhost:3000 - 🎉
## Command line
```
micro - Asynchronous HTTP microservices

USAGE

    $ micro --help
    $ micro --version
    $ micro [-l listen_uri [-l ...]] [entry_point.js]

    By default micro will listen on 0.0.0.0:3000 and will look first
    for the "main" property in package.json and subsequently for index.js
    as the default entry_point.

    Specifying a single --listen argument will overwrite the default, not supplement it.

OPTIONS

    --help              shows this help message

    -v, --version       displays the current version of micro

    -l, --listen listen_uri
                        specify a URI endpoint on which to listen (see below) -
                        more than one may be specified to listen in multiple places

ENDPOINTS

    Listen endpoints (specified by the --listen or -l options above) instruct micro
    to listen on one or more interfaces/ports, UNIX domain sockets, or Windows named pipes.

    For TCP (traditional host/port) endpoints:

        $ micro -l tcp://hostname:1234

    For UNIX domain socket endpoints:

        $ micro -l unix:/path/to/socket.sock

    For Windows named pipe endpoints:

        $ micro -l pipe:\\.\pipe\PipeName
```

## async & await
Micro is built for usage with async/await. You can read more about async/await here:
```js
const sleep = require('then-sleep')

module.exports = async (req, res) => {
  await sleep(500)
  return 'Ready!'
}
```

### Transpilation
The package takes advantage of native support for `async` and `await`, which is available as of Node.js 8.0.0. We therefore suggest either using at least this version in both development and production (if possible), or transpiling the code using async-to-gen if you can't use the latest Node.js version.

To do that, first install it:
```
npm install --save async-to-gen
```

Then add the transpilation command to the `scripts.build` property inside `package.json`:
```json
{
  "scripts": {
    "build": "async-to-gen input.js > output.js"
  }
}
```

Once these two steps are done, you can transpile the code by running this command:
```
npm run build
```

That's all it takes to transpile by yourself. But just to be clear: only do this if you can't use Node.js 8.0.0! If you can, `async` and `await` will just work right out of the box.
## Port Based on Environment Variable
When you want to set the port using an environment variable, you can use:

```
micro -l tcp://0.0.0.0:$PORT
```

Optionally, you can add a default if it suits your use case:

```
micro -l tcp://0.0.0.0:${PORT-3000}
```

`${PORT-3000}` will allow a fallback to port 3000 when `$PORT` is not defined. Note that this only works in Bash.
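If you start Micro programmatically instead (see Programmatic use below), the same fallback can be expressed directly in code. A minimal sketch:

```js
const micro = require('micro')

const server = micro(async (req, res) => 'Hello')

// Fall back to port 3000 when $PORT is not defined,
// mirroring the ${PORT-3000} shell expansion above
server.listen(process.env.PORT || 3000)
```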
## Body parsing
For parsing the incoming request body, we include the async functions `buffer`, `text` and `json`:
```js
const {buffer, text, json} = require('micro')

module.exports = async (req, res) => {
  const buf = await buffer(req)
  console.log(buf)
  // <Buffer 7b 22 70 72 69 63 65 22 3a 20 39 2e 39 39 7d>

  const txt = await text(req)
  console.log(txt)
  // '{"price": 9.99}'

  const js = await json(req)
  console.log(js.price)
  // 9.99

  return ''
}
```

### API
```js
buffer(req, { limit = '1mb', encoding = 'utf8' })
text(req, { limit = '1mb', encoding = 'utf8' })
json(req, { limit = '1mb', encoding = 'utf8' })
```
- Buffers and parses the incoming body and returns it.
- Exposes an `async` function that can be run with `await`.
- Can be called multiple times, as it caches the raw request body the first time.
- `limit` is the maximum amount of data aggregated before parsing; if it is exceeded, an `Error` is thrown with `statusCode` set to `413` (see Error Handling). It can be a `Number` of bytes or a string like `'1mb'`.
- If JSON parsing fails, an `Error` is thrown with `statusCode` set to `400` (see Error Handling).
For other types of data, check the examples.
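For instance, the parsing limit can be raised per call. A minimal sketch (the 5mb value is just an example):

```js
const {json} = require('micro')

module.exports = async (req, res) => {
  // Accept request bodies up to 5mb instead of the default 1mb;
  // larger bodies cause a 413 error handled by the built-in flow
  const body = await json(req, {limit: '5mb'})
  return body
}
```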
## Sending a different status code
So far we have used `return` to send data to the client. `return 'Hello World'` is the equivalent of `send(res, 200, 'Hello World')`.
```js
const {send} = require('micro')

module.exports = async (req, res) => {
  const statusCode = 400
  const data = { error: 'Custom error message' }

  send(res, statusCode, data)
}
```

### send(res, statusCode, data = null)
- Use `require('micro').send`.
- `statusCode` is a `Number` with the HTTP status code, and must always be supplied.
- If `data` is supplied, it is sent in the response. Different input types are processed appropriately, and `Content-Type` and `Content-Length` are automatically set.
  - `Stream`: `data` is piped as an `octet-stream`. Note: it is your responsibility to handle the `error` event in this case (usually, simply logging the error and aborting the response is enough).
  - `Buffer`: `data` is written as an `octet-stream`.
  - `object`: `data` is serialized as JSON.
  - `string`: `data` is written as-is.
- If JSON serialization fails (for example, if a cyclical reference is found), a `400` error is thrown. See Error Handling.
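As an illustration of the `Stream` case above, here is a minimal sketch (the file path is hypothetical) that pipes a file and handles the stream's `error` event itself:

```js
const fs = require('fs')
const {send} = require('micro')

module.exports = (req, res) => {
  // Hypothetical file path, purely for illustration
  const stream = fs.createReadStream('./report.csv')

  // When `data` is a Stream, handling the 'error' event is our job:
  // log the error and abort the response
  stream.on('error', err => {
    console.error(err)
    res.end()
  })

  send(res, 200, stream)
}
```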
## Programmatic use
You can use Micro programmatically by requiring Micro directly:
```js
const micro = require('micro')
const sleep = require('then-sleep')

const server = micro(async (req, res) => {
  await sleep(500)
  return 'Hello world'
})

server.listen(3000)
```

### micro(fn)
- This function is exposed as the `default` export.
- Use `require('micro')`.
- Returns a `http.Server` that uses the provided `function` as the request handler.
- The supplied function is run with `await`, so it can be `async`.
### sendError(req, res, error)
- Use `require('micro').sendError`.
- Used as the default handler for thrown errors.
- Automatically sets the status code of the response based on `error.statusCode`.
- Sends the `error.message` as the body.
- Stacks are printed out with `console.error` and, during development (when `NODE_ENV` is set to `'development'`), also sent in responses.
- Usually, you don't need to invoke this method yourself, as you can use the built-in error handling flow with `throw`.
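If you do want to invoke it yourself, for example to log before delegating to the default behavior, here is a minimal sketch (the `doWork` helper is hypothetical):

```js
const {sendError} = require('micro')

// Hypothetical business logic, purely for illustration
const doWork = async req => 'done'

module.exports = async (req, res) => {
  try {
    return await doWork(req)
  } catch (err) {
    console.error('request failed:', err.message)
    // Delegate to micro's default error response
    sendError(req, res, err)
  }
}
```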
### createError(code, msg, orig)
- Use `require('micro').createError`.
- Creates an error object with a `statusCode`.
- Useful for easily throwing errors with HTTP status codes, which are interpreted by the built-in error handling.
- `orig` sets `error.originalError`, which identifies the original error (if any).
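For example, wrapping a caught low-level error while surfacing a clean HTTP status. A minimal sketch (the `findUser` lookup is hypothetical):

```js
const {createError} = require('micro')

// Hypothetical lookup that fails, purely for illustration
const findUser = async id => { throw new Error('connection refused') }

module.exports = async (req, res) => {
  try {
    return await findUser(1)
  } catch (err) {
    // Respond with 503 while keeping the low-level error as originalError
    throw createError(503, 'Service temporarily unavailable', err)
  }
}
```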
## Error Handling
Micro allows you to write robust microservices. This is accomplished primarily by bringing sanity back to error handling and avoiding callback soup.
If an error is thrown and not caught by you, the response will automatically be `500`. Important: error stacks will be printed with `console.error` and, during development mode (when the env variable `NODE_ENV` is `'development'`), they will also be included in the responses.

If the `Error` object that's thrown contains a `statusCode` property, that's used as the HTTP code to be sent. Let's say you want to write a rate limiting module:
```js
const rateLimit = require('my-rate-limit')

module.exports = async (req, res) => {
  await rateLimit(req)
  // ... your code
}
```

If the API endpoint is abused, it can throw an error with `createError` like so:
```js
if (tooMany) {
  throw createError(429, 'Rate limit exceeded')
}
```

Alternatively, you can create the `Error` object yourself:
```js
if (tooMany) {
  const err = new Error('Rate limit exceeded')
  err.statusCode = 429
  throw err
}
```

The nice thing about this model is that the `statusCode` is merely a suggestion. The user can override it:
```js
try {
  await rateLimit(req)
} catch (err) {
  if (err.statusCode === 429) {
    // perhaps send 500 instead?
    send(res, 500)
  }
}
```

If the error is based on another error that Micro caught, like a `JSON.parse` exception, then `originalError` will point to it. If a generic error is caught, the status will be set to `500`.
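A minimal sketch of inspecting `originalError` after a failed parse (assuming the client sent invalid JSON):

```js
const {json} = require('micro')

module.exports = async (req, res) => {
  try {
    return await json(req)
  } catch (err) {
    // micro sets statusCode to 400 and attaches the underlying
    // JSON.parse exception as err.originalError
    console.error('caused by:', err.originalError)
    throw err // re-throw so the built-in handler sends the response
  }
}
```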
To set up your own error handling mechanism, you can use composition in your handler:
```js
const {send} = require('micro')

const handleErrors = fn => async (req, res) => {
  try {
    return await fn(req, res)
  } catch (err) {
    console.log(err.stack)
    send(res, 500, 'My custom error!')
  }
}

module.exports = handleErrors(async (req, res) => {
  throw new Error('What happened here?')
})
```

## Testing
Micro makes tests compact and a pleasure to read and write. We recommend ava, a highly parallel, minimal test framework with built-in support for async tests:
```js
const micro = require('micro')
const test = require('ava')
const listen = require('test-listen')
const request = require('request-promise')

test('my endpoint', async t => {
  const service = micro(async (req, res) => {
    micro.send(res, 200, {
      test: 'woot'
    })
  })

  const url = await listen(service)
  const body = await request(url)

  t.deepEqual(JSON.parse(body).test, 'woot')
  service.close()
})
```

Look at test-listen for a function that returns a URL with an ephemeral port every time it's called.
## Contributing
- Fork this repository to your own GitHub account and then clone it to your local device
- Link the package to the global module directory: `npm link`
- Within the module you want to test your local development instance of Micro, just link it to the dependencies: `npm link micro`. Instead of the default one from npm, Node.js will now use your clone of Micro!
As always, you can run the AVA and ESLint tests using: `npm test`
## Credits
Thanks to Tom Yandell and Richard Hodgson for donating the name "micro" on npm!
## Authors
- Guillermo Rauch (@rauchg) - ZEIT
- Leo Lamprecht (@notquiteleo) - ZEIT
- Tim Neutkens (@timneutkens) - ZEIT