
@liknens/tus-s3-store

👉 Note: since 1.0.0, packages are split and published under the @tus scope. The old package, tus-node-server, is considered unstable and will only receive security fixes. Make sure to use the new package, currently in beta at 1.0.0-beta.5.

Contents

Install, Use, API, Extensions, Examples, Types, Compatibility, Contribute, License

Install

In Node.js (16.0+), install with npm:

npm install @liknens/tus-s3-store

Use

const {Server} = require('@liknens/tus-server')
const {S3Store} = require('@liknens/tus-s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024, // each uploaded part will be ~8MB
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },
})
const server = new Server({path: '/files', datastore: s3Store})
// ...
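
To start accepting uploads, start the server's built-in HTTP listener. This is a short sketch that assumes @liknens/tus-server keeps the listen() method from upstream @tus/server; the host and port values are placeholders.

const host = '127.0.0.1'
const port = 1080
// Start the built-in Node.js HTTP server on the given host and port.
server.listen({host, port})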

API

This package exports S3Store. There is no default export.
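
Because the export is named, import it by name. The CommonJS form below is what the rest of this README uses; the ESM form is simply the equivalent named import.

// CommonJS
const {S3Store} = require('@liknens/tus-s3-store')

// ESM / TypeScript (equivalent named import):
// import {S3Store} from '@liknens/tus-s3-store'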

new S3Store(options)

Creates a new AWS S3 store with options.

options.bucket

The bucket name.

options.partSize

The preferred size for parts sent to S3. It cannot be lower than 5MB or higher than 500MB. The server calculates the optimal part size, which takes this value into account, but may increase it so that the upload does not exceed the S3 limit of 10,000 parts.
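
As a rough illustration of that calculation (a sketch of the rule described above, not the store's actual code), the effective part size must be at least the upload length divided by 10,000:

// Hypothetical helper illustrating the 10,000-part constraint.
const MAX_PARTS = 10_000

function effectivePartSize(preferredPartSize, uploadLength) {
  // A part must be large enough that the whole upload fits in 10,000 parts.
  const minimumForLimit = Math.ceil(uploadLength / MAX_PARTS)
  return Math.max(preferredPartSize, minimumForLimit)
}

// With an 8MB preference, a 200GB upload is bumped to ~20MB parts.
console.log(effectivePartSize(8 * 1024 * 1024, 200 * 1024 ** 3))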

options.s3ClientConfig

Options to pass to the AWS S3 SDK. Check out the S3ClientConfig docs for the supported options. You need to set at least the region, the bucket name, and your preferred method of authentication.
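
If you prefer not to pass keys explicitly, a minimal configuration (a sketch, assuming the AWS SDK's default credential provider chain picks up credentials from environment variables, shared config, or an attached IAM role) only needs the bucket and region:

const {S3Store} = require('@liknens/tus-s3-store')

const minimalStore = new S3Store({
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    // No explicit credentials: the SDK falls back to its default provider chain.
  },
})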

Extensions

The tus protocol supports optional extensions. Below is a table of the supported extensions in @liknens/tus-s3-store.

Extension                 @liknens/tus-s3-store
Creation
Creation With Upload
Expiration
Checksum
Termination
Concatenation

Termination

After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to set an S3 Lifecycle configuration to abort incomplete multipart uploads.
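
If you want such a lifecycle rule in place, it can be created with the same AWS SDK the store uses. The sketch below is an illustration, not part of this package; it aborts incomplete multipart uploads after 7 days.

const {S3Client, PutBucketLifecycleConfigurationCommand} = require('@aws-sdk/client-s3')

const client = new S3Client({region: process.env.AWS_REGION})

client
  .send(
    new PutBucketLifecycleConfigurationCommand({
      Bucket: process.env.AWS_BUCKET,
      LifecycleConfiguration: {
        Rules: [
          {
            ID: 'abort-incomplete-multipart-uploads',
            Status: 'Enabled',
            Filter: {Prefix: ''},
            // Abort multipart uploads that have not completed within 7 days,
            // so their parts stop consuming storage.
            AbortIncompleteMultipartUpload: {DaysAfterInitiation: 7},
          },
        ],
      },
    })
  )
  .catch(console.error)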

Examples

Example: using credentials to fetch credentials inside an AWS container

The credentials config is passed directly into the AWS SDK, so you can refer to the AWS docs for the supported values of credentials.

const aws = require('aws-sdk')
const {Server} = require('@liknens/tus-server')
const {S3Store} = require('@liknens/tus-s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
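    // aws-sdk v2 ECSCredentials resolves temporary credentials from the ECS container metadata endpoint.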
    credentials: new aws.ECSCredentials({
      httpOptions: {timeout: 5000},
      maxRetries: 10,
    }),
  },
})
const server = new Server({path: '/files', datastore: s3Store})
// ...
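
The snippet above relies on the v2 aws-sdk credential class. If your project only uses AWS SDK v3, a comparable approach (shown here as a sketch, not something this package documents) is the fromContainerMetadata provider from @aws-sdk/credential-providers:

const {fromContainerMetadata} = require('@aws-sdk/credential-providers')
const {S3Store} = require('@liknens/tus-s3-store')

const containerStore = new S3Store({
  partSize: 8 * 1024 * 1024,
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    // Resolve temporary credentials from the ECS/Fargate container metadata endpoint.
    credentials: fromContainerMetadata({timeout: 5000, maxRetries: 10}),
  },
})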

Types

This package is fully typed with TypeScript.

Compatibility

This package requires Node.js 16.0+.

Contribute

See contributing.md.

License

MIT © tus
