s3-api v1.0.1

Overview

The s3-api module provides a simple, lightweight wrapper around the AWS S3 API (via the AWS SDK v3). It greatly simplifies things like uploading and downloading files to/from S3, as well as treating it like a key/value store.

Features

  • Uses AWS SDK v3.
  • Fully async/await, with support for classic callbacks.
  • Use S3 as a key/value store.
  • Use JSON, buffers, streams or files.
  • Upload or download multiple files or entire directories recursively.
  • Optional gzip compression and decompression for files and streams.
  • Automatically handles uploading files using multipart chunks.
  • Automatically handles pagination when listing files.
  • Automatic retries with exponential backoff.
  • Logging and perf helpers.
  • Optional caching layer for JSON files.

Setup

Use npm to install the module locally:

npm install s3-api

API Usage

To use the API in your code, require the module and instantiate the class:

const S3 = require('s3-api');

let s3 = new S3({
	credentials: {
		accessKeyId: "YOUR_ACCESS_KEY_HERE",
		secretAccessKey: "YOUR_SECRET_KEY_HERE"
	},
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/'
});

The class constructor expects an object, which accepts several different properties (see below). At the very least you should specify a bucket and a prefix. You may also need to specify credentials, depending on your setup. The prefix is prepended onto all S3 keys, and is a great way to keep your app's S3 data in an isolated area when sharing a bucket.

Once you have your class instance created, call one of the available API methods (see API Reference for list). Example:

try {
	let args = await s3.uploadFile({ localFile: '/path/to/image.gif', key: 's3dir/myfile.gif' });
	// `args.meta` will be the metadata object from S3
}
catch(err) {
	// handle error here
}

The properties of the result args object will vary based on the API call. In the examples below, args is destructured into local variables using the let {...} = syntax, known as destructuring assignment. Example:

try {
	let { files, bytes } = await s3.list({ remotePath: 'mydir' });
	// `files` will be an array of file objects, each with `key`, `size` and `mtime` props.
	// `bytes` is the total bytes of all listed files.
}
catch(err) {
	// handle error here
}

Please note that the local variables must be named exactly as shown above (e.g. files, bytes in this case), as they are being yanked from an object. You can omit specific variables if you don't care about them, e.g. let { files } = (omitting bytes). If you don't want to declare new local variables for the object properties, just use the let args = syntax instead.

It is highly recommended that you instantiate the S3 API class one time, and reuse it for the lifetime of your application. This is because the library reuses network connections to reduce S3 latency; each time you instantiate a new class it has to open new connections.
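
For example, one common pattern is to create the instance in a shared module and require it wherever needed. This is a minimal sketch; the filename and option values are hypothetical:

// s3-shared.js (hypothetical shared module)
const S3 = require('s3-api');

// create a single instance for the whole app, so connections are reused
module.exports = new S3({
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/'
});

Then in any other file, simply require the shared module: let s3 = require('./s3-shared.js');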

Key / Value Store

If you want to use S3 as a key/value store, then this is the library for you. The put() and get() API calls store and fetch objects, serialized to/from JSON behind the scenes. Example:

try {
	// store a record
	await s3.put({ key: 'users/kermit', value: { animal: 'frog', color: 'green' } });
	
	// fetch a record
	let { data } = await s3.get({ key: 'users/kermit' });
	console.log(data); // { "animal": "frog", "color": "green" }
}
catch(err) {
	// handle error here
}

See put() and get() for more details.

Caching

You can enable optional caching for JSON records, to store them in RAM for a given TTL, or up to a specific item count. Enable this feature by passing a cache object to the class constructor with additional settings. Example:

const S3 = require('s3-api');

let s3 = new S3({
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/',
	cache: {
		maxAge: 3600
	}
});

This would cache all JSON files fetched using get(), and stored using put(), in memory for up to an hour (3600 seconds). You can also specify other limits, including a total key count, and restrict caching to specific S3 keys by regular expression:

let s3 = new S3({
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/',
	cache: {
		maxAge: 3600,
		maxItems: 1000,
		keyMatch: /^MYAPP\/MYDIR/
	}
});

This would limit cached objects to a 1-hour TTL and 1,000 total items (the oldest keys are expunged first), and also only cache S3 keys that match the regular expression /^MYAPP\/MYDIR/.

Note that storing records via put() will always go to S3. This is a read cache, not a write cache. However, objects stored to S3 via put() may also be stored in the cache, if the key matches your keyMatch config property.

Remember that caching only happens for JSON records fetched using get(), and stored using put(). It does not happen for files, buffers or streams.

Using Files

The S3 API library provides wrappers for easily managing files in S3. Here is an example of uploading and downloading a file:

try {
	// upload file
	await s3.uploadFile({ localFile: '/path/to/image.gif', key: 's3dir/myfile.gif' });
	
	// download file
	await s3.downloadFile({ key: 's3dir/myfile.gif', localFile: '/path/to/image.gif' });
}
catch(err) {
	// handle error here
}

Streams are always used behind the scenes, so this can handle extremely large files without using significant memory. When downloading, the parent directories for the destination file will automatically be created if needed.

See uploadFile() and downloadFile() for more details.

Multiple Files

You can upload or download multiple files in one call, including entire directories and their nested subdirectories. Here is how to do this:

try {
	// upload directory
	await s3.uploadFiles({ localPath: '/path/to/images', remotePath: 's3dir/uploadedimages' });
	
	// download directory
	await s3.downloadFiles({ remotePath: 's3dir/uploadedimages', localPath: '/path/to/images' });
}
catch(err) {
	// handle error here
}

This would upload the entire contents of the local /path/to/images directory, and place the contents into the S3 key s3dir/uploadedimages (i.e. using it as a prefix). Nested directories are automatically traversed as well. To control which files are uploaded or downloaded, use the filespec property:

try {
	// upload selected files
	await s3.uploadFiles({ localPath: '/path/to/images', remotePath: 's3dir/uploadedimages', filespec: /\.gif$/ });
	
	// download selected files
	await s3.downloadFiles({ remotePath: 's3dir/uploadedimages', localPath: '/path/to/images', filespec: /\.gif$/ });
}
catch(err) {
	// handle error here
}

This would only upload and download files with names ending in .gif. Note that the filespec only matches filenames, not directory paths. See uploadFiles() and downloadFiles() for more details.

Compression

The S3 API library can handle gzip compression and decompression for you. To enable this, set compress to true for compression on upload, or decompress to true for decompression on download. Example use:

try {
	// upload file w/compression
	await s3.uploadFile({ localFile: '/path/to/report.txt', key: 's3dir/report.txt.gz', compress: true });
	
	// download file w/decompression
	await s3.downloadFile({ key: 's3dir/report.txt.gz', localFile: '/path/to/report.txt', decompress: true });
}
catch(err) {
	// handle error here
}

To control the gzip compression level and other settings, specify a gzip property in your class constructor:

let s3 = new S3({
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/',
	gzip: {
		level: 6,
		memLevel: 8
	}
});

See the Node Zlib Class Options docs for more on these settings.

When compressing multiple files for upload, you can specify an S3 key suffix (to append .gz to all filenames for example):

try {
	// upload directory w/compression and suffix
	await s3.uploadFiles({ localPath: '/path/to/images', remotePath: 's3dir/uploadedimages', compress: true, suffix: '.gz' });
}
catch(err) {
	// handle error here
}

And similarly, when downloading with decompression you can use strip to strip off the .gz for the decompressed files:

try {
	// download directory w/decompression and strip
	await s3.downloadFiles({ remotePath: 's3dir/uploadedimages', localPath: '/path/to/images', decompress: true, strip: /\.gz$/ });
}
catch(err) {
	// handle error here
}

Threads

When uploading, downloading or deleting multiple files, you can specify a number of threads to use. This defaults to 1, meaning operate on a single file at a time, but S3 often benefits from multiple threads, due to connection overhead and service lag. To increase the thread count, specify a threads property:

try {
	// upload directory
	await s3.uploadFiles({ localPath: '/path/to/images', remotePath: 's3dir/uploadedimages', threads: 4 });
	
	// download directory
	await s3.downloadFiles({ remotePath: 's3dir/uploadedimages', localPath: '/path/to/images', threads: 4 });
}
catch(err) {
	// handle error here
}

However, please be careful when using multiple threads with compression. All gzip operations run on the local CPU, not in S3, so you can easily overwhelm a server this way. It is recommended that you keep the threads at the default when using compression.

Pinging Objects

To "ping" an object is to quickly check for its existence and fetch basic information about it, without downloading the full contents. This is typically called "head" in HTTP parlance (i.e. "HTTP HEAD"), and thus the S3 API call is named head(). Example:

try {
	// ping a remote object
	let { meta } = await s3.head({ key: 's3dir/myfile.gif' });
	console.log(meta);
}
catch (err) {
	// handle error here
}

The meta object returned will have the object's size in bytes (size), and its modification date as an Epoch timestamp (mtime). If the object does not exist, an error will be thrown.

Listing Objects

To generate a listing of remote objects on S3 under a specific key prefix, use the list() method:

try {
	// list remote objects
	let { files, bytes } = await s3.list({ remotePath: 's3dir' });
	console.log(files);
}
catch (err) {
	// handle error here
}

This will list all the objects on S3 with a starting key prefix of s3dir, returning the array of files and total bytes used. The list() call traverses nested "directories" on S3, and also automatically manages "paging" through the results, so it returns them all in one single array (S3 only allows 1,000 objects per call, hence the need for pagination).

The files array will contain an object for each object found, with key, size and mtime properties. See list() below for more details.

To limit which objects are included in the listing, you can specify a filespec property:

try {
	// list remote gif files
	let { files, bytes } = await s3.list({ remotePath: 's3dir', filespec: /\.gif$/ });
	console.log(files);
}
catch (err) {
	// handle error here
}

This would only include S3 keys that end with .gif.

For even finer-grained control over which files are returned, you can specify a filter function, which will be invoked for each file. It will be passed a single object containing the key, size and mtime properties, and should return true to include the file or false to exclude it. Example use:

try {
	// list files larger than 1 MB
	let { files, bytes } = await s3.list({ remotePath: 's3dir', filter: function(file) { return file.size > 1048576; } });
	console.log(files);
}
catch (err) {
	// handle error here
}

Deleting Objects

To delete an object from S3, simply call delete() and specify the S3 key. Example:

try {
	// delete a remote object
	await s3.delete({ key: 's3dir/myfile.gif' });
}
catch (err) {
	// handle error here
}

To delete multiple objects in one call, use the deleteFiles() method. Set remotePath to specify a starting path, and optionally filespec to limit which files are deleted. Example:

try {
	// delete remote gif files
	await s3.deleteFiles({ remotePath: 's3dir', filespec: /\.gif$/ });
}
catch (err) {
	// handle error here
}

Please note that deleteFiles() will recursively scan nested "directories" on S3, so use with extreme care.
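
You can also combine deleteFiles() with the older property (see deleteFiles() below) to prune files by age. Here is a sketch that deletes log files older than 7 days, assuming a hypothetical s3dir/logs path:

try {
	// delete remote log files older than 7 days
	await s3.deleteFiles({ remotePath: 's3dir/logs', filespec: /\.log$/, older: '7 days' });
}
catch (err) {
	// handle error here
}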

Using Buffers

If you would rather deal with buffers instead of files, the S3 API library supports low-level putBuffer() and getBuffer() calls. This is useful if you already have a file's contents loaded into memory. Example:

let buf = fs.readFileSync( '/path/to/image.gif' );

try {
	// upload buffer
	await s3.putBuffer({ key: 's3dir/myfile.gif', value: buf });
	
	// download buffer
	let { data } = await s3.getBuffer({ key: 's3dir/myfile.gif' });
}
catch (err) {
	// handle error here
}

Remember, buffers are all held in memory, so beware of large objects that could melt your server. It is recommended that you use streams whenever possible (see next section).

Using Streams

Using streams is the preferred way of dealing with large objects, as they use very little memory. The API library provides putStream() and getStream() calls for your convenience. Here is an example of uploading a stream:

let readStream = fs.createReadStream( '/path/to/image.gif' );

try {
	// upload stream to S3
	await s3.putStream({ key: 's3dir/myfile.gif', value: readStream });
}
catch (err) {
	// handle error here
}

And here is an example of downloading a stream, and piping it to a file:

let writeStream = fs.createWriteStream( '/path/to/image.gif' );

try {
	// download stream from S3
	let { data } = await s3.getStream({ key: 's3dir/myfile.gif' });
	
	// pipe it to local file
	data.pipe( writeStream );
	
	writeStream.on('finish', function() {
		// download complete
	});
}
catch (err) {
	// handle error here
}

Note that putStream() will completely upload the entire stream to completion before returning, whereas getStream() simply starts a stream, and returns a handle to you for piping or reading.

Both stream methods can automatically compress or decompress with gzip if desired. Simply include a compress property and set it to true for upload compression, or a decompress property set to true for download decompression.
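
For example, here is a sketch of a compressed stream upload (the .gz key suffix is just a naming convention):

const fs = require('fs');
let readStream = fs.createReadStream( '/path/to/report.txt' );

try {
	// upload stream to S3 with gzip compression
	await s3.putStream({ key: 's3dir/report.txt.gz', value: readStream, compress: true });
}
catch (err) {
	// handle error here
}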

Custom S3 Params

All of the upload-related calls (i.e. put(), uploadFile(), uploadFiles(), putBuffer() and putStream()) accept an optional params object. This allows you to specify options that are passed directly to the AWS S3 API, for things like ACL and Storage Class. Example:

let opts = {
	localFile: '/path/to/image.gif', 
	key: 's3dir/myfile.gif',
	params: {
		ACL: 'public-read',
		StorageClass: 'STANDARD_IA'
	}
};

try {
	// upload file
	await s3.uploadFile(opts);
}
catch(err) {
	// handle error here
}

This would set the ACL to public-read (see AWS - Canned ACL), and the S3 storage class to "Infrequently Accessed" (a cheaper storage tier with reduced redundancy and performance -- see AWS - Storage Classes).

If you are uploading files to an S3 bucket that is hosting a static website, then you can use params to bake in headers like Content-Type and Cache-Control. Example:

let opts = {
	localFile: '/path/to/image.gif', 
	key: 's3dir/myfile.gif',
	params: {
		ContentType: 'image/gif',
		CacheControl: 'max-age=86400'
	}
};

try {
	// upload file
	await s3.uploadFile(opts);
}
catch(err) {
	// handle error here
}

You can alternatively declare some params in the class constructor, so you don't have to specify them for each API call:

let s3 = new S3({
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/',
	params: {
		ACL: 'public-read',
		StorageClass: 'STANDARD_IA'
	}
});

try {
	// upload file
	await s3.uploadFile({ localFile: '/path/to/image.gif', key: 's3dir/myfile.gif' });
}
catch(err) {
	// handle error here
}

When params are specified in both places, they are merged together, and the properties in the API call take precedence over those defined in the class instance.
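
For example, given the constructor shown above, here is a sketch that keeps the class-level ACL but overrides the storage class for one call:

try {
	// upload file -- ACL comes from the class params, StorageClass is overridden per call
	await s3.uploadFile({
		localFile: '/path/to/image.gif',
		key: 's3dir/myfile.gif',
		params: { StorageClass: 'STANDARD' }
	});
}
catch(err) {
	// handle error here
}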

For a complete list of all the properties you can specify in params, see the AWS - PutObjectRequest docs.

Logging

You can optionally attach a pixl-logger compatible logger to the API class, which can log all requests and responses, as well as errors. Example:

const Logger = require('pixl-logger');
let logger = new Logger( 'debug.log', ['hires_epoch', 'date', 'hostname', 'component', 'category', 'code', 'msg', 'data'] );

s3.attachLogAgent( logger );

Debug log entries are logged at levels 8 and 9, with the component column set to S3. Errors are logged with the component set to S3 and the code column set to one of the following:

| Error Code | Description |
|------|------|
| err_s3_get | An S3 core error attempting to fetch an object. Note that a non-existent object is not logged as an error. |
| err_s3_put | An S3 core error attempting to put an object. |
| err_s3_delete | An S3 core error attempting to delete an object. Note that a non-existent object is not logged as an error. |
| err_s3_head | An S3 core error attempting to head (ping) an object. Note that a non-existent object is not logged as an error. |
| err_s3_json | A JSON parser error when fetching a JSON record. |
| err_s3_file | A local filesystem error attempting to stat a file. |
| err_s3_dir | A local filesystem error attempting to create directories. |
| err_s3_glob | A local filesystem error attempting to glob (scan) files. |
| err_s3_stream | A read or write stream error. |
| err_s3_gzip | An error attempting to compress or decompress via gzip (zlib). |

In all cases, a verbose error description will be provided in the msg column.

Console

To log everything to the console, you can simulate a pixl-logger compatible logger like this:

s3.attachLogAgent( {
	debug: function(level, msg, data) {
		console.log( level, msg, data );
	},
	error: function(code, msg, data) {
		console.error( code, msg, data );
	}
} );

Performance Tracking

You can optionally attach a pixl-perf compatible performance tracker to the API class, which will measure all S3 calls for you. Example:

const Perf = require('pixl-perf');
let perf = new Perf();
perf.begin();

s3.attachPerfAgent( perf );

It will track the following performance metrics for you:

| Perf Metric | Description |
|------|------|
| s3_put | Measures all S3 upload operations, including put(), uploadFile(), uploadFiles(), putBuffer() and putStream(). |
| s3_get | Measures all S3 download operations, including get(), downloadFile(), downloadFiles(), getBuffer() and getStream(). |
| s3_head | Measures all calls to head(). |
| s3_list | Measures all calls to list(). |
| s3_copy | Measures all calls to copy(). |
| s3_delete | Measures all calls to delete() and deleteFiles(). |
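
To read the measurements after your S3 operations complete, end and summarize the perf tracker as usual. This is a minimal sketch assuming pixl-perf's standard end() and summarize() methods (see the pixl-perf docs):

// ...perform some S3 operations, then:
perf.end(); // end the overall perf session
console.log( perf.summarize() ); // includes the s3_* metrics listed above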

API Reference

constructor

The class constructor accepts an object containing configuration properties. The following properties are available:

| Property Name | Type | Description |
|------|------|------|
| credentials | Object | Your AWS credentials (containing accessKeyId and secretAccessKey), if required. |
| region | String | The AWS region to use for the S3 API. Defaults to us-west-1. |
| bucket | String | The S3 bucket to use by default. You can optionally override this per API call. |
| prefix | String | An optional prefix to prepend onto all S3 keys. Useful for keeping all of your app's keys under a common prefix. |
| params | Object | An optional object to set S3 object metadata. See Custom S3 Params. |
| gzip | Object | Optionally configure the gzip compression settings. See Compression. |
| timeout | Integer | The number of milliseconds to wait before killing idle sockets. The default is 5000 (5 seconds). |
| connectTimeout | Integer | The number of milliseconds to wait when initially connecting to S3. The default is 5000 (5 seconds). |
| retries | Integer | The number of retries to attempt before failing each request. The default is 50. Exponential backoff is included. |
| logger | Object | Optionally pass in a pixl-logger compatible logger here. Or use attachLogAgent(). |
| perf | Object | Optionally pass in a pixl-perf compatible perf tracker here. Or use attachPerfAgent(). |
| cache | Object | Optionally enable caching for JSON records. See Caching for details. |

Example use:

let s3 = new S3({
	bucket: 'my-bucket-uswest1',
	prefix: 'myapp/data/'
});

attachLogAgent

The attachLogAgent() method allows you to attach a pixl-logger compatible logger to your API class. It will log all requests and responses. Example use:

s3.attachLogAgent( logger );

See Logging for details on what is logged.

attachPerfAgent

The attachPerfAgent() method allows you to attach a pixl-perf compatible performance tracker to your API class. It will measure all calls to S3. Example use:

s3.attachPerfAgent( perf );

See Performance Tracking for details on what is tracked.

put

The put() method stores an object as a JSON-serialized record in S3, treating it like a key/value store. Example:

try {
	// store a record
	let { meta } = await s3.put({ key: 'users/kermit', value: { animal: 'frog', color: 'green' } });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key to store the object under. This may be prepended with a prefix if set on the class instance. |
| value | Object | (Required) The object value to store. This will be serialized to JSON behind the scenes. |
| pretty | Boolean | Optionally serialize the JSON using "pretty-printing" (formatting with multiple lines and tab indentations) by setting this to true. The default is false. |
| bucket | String | Optionally override the S3 bucket used to store the record. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

get

The get() method fetches an object that was written in JSON format (e.g. from put(), or it can just be a JSON file that was uploaded to S3), and parses the JSON for you. Example:

try {
	// fetch a record
	let { data } = await s3.get({ key: 'users/kermit' });
	console.log(data); // { "animal": "frog", "color": "green" }
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key of the object you want to get. This may be prepended with a prefix if set on the class instance. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| data | Object | The content of the JSON record, parsed and in object format. |
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

Note: When Caching is enabled and an object is fetched from the cache, the meta response object will simply contain a single cached property, set to true.

head

The head() method pings an object to check for its existence, and returns basic information about it. Example:

try {
	// ping a remote object
	let { meta } = await s3.head({ key: 's3dir/myfile.gif' });
	console.log(meta);
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key of the object you want to ping. This may be prepended with a prefix if set on the class instance. |
| nonfatal | Boolean | Set this to true to suppress errors for non-existent keys (meta will simply be null in these cases). The default is false. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

In this case the meta object is augmented with the record's size (size) and modification date (mtime):

| Property Name | Type | Description |
|------|------|------|
| meta.size | Integer | The object's size in bytes. |
| meta.mtime | Integer | The object's modification date in Epoch seconds. |

Note: The head() method bypasses the Cache. It always hits S3.
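
Combined with the nonfatal property described above, head() makes a convenient existence check. A minimal sketch:

try {
	// check for existence without throwing on a missing key
	let { meta } = await s3.head({ key: 's3dir/myfile.gif', nonfatal: true });
	if (meta) console.log("Object exists, size: " + meta.size);
	else console.log("Object does not exist.");
}
catch (err) {
	// handle other errors here
}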

list

The list() method fetches a listing of remote S3 objects that exist under a specified key prefix, and optionally match a specified filter. It will automatically loop and paginate as required, returning the full set of matched objects regardless of length. Example:

try {
	// list remote gif files
	let { files, bytes } = await s3.list({ remotePath: 's3dir', filespec: /\.gif$/ });
	console.log(files);
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| remotePath | String | The base S3 path to look for files under. This may be prepended with a prefix if set on the class instance. |
| filespec | RegExp | Optionally filter the result files using a regular expression, matched on the filenames. |
| filter | Function | Optionally provide a filter function to select which files to return. |
| older | Number | Optionally filter the S3 files based on their modification date, i.e. they must be older than the specified number of seconds. You can also specify a string here, e.g. "7 days". |
| bucket | String | Optionally specify the S3 bucket where the records are stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| files | Array | An array of file objects that matched your criteria. See below for details. |
| bytes | Integer | The total number of bytes used by all matched objects. |

The items of the files array will contain the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | The object's full S3 key (including prefix if applicable). |
| size | Integer | The object's size in bytes. |
| mtime | Integer | The object's modification date, as Epoch seconds. |
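
For example, the older property described above accepts either seconds or a human-readable string. This sketch lists files not modified in the last 7 days:

try {
	// list remote files older than 7 days
	let { files, bytes } = await s3.list({ remotePath: 's3dir', older: '7 days' });
	console.log(files);
}
catch (err) {
	// handle error here
}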

listFolders

The listFolders() method fetches a listing of remote S3 files and "subfolders" that exist under a specified key prefix. The S3 storage system doesn't really have a folder tree, but it fakes one by indexing keys by a delimiter (typically slash). This method fetches one subfolder level only -- it does not recurse for nested folders. Example:

try {
	// list remote folders and files
	let { folders, files } = await s3.listFolders({ remotePath: 's3dir' });
	console.log(folders, files);
}
catch (err) {
	// handle error here
}

The folders will be an array of subfolder paths, and the files are all files from the current folder level (see below). Note that this API does not recurse for nested folders, nor does it paginate beyond 1,000 items. It is really designed for use in an explorer UI only.

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| remotePath | String | The base S3 path to look for folders under. This may be prepended with a prefix if set on the class instance. |
| delimiter | String | Optionally override the delimiter for directory indexing. Defaults to /. |
| bucket | String | Optionally specify the S3 bucket where the folders reside. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| folders | Array | An array of S3 path prefixes for subfolders just under the current level. |
| files | Array | An array of file objects at the current folder level. See below for details. |

The items of the files array will contain the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | The object's full S3 key (including prefix if applicable). |
| size | Integer | The object's size in bytes. |
| mtime | Integer | The object's modification date, as Epoch seconds. |

listBuckets

The listBuckets() method fetches the complete list of S3 buckets in your AWS account. It accepts no options. Example:

try {
	// list buckets
	let { buckets } = await s3.listBuckets();
	console.log(buckets);
}
catch (err) {
	// handle error here
}

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| buckets | Array | An array of S3 bucket names. |

walk

The walk() method fires an iterator for every remote S3 object that exists under a specified key prefix, and optionally matches a specified filter. It will automatically loop and paginate as required. The iterator is fired as a synchronous call. Example:

try {
	// find remote gif files
	let files = [];
	await s3.walk({ remotePath: 's3dir', filespec: /\.gif$/, iterator: function(file) { files.push(file); } });
	console.log(files);
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| remotePath | String | The base S3 path to look for files under. This may be prepended with a prefix if set on the class instance. |
| filespec | RegExp | Optionally filter the result files using a regular expression, matched on the filenames. |
| filter | Function | Optionally provide a filter function to select which files to return. |
| iterator | Function | A synchronous function that is called for every remote S3 file. It is passed an object containing file metadata (see below). |
| older | Number | Optionally filter the S3 files based on their modification date, i.e. they must be older than the specified number of seconds. You can also specify a string here, e.g. "7 days". |
| bucket | String | Optionally specify the S3 bucket where the records are stored. This is usually set in the class constructor. |

Each item object passed to the iterator will contain the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | The object's full S3 key (including prefix if applicable). |
| size | Integer | The object's size in bytes. |
| mtime | Integer | The object's modification date, as Epoch seconds. |

copy

The copy() method copies one S3 object to another location. This API can copy between buckets as well. Example:

try {
	// copy an object
	let { meta } = await s3.copy({ sourceKey: 'users/oldkermit', key: 'users/newkermit' });
}
catch(err) {
	// handle error here
}

To copy an object between buckets, include a sourceBucket property. The destination bucket is always specified via bucket (which may be set on your class instance or in the copy API). Example:

try {
	// copy an object between buckets
	let { meta } = await s3.copy({ sourceBucket: 'oldbucket', sourceKey: 'users/oldkermit', bucket: 'newbucket', key: 'users/newkermit' });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| sourceKey | String | (Required) The S3 key to copy from. This may be prepended with a prefix if set on the class instance. |
| key | String | (Required) The S3 key to copy the object to. This may be prepended with a prefix if set on the class instance. |
| sourceBucket | String | Optionally override the S3 bucket used to read the source record. This defaults to the class bucket parameter. |
| bucket | String | Optionally override the S3 bucket used to store the destination record. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

move

The move() method moves one S3 object to another location. Essentially, it performs a copy() followed by a delete(). This can move objects between buckets as well. Example:

try {
	// move an object
	let { meta } = await s3.move({ sourceKey: 'users/oldkermit', key: 'users/newkermit' });
}
catch(err) {
	// handle error here
}

To move an object between buckets, use sourceBucket. The destination bucket is always specified via bucket (which may be set on your class instance or in the move API call). Example:

try {
	// move an object between buckets
	let { meta } = await s3.move({ sourceBucket: 'oldbucket', sourceKey: 'users/oldkermit', bucket: 'newbucket', key: 'users/newkermit' });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| sourceKey | String | (Required) The S3 key to move from. This may be prepended with a prefix if set on the class instance. |
| key | String | (Required) The S3 key to move the object to. This may be prepended with a prefix if set on the class instance. |
| sourceBucket | String | Optionally override the S3 bucket used to read the source record. This defaults to the class bucket parameter. |
| bucket | String | Optionally override the S3 bucket used to store the destination record. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

delete

The delete() method deletes a single object from S3 given its key. Please use caution here, as there is no way to undo a delete -- we don't use versioned buckets. Example:

try {
	// delete a remote object
	let { meta } = await s3.delete({ key: 's3dir/myfile.gif' });
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key of the object you want to delete. This may be prepended with a prefix if set on the class instance. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

Note: This will also remove the object from the Cache, if enabled.

uploadFile

The uploadFile() method uploads a file from the local filesystem to an object in S3. This uses streams and multi-part uploads in the background, so it can handle files of any size while using very little memory. Example:

try {
	// upload file
	let { meta } = await s3.uploadFile({ localFile: '/path/to/image.gif', key: 's3dir/myfile.gif' });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| localFile | String | (Required) A path to the file on local disk. |
| key | String | (Required) The S3 key of the object. This may be prepended with a prefix if set on the class instance. |
| compress | Boolean | Set this to true to automatically compress the file during upload. Defaults to false. See Compression. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

Note that you can omit the filename portion of the key property if you want. Specifically, if the key ends with a slash (/) this will trigger the library to automatically append the local filename to the end of the S3 key.
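
For example, this sketch stores the object as s3dir/image.gif:

try {
	// upload file, letting the library append the local filename
	await s3.uploadFile({ localFile: '/path/to/image.gif', key: 's3dir/' });
	// the object is stored as: s3dir/image.gif
}
catch(err) {
	// handle error here
}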

downloadFile

The downloadFile() method downloads an object from S3, and saves it to a local file on disk. The local file's parent directories will be automatically created if needed. This uses streams in the background, so it can handle files of any size while using very little memory. Example:

try {
	// download file
	let { meta } = await s3.downloadFile({ key: 's3dir/myfile.gif', localFile: '/path/to/image.gif' });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key of the object to download. This may be prepended with a prefix if set on the class instance. |
| localFile | String | (Required) A path to the destination file on local disk. |
| decompress | Boolean | Set this to true to automatically decompress the file during download. Defaults to false. See Compression. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

Note that you can omit the filename portion of the localFile property if you want. Specifically, if the localFile ends with a slash (/) this will trigger the library to automatically append the filename from the S3 key.
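
For example, this sketch saves the download as /path/to/myfile.gif:

try {
	// download file, letting the library append the filename from the S3 key
	await s3.downloadFile({ key: 's3dir/myfile.gif', localFile: '/path/to/' });
	// the file is saved as: /path/to/myfile.gif
}
catch(err) {
	// handle error here
}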

uploadFiles

The uploadFiles() method recursively uploads multiple files / directories from the local filesystem to S3. This uses streams and multi-part uploads in the background, so it can handle files of any size while using very little memory. Example:

try {
	// upload selected files
	let { files } = await s3.uploadFiles({ localPath: '/path/to/images', remotePath: 's3dir/uploadedimages', filespec: /\.gif$/ });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| localPath | String | (Required) The base filesystem path to find files under. Should resolve to a folder. |
| remotePath | String | (Required) The base S3 path to store files under. This may be prepended with a prefix if set on the class instance. |
| filespec | RegExp | Optionally filter the local files using a regular expression, applied to the filenames. |
| threads | Integer | Optionally increase the threads to improve performance (don't combine with compress). |
| compress | Boolean | Set this to true to automatically compress all files during upload. Defaults to false. See Compression. |
| suffix | String | Optionally append a suffix to every destination S3 key, e.g. .gz for compressed files. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| files | Array | An array of files that were uploaded. Each item in the array is a string containing the file path. |

downloadFiles

The downloadFiles() method recursively downloads multiple files / directories from S3 to the local filesystem. Local parent directories will be automatically created if needed. This uses streams in the background, so it can handle files of any size while using very little memory. Example:

try {
	// download selected files
	let { files, bytes } = await s3.downloadFiles({ remotePath: 's3dir/uploadedimages', localPath: '/path/to/images', filespec: /\.gif$/ });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| remotePath | String | (Required) The base S3 path to fetch files from. This may be prepended with a prefix if set on the class instance. |
| localPath | String | (Required) The local filesystem path to save files under. Parent directories will automatically be created if needed. |
| filespec | RegExp | Optionally filter the S3 files using a regular expression, matched on the filenames. |
| threads | Integer | Optionally increase the threads to improve performance (don't combine with decompress). |
| decompress | Boolean | Set this to true to automatically decompress all files during download. Defaults to false. See Compression. |
| strip | RegExp | Optionally strip a suffix from every destination filename, e.g. /\.gz$/ to strip the .gz suffix off compressed files. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| files | Array | An array of files that were downloaded. Each item in the array is an object with key, size and mtime properties. |
| bytes | Integer | The total number of bytes downloaded. |

deleteFiles

The deleteFiles() method recursively deletes multiple files / directories from S3. Please use extreme caution here, as there is no way to undo deletes -- we don't use versioned buckets. Example:

try {
	// delete selected files
	let { files, bytes } = await s3.deleteFiles({ remotePath: 's3dir/uploadedimages', filespec: /\.gif$/ });
}
catch(err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| remotePath | String | (Required) The base S3 path to delete files from. This may be prepended with a prefix if set on the class instance. |
| filespec | RegExp | Optionally filter the S3 files using a regular expression, matched on the filenames. |
| older | Mixed | Optionally filter the S3 files based on their modification date, i.e. they must be older than the specified number of seconds. You can also specify a string here, e.g. "7 days". |
| threads | Integer | Optionally increase the threads to improve performance at the cost of additional HTTP connections. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| files | Array | An array of files that were deleted. Each item in the array is an object with key, size and mtime properties. |
| bytes | Integer | The total number of bytes deleted. |

putBuffer

The putBuffer() method uploads a Node.js Buffer to S3, given a key. Example:

let buf = fs.readFileSync( '/path/to/image.gif' );

try {
	// upload buffer
	let { meta } = await s3.putBuffer({ key: 's3dir/myfile.gif', value: buf });
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key to store the object under. This may be prepended with a prefix if set on the class instance. |
| value | Buffer | (Required) The buffer value to store. |
| compress | Boolean | Set this to true to automatically compress the buffer during upload. Defaults to false. See Compression. |
| bucket | String | Optionally override the S3 bucket used to store the record. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

getBuffer

The getBuffer() method fetches an S3 object, and returns a Node.js Buffer. Beware of memory utilization with large objects, as buffers are stored entirely in memory. Example:

try {
	// download buffer
	let { data } = await s3.getBuffer({ key: 's3dir/myfile.gif' });
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key of the object you want to get. This may be prepended with a prefix if set on the class instance. |
| decompress | Boolean | Set this to true to automatically decompress the buffer during download. Defaults to false. See Compression. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| data | Buffer | The content of the S3 record, in buffer format. |
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

putStream

The putStream() method uploads a Node.js Stream to S3, given a key. Example:

let readStream = fs.createReadStream( '/path/to/image.gif' );

try {
	// upload stream to S3
	await s3.putStream({ key: 's3dir/myfile.gif', value: readStream });
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key to store the object under. This may be prepended with a prefix if set on the class instance. |
| value | Stream | (Required) The Node.js stream to upload. |
| compress | Boolean | Set this to true to automatically compress the stream during upload. Defaults to false. See Compression. |
| bucket | String | Optionally override the S3 bucket used to store the record. This is usually set in the class constructor. |
| params | Object | Optionally specify parameters to the S3 API, e.g. ACL and Storage Class. See Custom S3 Params. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

getStream

The getStream() method fetches an S3 object, and returns a Node.js readable stream for handling in your code. Specifically, the data is not downloaded in the scope of the API call -- a stream is merely started. You are expected to handle the stream yourself, i.e. pipe it to another stream, or read chunks off it by hand. Here is an example of piping it to a file:

let writeStream = fs.createWriteStream( '/path/to/image.gif' );

try {
	// start stream from S3
	let { data } = await s3.getStream({ key: 's3dir/myfile.gif' });
	
	// pipe it to local file
	data.pipe( writeStream );
	
	writeStream.on('finish', function() {
		// download complete
	});
}
catch (err) {
	// handle error here
}

The method accepts an object containing the following properties:

| Property Name | Type | Description |
|------|------|------|
| key | String | (Required) The S3 key of the object you want to get. This may be prepended with a prefix if set on the class instance. |
| decompress | Boolean | Set this to true to automatically decompress the stream during download. Defaults to false. See Compression. |
| bucket | String | Optionally specify the S3 bucket where the record is stored. This is usually set in the class constructor. |

The response object will contain the following keys, which you can destructure into variables as shown above:

| Property Name | Type | Description |
|------|------|------|
| data | Stream | The stream of the S3 contents, ready for piping. |
| meta | Object | A raw metadata object that is sent back from the AWS S3 service. It contains information about the request, used for debugging and troubleshooting purposes. |

License

The MIT License (MIT)

Copyright (c) 2023 - 2024 Joseph Huckaby and PixlCore.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.