@electron/asar v4.0.0
@electron/asar - Electron Archive
Asar is a simple, extensible archive format; it works like tar in that it concatenates
all files together without compression, while supporting random access.
Features
- Supports random access
- Uses JSON to store files' information
- Very easy to write a parser
Command line utility
Install
This module requires Node 22.12.0 or later.
$ npm install --engine-strict @electron/asar
Usage
$ asar --help
Usage: asar [options] [command]

Commands:
  pack|p <dir> <output>
    create asar archive
  list|l <archive>
    list files of asar archive
  extract-file|ef <archive> <filename>
    extract one file from archive
  extract|e <archive> <dest>
    extract archive

Options:
  -h, --help     output usage information
  -V, --version  output the version number
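For example, using the commands listed above to pack a directory, inspect the archive, and extract it again (app and app.asar are placeholder names):

$ asar pack app app.asar
$ asar list app.asar
$ asar extract app.asar ./unpacked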
Excluding multiple resources from being packed
Given:
    app
(a) ├── x1
(b) ├── x2
(c) ├── y3
(d) │   ├── x1
(e) │   └── z1
(f) │       └── x2
(g) └── z4
(h)     └── w1

Exclude: a, b
$ asar pack app app.asar --unpack-dir "{x1,x2}"

Exclude: a, b, d, f
$ asar pack app app.asar --unpack-dir "**/{x1,x2}"

Exclude: a, b, d, f, h
$ asar pack app app.asar --unpack-dir "{**/x1,**/x2,z4/w1}"
Using programmatically
Example
import { createPackage } from '@electron/asar';
const src = 'some/path/';
const dest = 'name.asar';
await createPackage(src, dest);
console.log('done.');
Please note that there is currently no error handling provided!
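A minimal sketch of guarding the call yourself in the meantime, assuming createPackage rejects its promise on failure (same src and dest as above):

try {
  await createPackage(src, dest);
} catch (err) {
  // Surface whatever went wrong (e.g. a missing source directory).
  console.error('packing failed:', err);
}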
Transform
You can pass in a transform option: a function that either returns nothing or a
stream.Transform. The latter will be applied to the files that end up in the
.asar file, to transform them (e.g. to compress them).
import { createPackageWithOptions } from '@electron/asar';
import { Transform } from 'stream';
const src = 'some/path/';
const dest = 'name.asar';
// Return a stream.Transform (here a pass-through) to rewrite a file's
// contents, or return nothing to leave that file as-is.
function transform (filename) {
  return new Transform({
    transform (chunk, encoding, callback) { callback(null, chunk); }
  });
}
await createPackageWithOptions(src, dest, { transform });
console.log('done.');
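For example, a minimal sketch that gzips every file as it is packed; zlib's Gzip streams are stream.Transform instances, so they can be returned directly (note that whatever later reads the archive must then gunzip the contents):

import { createPackageWithOptions } from '@electron/asar';
import zlib from 'zlib';

await createPackageWithOptions('some/path/', 'name.asar', {
  // Called once per file path; every file gets gzipped.
  transform: () => zlib.createGzip()
});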
Format
Asar uses Pickle to safely serialize binary values to file.
The format of asar is very flat:
| UInt32: header_size | String: header | Bytes: file1 | ... | Bytes: file42 |

The header_size and header are serialized with the Pickle class, and
header_size's Pickle object is 8 bytes.
The header is a JSON string, and the header_size is the size of header's
Pickle object.
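As a minimal reader sketch of this layout (assuming a Pickle object is a UInt32 payload length followed by the payload, and that the header Pickle wraps a length-prefixed string):

import fs from 'fs';

const fd = fs.openSync('name.asar', 'r');

// First Pickle object (8 bytes): UInt32 payload length, then the
// UInt32 header_size value itself.
const sizeBuf = Buffer.alloc(8);
fs.readSync(fd, sizeBuf, 0, 8, 0);
const headerSize = sizeBuf.readUInt32LE(4);

// Second Pickle object (headerSize bytes): UInt32 payload length,
// UInt32 string length, then the JSON header.
const headerBuf = Buffer.alloc(headerSize);
fs.readSync(fd, headerBuf, 0, headerSize, 8);
const strLen = headerBuf.readUInt32LE(4);
const header = JSON.parse(headerBuf.subarray(8, 8 + strLen).toString('utf8'));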
The structure of header is something like this:
{
  "files": {
    "tmp": {
      "files": {}
    },
    "usr": {
      "files": {
        "bin": {
          "files": {
            "ls": {
              "offset": "0",
              "size": 100,
              "executable": true,
              "integrity": {
                "algorithm": "SHA256",
                "hash": "...",
                "blockSize": 1024,
                "blocks": ["...", "..."]
              }
            },
            "cd": {
              "offset": "100",
              "size": 100,
              "executable": true,
              "integrity": {
                "algorithm": "SHA256",
                "hash": "...",
                "blockSize": 1024,
                "blocks": ["...", "..."]
              }
            }
          }
        }
      }
    },
    "etc": {
      "files": {
        "hosts": {
          "offset": "200",
          "size": 32,
          "integrity": {
            "algorithm": "SHA256",
            "hash": "...",
            "blockSize": 1024,
            "blocks": ["...", "..."]
          }
        }
      }
    }
  }
}

offset and size record the information needed to read a file from the archive.
The offset starts from 0, so you have to add the size of header_size and
header to it yourself to get the real offset of the file.
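Continuing the reader sketch above: the two Pickle objects occupy 8 + headerSize bytes, so that is the base to add to each entry's offset.

// Read /etc/hosts from the example header above.
const entry = header.files.etc.files.hosts;
const realOffset = 8 + headerSize + Number(entry.offset);
const contents = Buffer.alloc(entry.size);
fs.readSync(fd, contents, 0, entry.size, realOffset);
console.log(contents.toString('utf8'));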
offset is a UINT64 number represented as a string, because there is no way to
precisely represent UINT64 with a JavaScript Number. size is a JavaScript
Number that is no larger than Number.MAX_SAFE_INTEGER, which has a value of
9007199254740991 and is about 8PB in size. We didn't store size as UINT64
because file size in Node.js is represented as Number, and it is not safe to
convert Number to UINT64.
integrity is an object consisting of a few keys:
- A hashing algorithm; currently only SHA256 is supported.
- A hex encoded hash value representing the hash of the entire file.
- An array of hex encoded hashes for the blocks of the file, i.e. for a
blockSize of 4KB this array contains the hash of every block if you split
the file into N 4KB blocks.
- An integer value blockSize representing the size in bytes of each block in
the blocks hashes above.
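A minimal sketch of checking those hashes, reusing entry and contents from the reader sketch above:

import crypto from 'crypto';

const { hash, blockSize, blocks } = entry.integrity;
// Whole-file check; the algorithm field is currently always SHA256.
const whole = crypto.createHash('sha256').update(contents).digest('hex');
if (whole !== hash) throw new Error('file hash mismatch');

// Each blockSize-byte slice must match its entry in blocks.
blocks.forEach((expected, i) => {
  const block = contents.subarray(i * blockSize, (i + 1) * blockSize);
  const actual = crypto.createHash('sha256').update(block).digest('hex');
  if (actual !== expected) throw new Error(`block ${i} hash mismatch`);
});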