
salient-maps

Various open source saliency map models.


Developer Usage

Example using the Deep Gaze model.

const models = require('salient-maps');
const cv = require('opencv4nodejs');

// load() returns the model class; instantiate it with the desired map size
const Deep = models.deep.load();
const deep = new Deep({ width: 200, height: 200 });

// compute a saliency map from an image loaded via opencv4nodejs
const salientMap = deep.computeSaliency(cv.imread('myimage.jpg'));
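
To inspect the result, you can write the map back out with opencv4nodejs. This is only a sketch: it assumes computeSaliency returns an opencv4nodejs Mat, and that a floating-point map is scaled 0..1 (neither is confirmed here).

// persist the saliency map for inspection; scale a floating-point map to 8-bit first
const toWrite = salientMap.type === cv.CV_32F
  ? salientMap.convertTo(cv.CV_8U, 255)
  : salientMap;
cv.imwrite('./salient-map.png', toWrite);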

Options

Option | Type   | Default | Info
width  | number | 200     | Width of the saliency map. It's not recommended to go above 300 or below 100.
height | number | 200     | Height of the saliency map. It's not recommended to go above 300 or below 100.

What to do with a salient map?

While it's entirely up to you how you use these maps, the original intent of this project was to pair it with the salient-autofocus project to provide fast image auto-focus capabilities.


Models

ID       | License | Description | Usage
deep     | MIT     | Deep Gaze port of the FASA (Fast, Accurate, and Size-Aware Salient Object Detection) algorithm. | Recommended for most static usage where high accuracy is important and near-realtime performance is sufficient (tunable by reducing map size). May not be ideal for video unless you drop the map size to 150^2 or lower.
deep-rgb | MIT     | A variant of the Deep Gaze port that uses the RGB colour space instead of LAB. | Not recommended; kept for comparison, though it can perform better in some cases.
spectral | BSD     | A port of the Spectral Residual model from OpenCV Contributions. | Excellent performance and great for video, at the cost of quality/accuracy.
fine     | BSD     | A port of the Fine Grained model from OpenCV Contributions. | Interesting for testing but unsuitable for realtime applications.
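
For example, following the same pattern as the Developer Usage snippet above, a video workload might pick the spectral model with a reduced map size. This is a sketch; the frame filename below is purely illustrative.

const models = require('salient-maps');
const cv = require('opencv4nodejs');

// Spectral Residual model: fast and well suited to video, at some cost in accuracy
const Spectral = models.spectral.load();
const spectral = new Spectral({ width: 150, height: 150 });

// 'frame-0001.jpg' stands in for however you obtain video frames
const salientMap = spectral.computeSaliency(cv.imread('frame-0001.jpg'));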

Want to contribute?

Installation

Typical local setup.

git clone git@github.com:asilvas/salient-maps.git
cd salient-maps
npm i

Import Assets

By default, testing looks at trainer/image-source, so you can put any images you like there, or follow the instructions below to import a known dataset.

  1. Download and extract CAT2000
  2. Run node trainer/scripts/import-CAT2000.js {path-to-CAT2000}

The benefit of using the above script is that it separates the (optional) truth maps into trainer/image-truth.

Preview

You can run visual previews of the available saliency maps against the dataset via:

npm run preview

Benchmark

Compare performance data between models:

npm run benchmark

Export

Also available is the ability to export the salient map data to the trainer/image-saliency folder, broken down by saliency model. This permits reviewing maps from disk, and is a convenient format for submission to the MIT Saliency Benchmark for quality analysis against other models.

npm run export

License

While this project falls under an MIT license, each of the models is subject to its own license. See Models for details.
